
Chapter 1

(auto-generated TOC)

2/1692

Chapter 2

(auto-generated TOC)

3/1692

Chapter 9

(auto-generated TOC)

10/1692

Chapter 13

(auto-generated TOC)

14/1692

Chapter 14

(auto-generated TOC)

15/1692

Chapter 16

(auto-generated TOC)

17/1692

Chapter 22

(auto-generated TOC)

23/1692

Chapter 24

(auto-generated TOC)

shared/toc.md

25/1692

Image separating from the next chapter

26/1692

A brief introduction

(automatically generated title slide)

27/1692

A brief introduction

  • This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials

  • Credit is also due to multiple contributors — thank you!

  • I recommend using the Slack Chat to help you ...

  • ... And be comfortable spending some time reading the Kubernetes documentation ...

  • ... And looking for answers on StackOverflow and other outlets

k8smastery/intro.md

28/1692

Hands on, you shall practice

  • Nobody ever became a Jedi by spending their lives reading Wookieepedia

  • Likewise, it will take more than merely reading these slides to make you an expert

  • These slides include tons of exercises and examples

  • They assume that you have access to a Kubernetes cluster

k8smastery/intro.md

29/1692

Image separating from the next chapter

30/1692

Pre-requirements

(automatically generated title slide)

31/1692

Pre-requirements

  • Be comfortable with the UNIX command line

    • navigating directories

    • editing files

    • a little bit of bash-fu (environment variables, loops)

  • Some Docker knowledge

    • docker run, docker ps, docker build

    • ideally, you know how to write a Dockerfile and build it
      (even if it's a FROM line and a couple of RUN commands)

  • It's totally OK if you are not a Docker expert!
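
As a rough calibration, the "bash-fu" we mean is about this much (a hypothetical snippet, not part of the course material):

    # set an environment variable and reuse it
    export IMAGE=alpine
    # loop over a few values
    for TAG in 3.18 3.19 latest; do
      echo "would pull $IMAGE:$TAG"
    done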

k8smastery/prereqs.md

32/1692

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.

Misattributed to Benjamin Franklin

(Probably inspired by Chinese Confucian philosopher Xunzi)

k8smastery/prereqs.md

33/1692

Hands-on exercises

  • The whole workshop is hands-on, with "exercises"

  • You are invited to reproduce these exercises with me

  • All exercises are identified with a dashed box plus keyboard icon

k8smastery/prereqs.md

34/1692

Image separating from the next chapter

35/1692

What and why of orchestration

(automatically generated title slide)

36/1692

What and why of orchestration

  • There are many computing orchestrators

  • They make decisions about when and where to "do work"

  • We've done this since the dawn of computing: Mainframe schedulers, Puppet, Terraform, AWS, Mesos, Hadoop, etc.

  • Since 2014 we've had a resurgence of new orchestration projects because:

    1. Popularity of distributed computing

    2. Docker containers as an app package and isolated runtime

  • We needed "many servers to act like one, and run many containers"

  • And the Container Orchestrator was born

k8smastery/orchestration.md

43/1692

Container orchestrator

  • Many open source projects have been created in the last 5 years to:

    • Schedule running of containers on servers

    • Dispatch them across many nodes

    • Monitor and react to container and server health

    • Provide storage, networking, proxy, security, and logging features

    • Do all this in a declarative way, rather than imperative

    • Provide APIs to allow extensibility and management

k8smastery/orchestration.md

49/1692

Major container orchestration projects

  • Kubernetes, aka K8s

  • Docker Swarm (and Swarm classic)

  • Apache Mesos/Marathon

  • Cloud Foundry

  • Amazon ECS (not OSS, AWS-only)

  • HashiCorp Nomad

  • Many of these tools run on top of Docker Engine

  • Kubernetes is the one orchestrator with many distributions

k8smastery/orchestration.md

52/1692

Kubernetes distributions

  • Kubernetes "vanilla upstream" (not a distribution)

  • Cloud-Managed distros: AKS, GKE, EKS, DOK...

  • Self-Managed distros: RedHat OpenShift, Docker Enterprise, Rancher, Canonical Charmed, openSUSE Kubic...

  • Vanilla installers: kubeadm, kops, kubicorn...

  • Local dev/test: Docker Desktop, minikube, microK8s

  • CI testing: kind

  • Special builds: Rancher k3s

  • And Many, many more... (86 as of June 2019)

k8smastery/orchestration.md

60/1692

Image separating from the next chapter

61/1692

Kubernetes concepts

(automatically generated title slide)

62/1692

Kubernetes concepts

  • Kubernetes is a container management system

  • It runs and manages containerized applications on a cluster (one or more servers)

  • Often this is simply called "container orchestration"

  • Sometimes shortened to Kube or K8s ("Kay-eights" or "Kates")

k8smastery/concepts-k8s.md

63/1692

Basic things we can ask Kubernetes to do

  • Start 5 containers using image atseashop/api:v1.3

  • Place an internal load balancer in front of these containers

  • Start 10 containers using image atseashop/webfront:v1.3

  • Place a public load balancer in front of these containers

  • It's Black Friday (or Christmas) and traffic spikes: grow our cluster and add containers

  • New release! Replace my containers with the new image atseashop/webfront:v1.4

  • Keep processing requests during the upgrade; update my containers one at a time
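
These requests map fairly directly onto kubectl commands. Here is a rough sketch (the deployment names api and webfront are illustrative, taken from the image names above; these are not the course's exact commands):

    # start 5 containers using image atseashop/api:v1.3
    kubectl create deployment api --image=atseashop/api:v1.3
    kubectl scale deployment api --replicas=5
    # place an internal load balancer in front of these containers
    kubectl expose deployment api --port=80
    # start 10 containers using image atseashop/webfront:v1.3, behind a public load balancer
    kubectl create deployment webfront --image=atseashop/webfront:v1.3
    kubectl scale deployment webfront --replicas=10
    kubectl expose deployment webfront --port=80 --type=LoadBalancer
    # traffic spikes: add containers
    kubectl scale deployment webfront --replicas=30
    # new release: rolling update to the new image
    kubectl set image deployment webfront webfront=atseashop/webfront:v1.4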

k8smastery/concepts-k8s.md

71/1692

Other things that Kubernetes can do for us

  • Basic autoscaling

  • Blue/green deployment, canary deployment

  • Long running services, but also batch (one-off) and CRON-like jobs

  • Overcommit our cluster and evict low-priority jobs

  • Run services with stateful data (databases etc.)

  • Fine-grained access control defining what can be done by whom on which resources

  • Integrating third party services (service catalog)

  • Automating complex tasks (operators)

k8smastery/concepts-k8s.md

72/1692

Kubernetes architecture

k8smastery/concepts-k8s.md

73/1692

Kubernetes architecture

  • Ha ha ha ha

  • OK, I was trying to scare you, it's much simpler than that ❤️

k8smastery/concepts-k8s.md

75/1692

Credits

  • The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI

    (Courtesy of Yongbok Kim)

  • The second one is a simplified representation of a Kubernetes cluster

    (Courtesy of Imesh Gunaratne)

k8smastery/concepts-k8s.md

77/1692

Kubernetes architecture: the nodes

  • The nodes executing our containers run a collection of services:

    • a container Engine (typically Docker)

    • kubelet (the "node agent")

    • kube-proxy (a necessary but not sufficient network component)

  • Nodes were formerly called "minions"

    (You might see that word in older articles or documentation)

k8smastery/concepts-k8s.md

78/1692

Kubernetes architecture: the control plane

  • The Kubernetes logic (its "brains") is a collection of services:

    • the API server (our point of entry to everything!)

    • core services like the scheduler and controller manager

    • etcd (a highly available key/value store; the "database" of Kubernetes)

  • Together, these services form the control plane of our cluster

  • The control plane is also called the "master"

k8smastery/concepts-k8s.md

79/1692

Running the control plane on special nodes

  • It is common to reserve a dedicated node for the control plane

    (Except for single-node development clusters, like when using minikube)

  • This node is then called a "master"

    (Yes, this is ambiguous: is the "master" a node, or the whole control plane?)

  • Normal applications are restricted from running on this node

    (By using a mechanism called "taints")

  • When high availability is required, each service of the control plane must be resilient

  • The control plane is then replicated on multiple nodes

    (This is sometimes called a "multi-master" setup)

k8smastery/concepts-k8s.md

81/1692

Running the control plane outside containers

  • The services of the control plane can run in or out of containers

  • For instance: since etcd is a critical service, some people deploy it directly on a dedicated cluster (without containers)

    (This is illustrated on the first "super complicated" schema)

  • In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible

    (We only "see" a Kubernetes API endpoint)

  • In that case, there is no "master node"

For this reason, it is more accurate to say "control plane" rather than "master."

k8smastery/concepts-k8s.md

82/1692

Do we need to run Docker at all?

No!

  • By default, Kubernetes uses the Docker Engine to run containers

  • Or leverage other pluggable runtimes through the Container Runtime Interface

  • We could also use rkt ("Rocket") from CoreOS (deprecated)

  • containerd: maintained by Docker, IBM, and community

  • Used by Docker Engine, microK8s, k3s, GKE, and standalone; has ctr CLI

  • CRI-O: maintained by Red Hat, SUSE, and community; a lightweight alternative to containerd

  • Used by OpenShift and Kubic, version matched to Kubernetes

  • And more

k8smastery/concepts-k8s.md

89/1692

Do we need to run Docker at all?

Yes!

  • In this course, we'll run our apps on a single node first

  • We may need to build images and ship them around

  • We can do these things without Docker
    (and get diagnosed with NIH¹ syndrome)

  • Docker is still the most stable container engine today
    (but other options are maturing very quickly)

¹Not Invented Here

k8smastery/concepts-k8s.md

91/1692

Do we need to run Docker at all?

  • On our development environments, CI pipelines ... :

    Yes, almost certainly

  • On our production servers:

    Yes (today)

    Probably not (in the future)

More information about CRI on the Kubernetes blog

k8smastery/concepts-k8s.md

92/1692

Interacting with Kubernetes

  • We will interact with our Kubernetes cluster through the Kubernetes API

  • The Kubernetes API is (mostly) RESTful

  • It allows us to create, read, update, delete resources

  • A few common resource types are:

    • node (a machine — physical or virtual — in our cluster)

    • pod (group of containers running together on a node)

    • service (stable network endpoint to connect to one or multiple containers)
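
Since kubectl is only a client of that API, we can also query the API directly. A quick way to see raw API responses (a side exploration, assuming kubectl is already configured):

    # ask the API server for the node list, bypassing kubectl's formatting
    kubectl get --raw /api/v1/nodes | head
    # or run a local authenticating proxy and use any HTTP client
    kubectl proxy --port=8001 &
    curl http://localhost:8001/api/v1/namespaces/default/pods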

k8smastery/concepts-k8s.md

93/1692

Pods

  • Pods are a new abstraction!

  • A pod can have multiple containers working together

  • (But you usually only have one container per pod)

  • A pod is our smallest deployable unit; Kubernetes can't manage containers directly

  • IP addresses are associated with pods, not with individual containers

  • Containers in a pod share localhost, and can share volumes

  • Multiple containers in a pod are deployed together

  • In reality, Docker doesn't know a pod, only containers/namespaces/volumes

k8smastery/concepts-k8s.md

99/1692

Credits

  • The first diagram is courtesy of Lucas Käldström, in this presentation

    • it's one of the best Kubernetes architecture diagrams available!
  • The second diagram is courtesy of Weaveworks

    • a pod can have multiple containers working together

    • IP addresses are associated with pods, not with individual containers

Both diagrams used with permission.

k8smastery/concepts-k8s.md

100/1692

Image separating from the next chapter

101/1692

Getting a Kubernetes cluster for learning

(automatically generated title slide)

102/1692

Getting a Kubernetes cluster for learning

  • Best: Get an environment locally

    • Docker Desktop (Win/macOS/Linux), Rancher Desktop (Win/macOS/Linux), or microk8s (Linux)
    • Small setup effort; free; flexible environments
    • Requires 2GB+ of memory
  • Good: Set up a cloud Linux host to run microk8s

    • Great if you don't have the local resources to run Kubernetes
    • Small setup effort; only free for a while
    • My $50 DigitalOcean coupon lets you run Kubernetes free for a month
  • Last choice: Use a browser-based solution

    • Low setup effort; but host is short-lived and has limited resources
    • Not all hands-on examples will work in the browser sandbox
  • For all environments, we'll use the shpod container for tools

k8smastery/install-summary.md

106/1692

Image separating from the next chapter

107/1692

Docker Desktop (Windows 10/macOS)

(automatically generated title slide)

108/1692

Docker Desktop (Windows 10/macOS)

  • Docker Desktop (DD) is great for a local dev/test setup

  • Requires modern macOS or Windows 10 Pro/Ent/Edu (no Home)

  • Requires Hyper-V, and disables VirtualBox
  • Download Windows or macOS versions and install

  • For Windows, ensure you pick "Linux Containers" mode

  • Once running, enable Kubernetes in Settings/Preferences

k8smastery/install-docker-desktop.md
111/1692

Docker Desktop for Windows

k8smastery/install-docker-desktop.md

112/1692

Enable Kubernetes in settings

k8smastery/install-docker-desktop.md

113/1692

No Kubernetes option? Switch to Linux mode

k8smastery/install-docker-desktop.md

114/1692

Check your connection in a terminal
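
A minimal check looks something like this (a sketch; the original slide showed a terminal screenshot):

    # the Docker engine answers
    docker version
    # kubectl can reach the cluster that Docker Desktop enabled
    kubectl get nodes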

k8smastery/install-docker-desktop.md

115/1692

Docker Desktop for macOS

k8smastery/install-docker-desktop.md

116/1692

Enable Kubernetes in preferences

k8smastery/install-docker-desktop.md

117/1692

Check your connection in a terminal

k8smastery/install-docker-desktop.md

118/1692

Image separating from the next chapter

119/1692

minikube (Windows 10 Home)

(automatically generated title slide)

120/1692

minikube (Windows 10 Home)

  • A good local install option if you can't run Docker Desktop

  • Inspired by Docker Toolbox

  • Will create a local VM and configure latest Kubernetes
  • Has lots of other features with its minikube CLI

  • But, requires separate install of VirtualBox and kubectl

  • May not work with older Windows versions (YMMV)

If you get an error about "This computer doesn't have VT-X/AMD-v enabled", you need to enable virtualization in your computer BIOS.
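
For reference, a typical first run looks roughly like this (a sketch; the flag was --vm-driver in older minikube releases):

    # create the VM and a single-node Kubernetes cluster using VirtualBox
    minikube start --driver=virtualbox
    # check that the cluster is up, then talk to it with kubectl
    minikube status
    kubectl get nodes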

k8smastery/install-minikube.md

125/1692

Image separating from the next chapter

126/1692

MicroK8s (Linux)

(automatically generated title slide)

127/1692

MicroK8s (Linux)

  • Easy install and management of local Kubernetes

  • Made by Canonical (Ubuntu). Installs using snap. Works nearly everywhere

  • Has lots of other features with its microk8s CLI

  • But, requires you install snap if not on Ubuntu

  • Runs on containerd rather than Docker, no biggie
  • Needs alias setup for microk8s kubectl
  • Install microk8s, change group permissions, then set an alias in .bashrc
    sudo snap install microk8s --classic
    sudo usermod -a -G microk8s <username>
    echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
    # log out and back in if using a non-root user

k8smastery/install-microk8s.md

131/1692

MicroK8s Additional Info

  • We'll need these later (these are done for us in Docker Desktop and minikube):
  • Create kubectl config file

    microk8s kubectl config view --raw > $HOME/.kube/config
  • Install CoreDNS in Kubernetes

    sudo microk8s enable dns
  • You can also install other plugins this way like microk8s enable dashboard or microk8s enable ingress

k8smastery/install-microk8s.md

132/1692

MicroK8s Troubleshooting

  • Run a check for any config problems
  • Test the MicroK8s config for any potential problems
    sudo microk8s inspect
  • If you also have Docker installed, you can ignore warnings about iptables and registries

  • See troubleshooting site if you have issues

k8smastery/install-microk8s.md

133/1692

Image separating from the next chapter

134/1692

Web-based options

(automatically generated title slide)

135/1692

Web-based options

Last choice: Use a browser-based solution

  • Low setup effort; but host is short-lived and has limited resources

  • Services are not always working right, and may not be up to date

  • Not all hands-on examples will work in the browser sandbox

k8smastery/install-pwk.md

139/1692

Image separating from the next chapter

140/1692

shpod: For a consistent Kubernetes experience ...

(automatically generated title slide)

141/1692

shpod: For a consistent Kubernetes experience ...

  • You can use shpod for examples

  • shpod provides a shell running in a pod on the cluster

  • It comes with many tools pre-installed (helm, stern, curl, jq...)

  • These tools are used in many exercises in these slides

  • shpod also gives you shell completion and a fancy prompt

  • Create it with kubectl apply -f https://k8smastery.com/shpod.yaml

  • Attach to shell with kubectl attach --namespace=shpod -ti shpod

  • After finishing the course, clean up with kubectl delete -f https://k8smastery.com/shpod.yaml

k8smastery/install-shpod.md

142/1692

Image separating from the next chapter

143/1692

First contact with kubectl

(automatically generated title slide)

144/1692

First contact with kubectl

  • kubectl is (almost) the only tool we'll need to talk to Kubernetes

  • It is a rich CLI tool around the Kubernetes API

    (Everything you can do with kubectl, you can do directly with the API)

  • On our machines, there is a ~/.kube/config file with:

    • the Kubernetes API address

    • the path to our TLS certificates used to authenticate

  • You can also use the --kubeconfig flag to pass a config file

  • Or directly --server, --user, etc.

  • kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...

  • I'll be using the official name "Cube Control" 😎

k8s/kubectlget.md

147/1692

kubectl is the new SSH

  • We often start managing servers with SSH

    (installing packages, troubleshooting ...)

  • At scale, it becomes tedious, repetitive, error-prone

  • Instead, we use config management, central logging, etc.

  • In many cases, we still need SSH:

    • as the underlying access method (e.g. Ansible)

    • to debug tricky scenarios

    • to inspect and poke at things

k8s/kubectlget.md

148/1692

The parallel with kubectl

  • We often start managing Kubernetes clusters with kubectl

    (deploying applications, troubleshooting ...)

  • At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone

  • Instead, we use automated pipelines, observability tooling, etc.

  • In many cases, we still need kubectl:

    • to debug tricky scenarios

    • to inspect and poke at things

  • The Kubernetes API is always the underlying access method

k8s/kubectlget.md

149/1692

kubectl get

  • Let's look at our Node resources with kubectl get!
  • Look at the composition of our cluster:

    kubectl get node
  • These commands are equivalent:

    kubectl get no
    kubectl get node
    kubectl get nodes

k8s/kubectlget.md

150/1692

Obtaining machine-readable output

  • kubectl get can output JSON, YAML, or be directly formatted
  • Give us more info about the nodes:

    kubectl get nodes -o wide
  • Let's have some YAML:

    kubectl get no -o yaml

    See that kind: List at the end? It's the type of our result!
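
The "directly formatted" part refers to output flags like custom-columns and jsonpath. For example (standard node fields; a sketch, not part of the original exercise):

    # one line per node: name and kubelet version
    kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
    # just the node names
    kubectl get nodes -o jsonpath='{.items[*].metadata.name}{"\n"}'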

k8s/kubectlget.md

151/1692

(Ab)using kubectl and jq

  • It's super easy to build custom reports
  • Show the capacity of all our nodes as a stream of JSON objects:
    kubectl get nodes -o json |
    jq ".items[] | {name:.metadata.name} + .status.capacity"

k8s/kubectlget.md

152/1692

Viewing details

  • We can use kubectl get -o yaml to see all available details

  • However, YAML output is often simultaneously too much and not enough

  • For instance, kubectl get node node1 -o yaml is:

    • too much information (e.g.: list of images available on this node)

    • not enough information (e.g.: doesn't show pods running on this node)

    • difficult to read for a human operator

  • For a comprehensive overview, we can use kubectl describe instead

k8s/kubectlget.md

153/1692

kubectl describe

  • kubectl describe needs a resource type and (optionally) a resource name

  • It is possible to provide a resource name prefix

    (all matching objects will be displayed)

  • kubectl describe will retrieve some extra information about the resource

  • Look at the information available for your node name with one of the following:
    kubectl describe node/<node>
    kubectl describe node <node>

(We should notice a bunch of control plane pods.)

k8s/kubectlget.md

154/1692

Exploring types and definitions

  • We can list all available resource types by running kubectl api-resources
    (In Kubernetes 1.10 and prior, this command used to be kubectl get)

  • We can view the definition for a resource type with:

    kubectl explain type
  • We can view the definition of a field in a resource, for instance:

    kubectl explain node.spec
  • Or get the list of all fields and sub-fields:

    kubectl explain node --recursive

k8s/kubectlget.md

155/1692

Introspection vs. documentation

  • We can access the same information by reading the API documentation

  • The API documentation is usually easier to read, but:

    • it won't show custom types (like Custom Resource Definitions)

    • we need to make sure that we look at the correct version

  • kubectl api-resources and kubectl explain perform introspection

    (they communicate with the API server and obtain the exact type definitions)

k8s/kubectlget.md

156/1692

Type names

  • The most common resource names have three forms:

    • singular (e.g. node, service, deployment)

    • plural (e.g. nodes, services, deployments)

    • short (e.g. no, svc, deploy)

  • Some resources do not have a short name

  • Endpoints only have a plural form

    (because even a single Endpoints resource is actually a list of endpoints)

k8s/kubectlget.md

157/1692

More get commands: Services

  • A service is a stable endpoint to connect to "something"

    (In the initial proposal, they were called "portals")

  • List the services on our cluster with one of these commands:
    kubectl get services
    kubectl get svc

There is already one service on our cluster: the Kubernetes API itself.

k8s/kubectlget.md

159/1692

More get commands: Listing running containers

  • Containers are manipulated through pods

  • A pod is a group of containers:

    • running together (on the same node)

    • sharing resources (RAM, CPU; but also network, volumes)

  • List pods on our cluster:
    kubectl get pods

Where are the pods that we saw just a moment earlier?!?

k8s/kubectlget.md

161/1692

Namespaces

  • Namespaces allow us to segregate resources
  • List the namespaces on our cluster with one of these commands:
    kubectl get namespaces
    kubectl get namespace
    kubectl get ns

You know what ... This kube-system thing looks suspicious.

In fact, I'm pretty sure it showed up earlier, when we did:

kubectl describe node <node-name>

k8s/kubectlget.md

163/1692

Accessing namespaces

  • By default, kubectl uses the default namespace

  • We can see resources in all namespaces with --all-namespaces

  • List the pods in all namespaces:

    kubectl get pods --all-namespaces
  • Since Kubernetes 1.14, we can also use -A as a shorter version:

    kubectl get pods -A

Here are our system pods!

k8s/kubectlget.md

164/1692

What are all these control plane pods?

  • etcd is our etcd server

  • kube-apiserver is the API server

  • kube-controller-manager and kube-scheduler are other control plane components

  • coredns provides DNS-based service discovery (replacing kube-dns as of 1.11)

  • kube-proxy is the (per-node) component managing port mappings and such

  • <net name> is the optional (per-node) component managing the network overlay

  • the READY column indicates the number of containers in each pod

  • Note: this only shows containers, you won't see host svcs (e.g. microk8s)

  • Also Note: you may see different namespaces depending on setup

k8s/kubectlget.md

165/1692

Scoping another namespace

  • We can also look at a different namespace (other than default)
  • List only the pods in the kube-system namespace:
    kubectl get pods --namespace=kube-system
    kubectl get pods -n kube-system

k8s/kubectlget.md

166/1692

Namespaces and other kubectl commands

  • We can use -n/--namespace with almost every kubectl command

  • Example:

    • kubectl create --namespace=X to create something in namespace X
  • We can use -A/--all-namespaces with most commands that manipulate multiple objects

  • Examples:

    • kubectl delete can delete resources across multiple namespaces

    • kubectl label can add/remove/update labels across multiple namespaces

k8s/kubectlget.md

167/1692

What about kube-public?

  • List the pods in the kube-public namespace:
    kubectl -n kube-public get pods

Nothing!

kube-public is created by our installer & used for security bootstrapping.

k8s/kubectlget.md

168/1692

Exploring kube-public

  • The only interesting object in kube-public is a ConfigMap named cluster-info
  • List ConfigMap objects:

    kubectl -n kube-public get configmaps
  • Inspect cluster-info:

    kubectl -n kube-public get configmap cluster-info -o yaml

Note the selfLink URI: /api/v1/namespaces/kube-public/configmaps/cluster-info

We can use that (later in kubectl context lectures)!

k8s/kubectlget.md

169/1692

What about kube-node-lease?

  • Starting with Kubernetes 1.14, there is a kube-node-lease namespace

    (or in Kubernetes 1.13 if the NodeLease feature gate is enabled)

  • That namespace contains one Lease object per node

  • Node leases are a new way to implement node heartbeats

    (i.e. node regularly pinging the control plane to say "I'm alive!")

  • For more details, see KEP-0009 or the node controller documentation

k8s/kubectlget.md

170/1692

Services

  • A service is a stable endpoint to connect to "something"

    (In the initial proposal, they were called "portals")

  • List the services on our cluster with one of these commands:
    kubectl get services
    kubectl get svc

There is already one service on our cluster: the Kubernetes API itself.

k8s/kubectlget.md

172/1692

ClusterIP services

  • A ClusterIP service is internal, available from the cluster only

  • This is useful for introspection from within containers

  • Try to connect to the API:

    curl -k https://10.96.0.1
    • -k is used to skip certificate verification

    • Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc

The command above should either time out, or show an authentication error. Why?
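
To avoid hard-coding the IP, we can ask Kubernetes for it first (a small sketch using jsonpath):

    # look up the ClusterIP of the kubernetes service, then curl it
    API_IP=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')
    curl -k https://$API_IP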

k8s/kubectlget.md

173/1692

Time out

  • Connections to ClusterIP services only work from within the cluster

  • If we are outside the cluster, the curl command will probably time out

    (Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)

  • This is the case with most "real" Kubernetes clusters

  • To try the connection from within the cluster, we can use shpod

k8s/kubectlget.md

174/1692

Authentication error

This is what we should see when connecting from within the cluster:

$ curl -k https://10.96.0.1
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

k8s/kubectlget.md

175/1692

Explanations

  • We can see kind, apiVersion, metadata

  • These are typical of a Kubernetes API reply

  • Because we are talking to the Kubernetes API

  • The Kubernetes API tells us "Forbidden"

    (because it requires authentication)

  • The Kubernetes API is reachable from within the cluster

    (many apps integrating with Kubernetes will use this)

k8s/kubectlget.md

176/1692

DNS integration

  • Each service also gets a DNS record

  • The Kubernetes DNS resolver is available from within pods

    (and sometimes, from within nodes, depending on configuration)

  • Code running in pods can connect to services using their name

    (e.g. https://kubernetes/...)
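
For instance, from a shell inside a pod (such as shpod, which has curl), we could check both forms of the name (a quick sketch):

    # the fully qualified service name always resolves inside the cluster
    curl -k https://kubernetes.default.svc.cluster.local
    # the short name works too, thanks to the pod's DNS search domains
    curl -k https://kubernetes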

k8s/kubectlget.md

177/1692

Image separating from the next chapter

178/1692

Running our first containers on Kubernetes

(automatically generated title slide)

179/1692

Running our first containers on Kubernetes

  • First things first: we cannot run a container

  • We are going to run a pod, and in that pod there will be a single container

  • In that container in the pod, we are going to run a simple ping command

  • Then we are going to start additional copies of the pod

k8s/kubectlrun.md

182/1692

Starting a simple pod with kubectl run

  • We need to specify at least a name and the image we want to use
  • Let's ping the address of localhost, the loopback interface:
    kubectl run pingpong --image alpine ping 127.0.0.1

(Starting with Kubernetes 1.12, we get a message telling us that kubectl run is deprecated. Let's ignore it for now.)

k8s/kubectlrun.md

184/1692

Behind the scenes of kubectl run

  • Let's look at the resources that were created by kubectl run
  • List most resource types:
    kubectl get all

We should see the following things:

  • deployment.apps/pingpong (the deployment that we just created)
  • replicaset.apps/pingpong-xxxxxxxxxx (a replica set created by the deployment)
  • pod/pingpong-xxxxxxxxxx-yyyyy (a pod created by the replica set)

Note: as of 1.10.1, resource types are displayed in more detail.

k8s/kubectlrun.md

186/1692

What are these different things?

  • A deployment is a high-level construct

    • allows scaling, rolling updates, rollbacks

    • multiple deployments can be used together to implement a canary deployment

    • delegates pods management to replica sets

  • A replica set is a low-level construct

    • makes sure that a given number of identical pods are running

    • allows scaling

    • rarely used directly

  • Note: A replication controller is the deprecated predecessor of a replica set

k8s/kubectlrun.md

187/1692

Our pingpong deployment

  • kubectl run created a deployment, deployment.apps/pingpong

    NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    deployment.apps/pingpong  1        1        1           1          10m

  • That deployment created a replica set, replicaset.apps/pingpong-xxxxxxxxxx

    NAME                                 DESIRED  CURRENT  READY  AGE
    replicaset.apps/pingpong-7c8bbcd9bc  1        1        1      10m

  • That replica set created a pod, pod/pingpong-xxxxxxxxxx-yyyyy

    NAME                           READY  STATUS   RESTARTS  AGE
    pod/pingpong-7c8bbcd9bc-6c9qz  1/1    Running  0         10m
  • We'll see later how these folks play together for:

    • scaling, high availability, rolling updates

k8s/kubectlrun.md

188/1692

Viewing container output

  • Let's use the kubectl logs command

  • We will pass either a pod name, or a type/name

    (E.g. if we specify a deployment or replica set, it will get the first pod in it)

  • Unless specified otherwise, it will only show logs of the first container in the pod

    (Good thing there's only one in ours!)

  • View the result of our ping command:
    kubectl logs deploy/pingpong

k8s/kubectlrun.md

189/1692

Streaming logs in real time

  • Just like docker logs, kubectl logs supports convenient options:

    • -f/--follow to stream logs in real time (à la tail -f)

    • --tail to indicate how many lines you want to see (from the end)

    • --since to get logs only after a given timestamp

  • View the latest logs of our ping command:

    kubectl logs deploy/pingpong --tail 1 --follow
  • Leave that command running, so that we can keep an eye on these logs

k8s/kubectlrun.md

190/1692

Scaling our application

  • We can create additional copies of our container (I mean, our pod) with kubectl scale
  • Scale our pingpong deployment:

    kubectl scale deploy/pingpong --replicas 3
  • Note that this command does exactly the same thing:

    kubectl scale deployment pingpong --replicas 3

Note: what if we tried to scale replicaset.apps/pingpong-xxxxxxxxxx?

We could! But the deployment would notice it right away, and scale back to the initial level.

k8s/kubectlrun.md

191/1692

Log streaming

  • Let's look again at the output of kubectl logs

    (the one we started before scaling up)

  • kubectl logs shows us one line per second

  • We could expect 3 lines per second

    (since we should now have 3 pods running ping)

  • Let's try to figure out what's happening!

k8s/kubectlrun.md

192/1692

Streaming logs of multiple pods

  • What happens if we restart kubectl logs?
  • Interrupt kubectl logs (with Ctrl-C)
  • Restart it:
    kubectl logs deploy/pingpong --tail 1 --follow

kubectl logs will warn us that multiple pods were found, and that it's showing us only one of them.

Let's leave kubectl logs running while we keep exploring.

k8s/kubectlrun.md

193/1692

Resilience

  • The deployment pingpong watches its replica set

  • The replica set ensures that the right number of pods are running

  • What happens if pods disappear?

  • In a separate window, watch the list of pods:
    watch kubectl get pods
  • Destroy the pod currently shown by kubectl logs:
    kubectl delete pod pingpong-xxxxxxxxxx-yyyyy

k8s/kubectlrun.md

194/1692

What happened?

  • kubectl delete pod terminates the pod gracefully

    (sending it the TERM signal and waiting for it to shutdown)

  • As soon as the pod is in "Terminating" state, the Replica Set replaces it

  • But we can still see the output of the "Terminating" pod in kubectl logs

  • Until 30 seconds later, when the grace period expires

  • The pod is then killed, and kubectl logs exits

k8s/kubectlrun.md

195/1692

What if we wanted something different?

  • What if we wanted to start a "one-shot" container that doesn't get restarted?

  • We could use kubectl run --restart=OnFailure or kubectl run --restart=Never

  • These commands would create jobs or pods instead of deployments

  • Under the hood, kubectl run invokes "generators" to create resource descriptions

  • We could also write these resource descriptions ourselves (typically in YAML),
    and create them on the cluster with kubectl apply -f (discussed later)

  • With kubectl run --schedule=..., we can also create cronjobs

k8s/kubectlrun.md

196/1692

Scheduling periodic background work

  • A Cron Job is a job that will be executed at specific intervals

    (the name comes from the traditional cronjobs executed by the UNIX crond)

  • It requires a schedule, represented as five space-separated fields:

    • minute [0,59]
    • hour [0,23]
    • day of the month [1,31]
    • month of the year [1,12]
    • day of the week ([0,6] with 0=Sunday)
  • * means "all valid values"; /N means "every N"

  • Example: */3 * * * * means "every three minutes"
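
A few more schedules, for calibration (standard cron syntax, shown here as comments):

    # 30 3 * * *         every day at 03:30
    # 0 9 * * 1          every Monday at 09:00
    # */15 9-17 * * 1-5  every 15 minutes during working hours (09:00-17:59), Monday to Friday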

k8s/kubectlrun.md

197/1692

Creating a Cron Job

  • Let's create a simple job to be executed every three minutes

  • Cron Jobs need to terminate, otherwise they'd run forever

  • Create the Cron Job:

    kubectl run every3mins --schedule="*/3 * * * *" --restart=OnFailure \
    --image=alpine sleep 10
  • Check the resource that was created:

    kubectl get cronjobs

k8s/kubectlrun.md

198/1692

Cron Jobs in action

  • At the specified schedule, the Cron Job will create a Job

  • The Job will create a Pod

  • The Job will make sure that the Pod completes

    (re-creating another one if it fails, for instance if its node fails)

  • Check the Jobs that are created:
    kubectl get jobs

(It will take a few minutes before the first job is scheduled.)

k8s/kubectlrun.md

199/1692

What about that deprecation warning?

  • As we can see from the previous slide, kubectl run can do many things

  • The exact type of resource created is not obvious

  • To make things more explicit, it is better to use kubectl create:

    • kubectl create deployment to create a deployment

    • kubectl create job to create a job

    • kubectl create cronjob to run a job periodically
      (since Kubernetes 1.14)

  • Eventually, kubectl run will be used only to start one-shot pods

    (see https://github.com/kubernetes/kubernetes/pull/68132)
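
As a rough sketch, the explicit equivalents look like this (the names web and oneping are made up for the example; every3mins matches the cron job we created earlier):

    # an explicit deployment (a long-running image, since kubectl create deployment
    # could not pass a command to the container at the time of these slides)
    kubectl create deployment web --image=nginx
    # a one-shot job that pings localhost once
    kubectl create job oneping --image=alpine -- ping -c 1 127.0.0.1
    # a cron job running every three minutes (Kubernetes 1.14+)
    kubectl create cronjob every3mins --schedule="*/3 * * * *" --image=alpine -- sleep 10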

k8s/kubectlrun.md

200/1692

Various ways of creating resources

  • kubectl run

    • easy way to get started
    • versatile
  • kubectl create <resource>

    • explicit, but lacks some features
    • can't create a CronJob before Kubernetes 1.14
    • can't pass command-line arguments to deployments
  • kubectl create -f foo.yaml or kubectl apply -f foo.yaml

    • all features are available
    • requires writing YAML
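
For completeness, here is a minimal sketch of the YAML route, applied straight from a shell heredoc (the name pingpong-yaml is illustrative):

    kubectl apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pingpong-yaml
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: pingpong-yaml
      template:
        metadata:
          labels:
            app: pingpong-yaml
        spec:
          containers:
          - name: pingpong
            image: alpine
            command: ["ping", "127.0.0.1"]
    EOF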

k8s/kubectlrun.md

201/1692

Viewing logs of multiple pods

  • When we specify a deployment name, only one single pod's logs are shown

  • We can view the logs of multiple pods by specifying a selector

  • A selector is a logic expression using labels

  • Conveniently, when you kubectl run somename, the associated objects have a run=somename label

  • View the last line of log from all pods with the run=pingpong label:
    kubectl logs -l run=pingpong --tail 1

k8s/kubectlrun.md

202/1692

Streaming logs of multiple pods

  • Can we stream the logs of all our pingpong pods?
  • Combine -l and -f flags:
    kubectl logs -l run=pingpong --tail 1 -f

Note: combining -l and -f is only possible since Kubernetes 1.14!

Let's try to understand why ...

k8s/kubectlrun.md

203/1692

Streaming logs of many pods

  • Let's see what happens if we try to stream the logs for more than 5 pods
  • Scale up our deployment:

    kubectl scale deployment pingpong --replicas=8
  • Stream the logs:

    kubectl logs -l run=pingpong --tail 1 -f

We see a message like the following one:

error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit

k8s/kubectlrun.md

204/1692

Why can't we stream the logs of many pods?

  • kubectl opens one connection to the API server per pod

  • For each pod, the API server opens one extra connection to the corresponding kubelet

  • If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server

  • This could easily put a lot of stress on the API server

  • Prior to Kubernetes 1.14, it was decided not to allow multiple connections

  • From Kubernetes 1.14, it is allowed, but limited to 5 connections

    (this can be changed with --max-log-requests)

  • For more details about the rationale, see PR #67573

k8s/kubectlrun.md

205/1692

Shortcomings of kubectl logs

  • We don't see which pod sent which log line

  • If pods are restarted / replaced, the log stream stops

  • If new pods are added, we don't see their logs

  • To stream the logs of multiple pods, we need to write a selector

  • There are external tools to address these shortcomings

    (e.g.: Stern)

k8s/kubectlrun.md

206/1692

kubectl logs -l ... --tail N

  • If we run this with Kubernetes 1.12, the last command shows multiple lines

  • This is a regression when --tail is used together with -l/--selector

  • It always shows the last 10 lines of output for each container

    (instead of the number of lines specified on the command line)

  • The problem was fixed in Kubernetes 1.13

See #70554 for details.

k8s/kubectlrun.md

207/1692

Party tricks involving IP addresses

  • It is possible to specify an IP address with less than 4 bytes

    (example: 127.1)

  • Zeroes are then inserted in the middle

  • As a result, 127.1 expands to 127.0.0.1

  • So we can ping 127.1 to ping localhost!
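
A quick check (plain ping, nothing Kubernetes-specific):

    # both commands ping the loopback interface
    ping -c 1 127.0.0.1
    ping -c 1 127.1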

(See this blog post for more details.)

k8s/kubectlrun.md

208/1692

More party tricks with IP addresses

  • We can also ping 1.1

  • 1.1 will expand to 1.0.0.1

  • This is one of the addresses of Cloudflare's public DNS resolver

  • This is a quick way to check connectivity

    (if we can reach 1.1, we probably have internet access)

k8s/kubectlrun.md

209/1692

Image separating from the next chapter

210/1692

Accessing logs from the CLI

(automatically generated title slide)

211/1692

Accessing logs from the CLI

  • The kubectl logs command has limitations:

    • it cannot stream logs from multiple pods at a time

    • when showing logs from multiple pods, it mixes them all together

  • We are going to see how to do it better

k8s/logs-cli.md

212/1692

Doing it manually

  • We could (if we were so inclined) write a program or script that would:

    • take a selector as an argument

    • enumerate all pods matching that selector (with kubectl get -l ...)

    • fork one kubectl logs --follow ... command per container

    • annotate the logs (the output of each kubectl logs ... process) with their origin

    • preserve ordering by using kubectl logs --timestamps ... and merge the output

  • We could do it, but thankfully, others did it for us already!
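  • For reference, a rough, untested sketch of such a script could look like this (simplified: it prefixes each line with the pod name and doesn't clean up the background processes):

    #!/bin/sh
    # Tail the logs of all pods matching a selector, prefixing each line
    # with the pod name so we can tell the streams apart.
    SELECTOR=$1
    for POD in $(kubectl get pods -l "$SELECTOR" -o name); do
      kubectl logs --follow --timestamps "$POD" | sed "s|^|$POD |" &
    done
    wait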

k8s/logs-cli.md

214/1692

Stern

Stern is an open source project originally by Wercker.

From the README:

Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging.

The query is a regular expression so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from tail and if a new pod is added it automatically gets tailed.

Exactly what we need!

k8s/logs-cli.md

215/1692

Checking if Stern is installed

  • Run stern (without arguments) to check if it's installed:

    $ stern
    Tail multiple pods and containers from Kubernetes
    Usage:
    stern pod-query [flags]
  • If it's missing, let's see how to install it

k8s/logs-cli.md

216/1692

Installing Stern

  • Stern is written in Go

  • Go programs are usually very easy to install

    (no dependencies or extra libraries to install, etc.)

  • Binary releases are available on GitHub

  • Stern is also available through most package managers

    (e.g. on macOS, we can brew install stern or sudo port install stern)

k8s/logs-cli.md

217/1692

Using Stern

  • There are two ways to specify the pods whose logs we want to see:

    • -l followed by a selector expression (like with many kubectl commands)

    • with a "pod query," i.e. a regex used to match pod names

  • These two ways can be combined if necessary

  • View the logs for all the pingpong containers:
    stern pingpong

k8s/logs-cli.md

218/1692

Stern convenient options

  • The --tail N flag shows the last N lines for each container

    (Instead of showing the logs since the creation of the container)

  • The -t / --timestamps flag shows timestamps

  • The --all-namespaces flag is self-explanatory

  • View what's up with the weave system containers:
    stern --tail 1 --timestamps --all-namespaces weave

k8s/logs-cli.md

219/1692

Using Stern with a selector

  • When specifying a selector, we can omit the value for a label

  • This will match all objects having that label (regardless of the value)

  • Everything created with kubectl run has a label run

  • Everything created with kubectl create deployment has a label app

  • We can use that property to view the logs of all the pods created with kubectl create deployment

  • View the logs for all the things started with kubectl create deployment:
    stern -l app
220/1692

k8s/logs-cli.md

Cleanup

Let's cleanup before we start the next lecture!

  • remove our deployment and cronjob:
    kubectl delete deployment/pingpong cronjob/sleep

k8smastery/cleanup-pingpong-sleep.md

221/1692

Image separating from the next chapter

222/1692

Assignment 1: first steps

(automatically generated title slide)

223/1692

Assignment 1: first steps

Answer these questions with the kubectl command you'd use to get the answer:

Cluster inventory

1.1. How many nodes does your cluster have?

1.2. What kernel version and what container engine is each node running?

(answers on next slide)

assignments/01kubectlrun.md

224/1692

Answers

1.1. We can get a list of nodes with kubectl get nodes.

1.2. kubectl get nodes -o wide will list extra information for each node.

This will include kernel version and container engine.

assignments/01kubectlrun.md

225/1692

Assignment 1: first steps

Control plane examination

2.1. List only the pods in the kube-system namespace.

2.2. Explain the role of some of these pods.

2.3. If there are few or no pods in kube-system, why could that be?

(answers on next slide)

assignments/01kubectlrun.md

226/1692

Answers

2.1. kubectl get pods --namespace=kube-system

2.2. This depends on how our cluster was set up.

On some clusters, we might see pods named etcd-XXX, kube-apiserver-XXX: these correspond to control plane components.

It's also common to see kubedns-XXX or coredns-XXX: these implement the DNS service that lets us resolve service names into their ClusterIP address.

2.3. On some clusters, the control plane is located outside the cluster itself.

In that case, the control plane components won't show up in kube-system, but you can find them on the host with ps aux | grep kube.

assignments/01kubectlrun.md

227/1692

Assignment 1: first steps

Running containers

3.1. Create a deployment using kubectl create that runs the image bretfisher/clock and name it ticktock.

3.2. Start 2 more containers of that image in the ticktock deployment.

3.3. Use a selector to output only the last line of logs of each container.

(answers on next slide)

assignments/01kubectlrun.md

228/1692

Answers

3.1. kubectl create deployment ticktock --image=bretfisher/clock

By default, it will have one replica, translating to one container.

3.2. kubectl scale deployment ticktock --replicas=3

This will scale the deployment to three replicas (two more containers).

3.3. kubectl logs --selector=app=ticktock --tail=1

All the resources created with kubectl create deployment xxx will have the label app=xxx. If you need to check which labels the pods carry, look at the resource that created them; in this case that's the ReplicaSet, so kubectl describe replicaset ticktock-xxxxx would help.

Therefore, we use the selector app=ticktock here to match all the pods belonging to this deployment.

assignments/01kubectlrun.md

229/1692

19,000 words

They say, "a picture is worth one thousand words."

The following 19 slides show what really happens when we run:

kubectl run web --image=nginx --replicas=3

k8s/deploymentslideshow.md

230/1692

Image separating from the next chapter

250/1692

Exposing containers

(automatically generated title slide)

251/1692

Exposing containers

  • We can connect to our pods using their IP address

  • Then we need to figure out a lot of things:

    • how do we look up the IP address of the pod(s)?

    • how do we connect from outside the cluster?

    • how do we load balance traffic?

    • what if a pod fails?

  • Kubernetes has a resource type named Service

  • Services address all these questions!

k8smastery/kubectlexpose.md

252/1692

Services in a nutshell

  • Services give us a stable endpoint to connect to a pod or a group of pods

  • An easy way to create a service is to use kubectl expose

  • If we have a deployment named my-little-deploy, we can run:

    kubectl expose deployment my-little-deploy --port=80

    ... and this will create a service with the same name (my-little-deploy)

  • Services are automatically added to an internal DNS zone

    (in the example above, our code can now connect to http://my-little-deploy/)

k8smastery/kubectlexpose.md

253/1692

Advantages of services

  • We don't need to look up the IP address of the pod(s)

    (we resolve the IP address of the service using DNS)

  • There are multiple service types; some of them allow external traffic

    (e.g. LoadBalancer and NodePort)

  • Services provide load balancing

    (for both internal and external traffic)

  • Service addresses are independent from pods' addresses

    (when a pod fails, the service seamlessly sends traffic to its replacement)

k8smastery/kubectlexpose.md

254/1692

Many kinds and flavors of service

  • There are different types of services:

    ClusterIP, NodePort, LoadBalancer, ExternalName

  • There are also headless services

  • Services can also have optional external IPs

  • There is also another resource type called Ingress

    (specifically for HTTP services)

  • Wow, that's a lot! Let's start with the basics ...

k8smastery/kubectlexpose.md

255/1692

ClusterIP

  • It's the default service type

  • A virtual IP address is allocated for the service

    (in an internal, private range; e.g. 10.96.0.0/12)

  • This IP address is reachable only from within the cluster (nodes and pods)

  • Our code can connect to the service using the original port number

  • Perfect for internal communication, within the cluster

k8smastery/kubectlexpose.md

256/1692

LoadBalancer

  • An external load balancer is allocated for the service

    (typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)

  • This is available only when the underlying infrastructure provides some kind of "load balancer as a service"

  • Each service of that type will typically cost a little bit of money

    (e.g. a few cents per hour on AWS or GCE)

  • Ideally, traffic would flow directly from the load balancer to the pods

  • In practice, it will often flow through a NodePort first

k8smastery/kubectlexpose.md

257/1692

NodePort

  • A port number is allocated for the service

    (by default, in the 30000-32767 range)

  • That port is made available on all our nodes and anybody can connect to it

    (we can connect to any node on that port to reach the service)

  • Our code needs to be changed to connect to that new port number

  • Under the hood: kube-proxy sets up a bunch of iptables rules on our nodes

  • Sometimes, it's the only available option for external traffic

    (e.g. most clusters deployed with kubeadm or on-premises)
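  • As a sketch (reusing the hypothetical my-little-deploy deployment from earlier), we could create a NodePort service like this:

    kubectl expose deployment my-little-deploy --port=80 --type=NodePort

    (The allocated node port then shows up in the PORT(S) column of kubectl get service, e.g. 80:3xxxx/TCP.)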

k8smastery/kubectlexpose.md

258/1692

Running containers with open ports

  • Since ping doesn't have anything to connect to, we'll have to run something else

  • We could use the nginx official image, but ...

    ... we wouldn't be able to tell the backends from each other!

  • We are going to use bretfisher/httpenv, a tiny HTTP server written in Go

  • bretfisher/httpenv listens on port 8888

  • It serves its environment variables in JSON format

  • The environment variables will include HOSTNAME, which will be the pod name

    (and therefore, will be different on each backend)

k8smastery/kubectlexpose.md

259/1692

Creating a deployment for our HTTP server

  • We could do kubectl run httpenv --image=bretfisher/httpenv ...

  • But since kubectl run is changing, let's see how to use kubectl create instead

  • In another window, watch the pods (to see when they are created):
    kubectl get pods -w
  • Create a deployment for this very lightweight HTTP server:

    kubectl create deployment httpenv --image=bretfisher/httpenv
  • Scale it to 10 replicas:

    kubectl scale deployment httpenv --replicas=10

k8smastery/kubectlexpose.md

260/1692

Exposing our deployment

  • We'll create a default ClusterIP service
  • Expose the HTTP port of our server:

    kubectl expose deployment httpenv --port 8888
  • Look up which IP address was allocated:

    kubectl get service

k8smastery/kubectlexpose.md

261/1692

Services are layer 4 constructs

  • You can assign IP addresses to services, but they are still layer 4

    (i.e. a service is not an IP address; it's an IP address + protocol + port)

  • This is caused by the current implementation of kube-proxy

    (it relies on mechanisms that don't support layer 3)

  • As a result: you have to indicate the port number for your service

    (with some exceptions, like ExternalName or headless services, covered later)

k8smastery/kubectlexpose.md

262/1692

Testing our service

  • We will now send a few HTTP requests to our pods
  • Run shpod (if you're not on a Linux host) so we can access the internal ClusterIP

    kubectl attach --namespace=shpod -ti shpod
  • Let's obtain the IP address that was allocated for our service, programmatically:

    IP=$(kubectl get svc httpenv -o go-template --template '{{ .spec.clusterIP }}')
  • Send a few requests:

    curl http://$IP:8888/
  • Too much output? Filter it with jq:

    curl -s http://$IP:8888/ | jq .HOSTNAME

k8smastery/kubectlexpose.md

263/1692

ExternalName

  • Services of type ExternalName are quite different

  • No load balancer (internal or external) is created

  • Only a DNS entry gets added to the DNS managed by Kubernetes

  • That DNS entry will just be a CNAME to a provided record

Example:

kubectl create service externalname k8s --external-name kubernetes.io

Creates a CNAME k8s pointing to kubernetes.io

k8smastery/kubectlexpose.md

264/1692

External IPs

  • We can add an External IP to a service, e.g.:

    kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
  • 1.2.3.4 should be the address of one of our nodes

    (it could also be a virtual address, service address, or VIP, shared by multiple nodes)

  • Connections to 1.2.3.4:80 will be sent to our service

  • External IPs will also show up on services of type LoadBalancer

    (they will be added automatically by the process provisioning the load balancer)

k8smastery/kubectlexpose.md

265/1692

Headless services

  • Sometimes, we want to access our scaled services directly:

    • if we want to save a tiny little bit of latency (typically less than 1ms)

    • if we need to connect over arbitrary ports (instead of a few fixed ones)

    • if we need to communicate over a protocol other than UDP or TCP

    • if we want to decide how to balance the requests client-side

    • ...

  • In that case, we can use a "headless service"

k8smastery/kubectlexpose.md

266/1692

Creating a headless service

  • A headless service is obtained by setting the clusterIP field to None

    (Either with --cluster-ip=None, or by providing a custom YAML)

  • As a result, the service doesn't have a virtual IP address

  • Since there is no virtual IP address, there is no load balancer either

  • CoreDNS will return the pods' IP addresses as multiple A records

  • This gives us an easy way to discover all the replicas for a deployment
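  • As a quick sketch (the name httpenv-headless is made up for illustration), we could create a headless service for our httpenv deployment like this:

    kubectl expose deployment httpenv --port 8888 --cluster-ip=None --name=httpenv-headless

    (Running kubectl get service httpenv-headless would then show None in the CLUSTER-IP column.)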

k8smastery/kubectlexpose.md

267/1692

Services and endpoints

  • A service has a number of "endpoints"

  • Each endpoint is a host + port where the service is available

  • The endpoints are maintained and updated automatically by Kubernetes

  • Check the endpoints that Kubernetes has associated with our httpenv service:
    kubectl describe service httpenv

In the output, there will be a line starting with Endpoints:.

That line will list a bunch of addresses in host:port format.

k8smastery/kubectlexpose.md

268/1692

Viewing endpoint details

  • When we have many endpoints, our display commands truncate the list

    kubectl get endpoints
  • If we want to see the full list, we can use a different output:

    kubectl get endpoints httpenv -o yaml
  • These IP addresses should match the addresses of the corresponding pods:

    kubectl get pods -l app=httpenv -o wide

k8smastery/kubectlexpose.md

269/1692

endpoints not endpoint

  • endpoints is the only resource that cannot be singular
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
  • This is because the type itself is plural (unlike every other resource)

  • There is no endpoint object: type Endpoints struct

  • The type doesn't represent a single endpoint, but a list of endpoints

k8smastery/kubectlexpose.md

270/1692

The DNS zone

  • In the kube-system namespace, there should be a service named kube-dns

  • This is the internal DNS server that can resolve service names

  • The default domain name for the service we created is default.svc.cluster.local

  • Get the IP address of the internal DNS server:

    IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP})
  • Resolve the cluster IP for the httpenv service:

    host httpenv.default.svc.cluster.local $IP

k8smastery/kubectlexpose.md

271/1692

Ingress

  • Ingresses are another type (kind) of resource

  • They are specifically for HTTP services

    (not TCP or UDP)

  • They can also handle TLS certificates, URL rewriting ...

  • They require an Ingress Controller to function

k8smastery/kubectlexpose.md

272/1692

Cleanup

Let's cleanup before we start the next lecture!

  • remove our httpenv resources:
    kubectl delete deployment/httpenv service/httpenv

k8smastery/cleanup-httpenv.md

273/1692

Image separating from the next chapter

274/1692

Kubernetes network model

(automatically generated title slide)

275/1692

Kubernetes network model

  • TL;DR:

    Our cluster (nodes and pods) is one big flat IP network.

276/1692

Kubernetes network model

  • TL;DR:

    Our cluster (nodes and pods) is one big flat IP network.

  • In detail:

    • all nodes must be able to reach each other, without NAT

    • all pods must be able to reach each other, without NAT

    • pods and nodes must be able to reach each other, without NAT

    • each pod is aware of its IP address (no NAT)

    • pod IP addresses are assigned by the network implementation

  • Kubernetes doesn't mandate any particular implementation

k8s/kubenet.md

277/1692

Kubernetes network model: the good

  • Everything can reach everything

  • No address translation

  • No port translation

  • No new protocol

  • The network implementation can decide how to allocate addresses

  • IP addresses don't have to be "portable" from a node to another

    (For example, we can use a subnet per node and a simple routed topology)

  • The specification is simple enough to allow many different implementations

k8s/kubenet.md

278/1692

Kubernetes network model: the less good

  • Everything can reach everything

    • if you want security, you need to add network policies

    • the network implementation you use needs to support them

  • There are literally dozens of implementations out there

    (15 are listed in the Kubernetes documentation)

  • Pods have layer 3 (IP) connectivity, but services are layer 4 (TCP or UDP)

    (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)

  • kube-proxy is on the data path when connecting to a pod or container,
    and it's not particularly fast (relies on userland proxying or iptables)

k8s/kubenet.md

279/1692

Kubernetes network model: in practice

  • The nodes we are using have been set up to use kubenet, Calico, or something else

  • Don't worry about the warning about kube-proxy performance

  • Unless you:

    • routinely saturate 10G network interfaces
    • count packet rates in millions per second
    • run high-traffic VOIP or gaming platforms
    • do weird things that involve millions of simultaneous connections
      (in which case you're already familiar with kernel tuning)
  • If necessary, there are alternatives to kube-proxy; e.g. kube-router

k8s/kubenet.md

280/1692

The Container Network Interface (CNI)

  • Most Kubernetes clusters use CNI "plugins" to implement networking

  • When a pod is created, Kubernetes delegates the network setup to these plugins

    (it can be a single plugin, or a combination of plugins, each doing one task)

  • Typically, CNI plugins will:

    • allocate an IP address (by calling an IPAM plugin)

    • add a network interface into the pod's network namespace

    • configure the interface as well as required routes, etc.

k8s/kubenet.md

281/1692

Multiple moving parts

  • The "pod-to-pod network" or "pod network":

    • provides communication between pods and nodes

    • is generally implemented with CNI plugins

  • The "pod-to-service network":

    • provides internal communication and load balancing

    • is generally implemented with kube-proxy (or maybe kube-router)

  • Network policies:

    • provide firewalling and isolation

    • can be bundled with the "pod network" or provided by another component

k8s/kubenet.md

282/1692

Even more moving parts

  • Inbound traffic can be handled by multiple components:

    • something like kube-proxy or kube-router (for NodePort services)

    • load balancers (ideally, connected to the pod network)

  • It is possible to use multiple pod networks in parallel

    (with "meta-plugins" like CNI-Genie or Multus)

  • Some solutions can fill multiple roles

    (e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)

k8s/kubenet.md

283/1692

Image separating from the next chapter

284/1692

Assignment 2: more about deployments

(automatically generated title slide)

285/1692

Assignment 2: more about deployments

  1. Create a deployment called littletomcat using the tomcat image.

  2. What command will help you get the IP address of that Tomcat server?

  3. What steps would you take to ping it from another container?

    (Use the shpod environment if necessary.)

  4. What command would delete the running pod inside that deployment?

  5. What happens if we delete the pod that holds Tomcat, while the ping is running?

(answers on next two slides)

assignments/02kubectlexpose.md

286/1692

Answers

  1. kubectl create deployment littletomcat --image=tomcat

  2. List all pods with label app=littletomcat, with extra details including IP address: kubectl get pods --selector=app=littletomcat -o wide. You could also describe the pod: kubectl describe pod littletomcat-XXX-XXX

  3. One way to start a shell inside the cluster: kubectl apply -f https://k8smastery.com/shpod.yaml, then kubectl attach --namespace=shpod -ti shpod

  • An easier way is to use a special domain we created: curl https://shpod.sh | sh

  • Then the IP address of the pod should ping correctly. You could also start a deployment or pod temporarily (like nginx), then exec in, install ping, and ping the IP.

assignments/02kubectlexpose.md

287/1692

Answers

  4. We can delete the pod with kubectl delete pods --selector=app=littletomcat, or copy/paste the exact pod name and delete it.

  5. If we delete the pod, the following things will happen:

  • the pod will be gracefully terminated,

  • the ping command that we left running will fail,

  • the replica set will notice that it doesn't have the right count of pods and create a replacement pod,

  • that new pod will have a different IP address (so the ping command won't recover).

assignments/02kubectlexpose.md

288/1692

Assignment 2: first service

  1. What command can give our Tomcat server a stable DNS name and IP address?

    (An address that doesn't change when something bad happens to the container.)

  2. What commands would you run to curl Tomcat with that DNS address?

    (Use the shpod environment if necessary.)

  3. If we delete the pod that holds Tomcat, does the IP address still work?

(answers on next slide)

assignments/02kubectlexpose.md

289/1692

Answers

  1. We need to create a Service for our deployment, which will have a ClusterIP that is usable from within the cluster. One way is with kubectl expose deployment littletomcat --port=8080 (The Tomcat image is listening on port 8080 according to Docker Hub). Another way is with kubectl create service clusterip littletomcat --tcp 8080

  2. In the shpod environment that we started earlier:

    # Install curl
    apk add curl
    # Make a request to the littletomcat service (in a different namespace)
    curl http://littletomcat.default:8080

    Note that shpod runs in the shpod namespace, so to find a DNS name of a different namespace in the same cluster, you should use <hostname>.<namespace> syntax. That was a little advanced, so A+ if you got it on the first try!

  3. Yes. If we delete the pod, another will be created to replace it. The ClusterIP will still work.

    (Except during a short period while the replacement container is being started.)

assignments/02kubectlexpose.md

290/1692

Image separating from the next chapter

291/1692

Our sample application

(automatically generated title slide)

292/1692

What's this application?

293/1692

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢
294/1692

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoins

295/1692

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoins

  • How DockerCoins works:

    • generate a few random bytes

    • hash these bytes

    • increment a counter (to keep track of speed)

    • repeat forever!

296/1692

What's this application?

  • It is a DockerCoin miner! 💰🐳📦🚢

  • No, you can't buy coffee with DockerCoins

  • How DockerCoins works:

    • generate a few random bytes

    • hash these bytes

    • increment a counter (to keep track of speed)

    • repeat forever!

  • DockerCoins is not a cryptocurrency

    (the only common points are "randomness," "hashing," and "coins" in the name)

shared/sampleapp.md

297/1692

DockerCoins in the microservices era

  • DockerCoins is made of 5 services:

    • rng = web service generating random bytes

    • hasher = web service computing hash of POSTed data

    • worker = background process calling rng and hasher

    • webui = web interface to watch progress

    • redis = data store (holds a counter updated by worker)

  • These 5 services are visible in the application's Compose file, dockercoins-compose.yml

shared/sampleapp.md

298/1692

How DockerCoins works

  • worker invokes web service rng to generate random bytes

  • worker invokes web service hasher to hash these bytes

  • worker does this in an infinite loop

  • Every second, worker updates redis to indicate how many loops were done

  • webui queries redis, and computes and exposes "hashing speed" in our browser

(See diagram on next slide!)

shared/sampleapp.md

299/1692

Service discovery in container-land

How does each service find out the address of the other ones?

301/1692

Service discovery in container-land

How does each service find out the address of the other ones?

  • We do not hard-code IP addresses in the code

  • We do not hard-code FQDNs in the code, either

  • We just connect to a service name, and container-magic does the rest

    (And by container-magic, we mean "a crafty, dynamic, embedded DNS server")

shared/sampleapp.md

302/1692

Example in worker/worker.py

redis = Redis("redis")

def get_random_bytes():
    r = requests.get("http://rng/32")
    return r.content

def hash_bytes(data):
    r = requests.post("http://hasher/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})

(Full source code available here)

shared/sampleapp.md

303/1692

DockerCoins at work

  • worker will log HTTP requests to rng and hasher

  • rng and hasher will log incoming HTTP requests

  • webui will give us a graph on coins mined per second

shared/sampleapp.md

304/1692

Check out the app in Docker Compose

  • Compose is (still) great for local development

  • You can test this app if you have Docker and Compose installed

  • If not, remember play-with-docker.com

  • Download the compose file somewhere and run it
    curl -o docker-compose.yml https://k8smastery.com/dockercoins-compose.yml
    docker-compose up
305/1692

Why does the speed seem irregular?

  • It looks like the speed is approximately 4 hashes/second

  • Or more precisely: 4 hashes/second, with regular dips down to zero

  • Why?

306/1692

Why does the speed seem irregular?

  • It looks like the speed is approximately 4 hashes/second

  • Or more precisely: 4 hashes/second, with regular dips down to zero

  • Why?

  • The app actually has a constant, steady speed: 3.33 hashes/second
    (which corresponds to 1 hash every 0.3 seconds, for reasons)

  • Yes, and?

shared/sampleapp.md

307/1692

The reason why this graph is not awesome

  • The worker doesn't update the counter after every loop, but up to once per second

  • The speed is computed by the browser, checking the counter about once per second

  • Between two consecutive updates, the counter will increase either by 4, or by 0

  • The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.

  • What can we conclude from this?

308/1692

The reason why this graph is not awesome

  • The worker doesn't update the counter after every loop, but up to once per second

  • The speed is computed by the browser, checking the counter about once per second

  • Between two consecutive updates, the counter will increase either by 4, or by 0

  • The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.

  • What can we conclude from this?

  • "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme

shared/sampleapp.md

309/1692

Stopping the application

  • If we interrupt Compose (with ^C), it will politely ask the Docker Engine to stop the app

  • The Docker Engine will send a TERM signal to the containers

  • If the containers do not exit in a timely manner, the Engine sends a KILL signal

  • Stop the application by hitting ^C
310/1692

Stopping the application

  • If we interrupt Compose (with ^C), it will politely ask the Docker Engine to stop the app

  • The Docker Engine will send a TERM signal to the containers

  • If the containers do not exit in a timely manner, the Engine sends a KILL signal

  • Stop the application by hitting ^C

Some containers exit immediately, others take longer.

The containers that do not handle SIGTERM end up being killed after a 10s timeout. If we are very impatient, we can hit ^C a second time!

shared/sampleapp.md

311/1692

Clean up

  • Before moving on, let's remove those containers

  • Or if using PWD for compose, just hit "close session" button

  • Tell Compose to remove everything:
    docker-compose down

shared/composedown.md

312/1692

Image separating from the next chapter

313/1692

Shipping images with a registry

(automatically generated title slide)

314/1692

Shipping images with a registry

  • For development, Docker gives us build, ship, and run features

  • Now that we want to run on a cluster, things are different

  • Kubernetes doesn't have a build feature built-in

  • The way to ship (pull) images to Kubernetes is to use a registry

k8s/shippingimages.md

315/1692

How Docker registries work (a reminder)

  • What happens when we execute docker run alpine ?

  • If the Engine needs to pull the alpine image, it expands it into library/alpine

  • library/alpine is expanded into index.docker.io/library/alpine

  • The Engine communicates with index.docker.io to retrieve library/alpine:latest

  • To use something else than index.docker.io, we specify it in the image name

  • Examples:

    docker pull gcr.io/google-containers/alpine-with-bash:1.0
    docker build -t registry.mycompany.io:5000/myimage:awesome .
    docker push registry.mycompany.io:5000/myimage:awesome

k8s/shippingimages.md

316/1692

Building and shipping images

  • There are many options!

  • Manually:

    • build locally (with docker build or otherwise)

    • push to the registry

  • Automatically:

    • build and test locally

    • when ready, commit and push a code repository

    • the code repository notifies an automated build system

    • that system gets the code, builds it, pushes the image to the registry
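  • A minimal sketch of the manual flow (the registry address and image name below are placeholders, reusing the example registry from the previous slide):

    docker build -t registry.mycompany.io:5000/worker:v0.1 .
    docker push registry.mycompany.io:5000/worker:v0.1
    kubectl create deployment worker --image=registry.mycompany.io:5000/worker:v0.1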

k8s/shippingimages.md

317/1692

Which registry do we want to use?

  • There are SAAS products like Docker Hub, Quay, GitLab ...

  • Each major cloud provider has an option as well

    (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

318/1692

Which registry do we want to use?

  • There are SAAS products like Docker Hub, Quay, GitLab ...

  • Each major cloud provider has an option as well

    (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

  • There are also commercial products to run our own registry

    (Docker Enterprise DTR, Quay, GitLab, JFrog Artifactory...)

319/1692

Which registry do we want to use?

  • There are SAAS products like Docker Hub, Quay, GitLab ...

  • Each major cloud provider has an option as well

    (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

  • There are also commercial products to run our own registry

    (Docker Enterprise DTR, Quay, GitLab, JFrog Artifactory...)

  • And open source options, too!

    (Quay, Portus, OpenShift OCR, GitLab, Harbor, Kraken...)

    (I don't mention Docker Distribution here because it's too basic)

320/1692

Which registry do we want to use?

  • There are SAAS products like Docker Hub, Quay, GitLab ...

  • Each major cloud provider has an option as well

    (ACR on Azure, ECR on AWS, GCR on Google Cloud...)

  • There are also commercial products to run our own registry

    (Docker Enterprise DTR, Quay, GitLab, JFrog Artifactory...)

  • And open source options, too!

    (Quay, Portus, OpenShift OCR, GitLab, Harbor, Kraken...)

    (I don't mention Docker Distribution here because it's too basic)

  • When picking a registry, pay attention to:

    • Its build system
    • Multi-user auth and mgmt (RBAC)
    • Storage features (replication, caching, garbage collection)

k8s/shippingimages.md

321/1692

Running DockerCoins on Kubernetes

  • Create one deployment for each component

    (hasher, redis, rng, webui, worker)

  • Expose deployments that need to accept connections

    (hasher, redis, rng, webui)

  • For redis, we can use the official redis image

  • For the 4 others, we need to build images and push them to some registry

k8s/shippingimages.md

322/1692

Using images from the Docker Hub

  • For everyone's convenience, we took care of building DockerCoins images

  • We pushed these images to the DockerHub, under the dockercoins user

  • These images are tagged with a version number, v0.1

  • The full image names are therefore:

    • dockercoins/hasher:v0.1

    • dockercoins/rng:v0.1

    • dockercoins/webui:v0.1

    • dockercoins/worker:v0.1

k8s/buildshiprun-dockerhub.md

323/1692

Image separating from the next chapter

324/1692

Running DockerCoins on Kubernetes

(automatically generated title slide)

325/1692

Running DockerCoins on Kubernetes

  • We can now deploy our code (as well as a redis instance)
  • Deploy redis:

    kubectl create deployment redis --image=redis
  • Deploy everything else:

    kubectl create deployment hasher --image=dockercoins/hasher:v0.1
    kubectl create deployment rng --image=dockercoins/rng:v0.1
    kubectl create deployment webui --image=dockercoins/webui:v0.1
    kubectl create deployment worker --image=dockercoins/worker:v0.1

k8s/ourapponkube.md

326/1692

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker
327/1692

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker

🤔 rng is fine ... But not worker.

328/1692

Is this working?

  • After waiting for the deployment to complete, let's look at the logs!

    (Hint: use kubectl get deploy -w to watch deployment events)

  • Look at some logs:
    kubectl logs deploy/rng
    kubectl logs deploy/worker

🤔 rng is fine ... But not worker.

💡 Oh right! We forgot to expose.

k8s/ourapponkube.md

329/1692

Connecting containers together

  • Three deployments need to be reachable by others: hasher, redis, rng

  • worker doesn't need to be exposed

  • webui will be dealt with later

  • Expose each deployment, specifying the right port:
    kubectl expose deployment redis --port 6379
    kubectl expose deployment rng --port 80
    kubectl expose deployment hasher --port 80

k8s/ourapponkube.md

330/1692

Is this working yet?

  • The worker has an infinite loop that retries 10 seconds after an error
  • Stream the worker's logs:

    kubectl logs deploy/worker --follow

    (Give it about 10 seconds to recover)

331/1692

Is this working yet?

  • The worker has an infinite loop that retries 10 seconds after an error
  • Stream the worker's logs:

    kubectl logs deploy/worker --follow

    (Give it about 10 seconds to recover)

We should now see the worker, well, working happily.

k8s/ourapponkube.md

332/1692

Exposing services for external access

  • Now we would like to access the Web UI

  • We will expose it with a NodePort

    (just like we did for the registry)

  • Create a NodePort service for the Web UI:

    kubectl expose deploy/webui --type=NodePort --port=80
  • Check the port that was allocated:

    kubectl get svc

k8s/ourapponkube.md

333/1692

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI
334/1692

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI

Yes, this may take a little while to update. (Narrator: it was DNS.)

335/1692

Accessing the web UI

  • We can now connect to any node, on the allocated node port, to view the web UI

Yes, this may take a little while to update. (Narrator: it was DNS.)

Alright, we're back to where we started, when we were running on a single node!

k8s/ourapponkube.md

336/1692

Image separating from the next chapter

337/1692

Assignment 3: deploy wordsmith

(automatically generated title slide)

338/1692

Assignment 3: deploy wordsmith

  • Let's deploy another application called wordsmith

  • Wordsmith has 3 components:

    • a web frontend: bretfisher/wordsmith-web

    • an API backend: bretfisher/wordsmith-words (NOTE: won't run on Raspberry Pi's arm/v7 yet; see the GH Issue)

    • a postgres database: bretfisher/wordsmith-db

  • We have built images for these components, and pushed them on the Docker Hub

  • We want to deploy all 3 components on Kubernetes

  • We want to be able to connect to the web frontend with our browser

assignments/03deploywordsmith.md

339/1692

Wordsmith details

  • Here are all the network flows in the app:

    • the web frontend listens on port 80

    • the web frontend connects to the API at the address http://words:8080

    • the API backend listens on port 8080

    • the API connects to the database with the connection string pgsql://db:5432

    • the database listens on port 5432

assignments/03deploywordsmith.md

340/1692

Winning conditions

  • After deploying and connecting everything together, open the web frontend

  • This is what we should see:

    Screen capture of the wordsmith app, with lego bricks showing the text "The nørdic whale smokes the nørdic whale"

    (You will probably see a different sentence, though.)

  • If you see empty LEGO bricks, something's wrong ...

assignments/03deploywordsmith.md

341/1692

Scaling things up

  • If we reload that page, we get the same sentence

  • And that sentence repeats the same adjective and noun anyway

  • Can we do better?

  • Yes, if we scale up the API backend!

  • Try to scale up the API backend and see what happens

Wondering what this app is all about?
It was a demo app showcased at DockerCon

assignments/03deploywordsmith.md

342/1692

Answers

First, we need to create deployments for all three components:

kubectl create deployment db --image=bretfisher/wordsmith-db
kubectl create deployment web --image=bretfisher/wordsmith-web
kubectl create deployment words --image=bretfisher/wordsmith-words

Note: we need to use these exact names, because these names will be used for the services that we will create, and for their DNS entries as well. To put it differently: if our code connects to words, then the service should be named words, and the deployment should also be named words (unless we want to write our own service YAML manifest by hand; but we won't do that yet).

assignments/03deploywordsmith.md

343/1692

Answers

Then, we need to create the services for these deployments:

kubectl expose deployment db --port=5432
kubectl expose deployment web --port=80 --type=NodePort
kubectl expose deployment words --port=8080

or

kubectl create service clusterip db --tcp=5432
kubectl create service nodeport web --tcp=80
kubectl create service clusterip words --tcp=8080

Find out the node port allocated to web: kubectl get service web

Open it in your browser. If you hit "reload", you always see the same sentence.

assignments/03deploywordsmith.md

344/1692

Answers

Finally, scale up the API for more words on refresh:

kubectl scale deployment words --replicas=5

If you hit "reload", you should now see different sentences each time.

assignments/03deploywordsmith.md

345/1692

Image separating from the next chapter

346/1692

Scaling our demo app

(automatically generated title slide)

347/1692

Scaling our demo app

  • Our ultimate goal is to get more DockerCoins

    (i.e. increase the number of loops per second shown on the web UI)

  • Let's look at the architecture again:

    DockerCoins architecture

348/1692

Scaling our demo app

  • Our ultimate goal is to get more DockerCoins

    (i.e. increase the number of loops per second shown on the web UI)

  • Let's look at the architecture again:

    DockerCoins architecture

  • We're at 4 hashes a second. Let's ramp this up!

  • The loop is done in the worker; perhaps we could try adding more workers?

k8s/scalingdockercoins.md

349/1692

Adding another worker

  • All we have to do is scale the worker Deployment
  • Open a new terminal to keep an eye on our pods:
    kubectl get pods -w
  • Now, create more worker replicas:
    kubectl scale deployment worker --replicas=2
350/1692

Adding another worker

  • All we have to do is scale the worker Deployment
  • Open a new terminal to keep an eye on our pods:
    kubectl get pods -w
  • Now, create more worker replicas:
    kubectl scale deployment worker --replicas=2

After a few seconds, the graph in the web UI should show up.

k8s/scalingdockercoins.md

351/1692

Adding more workers

  • If 2 workers give us 2x speed, what about 3 workers?
  • Scale the worker Deployment further:
    kubectl scale deployment worker --replicas=3
352/1692

Adding more workers

  • If 2 workers give us 2x speed, what about 3 workers?
  • Scale the worker Deployment further:
    kubectl scale deployment worker --replicas=3

The graph in the web UI should go up again.

(This is looking great! We're gonna be RICH!)

k8s/scalingdockercoins.md

353/1692

Adding even more workers

  • Let's see if 10 workers give us 10x speed!
  • Scale the worker Deployment to a bigger number:
    kubectl scale deployment worker --replicas=10
354/1692

Adding even more workers

  • Let's see if 10 workers give us 10x speed!
  • Scale the worker Deployment to a bigger number:
    kubectl scale deployment worker --replicas=10

The graph will peak at 10-12 hashes/second.

(We can add as many workers as we want: we will never go past 10-12 hashes/second.)

k8s/scalingdockercoins.md

355/1692

Didn't we briefly exceed 10 hashes/second?

  • It may look like it, because the web UI shows instant speed

  • The instant speed can briefly exceed 10 hashes/second

  • The average speed cannot

  • The instant speed can be biased because of how it's computed

k8s/scalingdockercoins.md

356/1692

Why are we stuck at 10-12 hashes per second?

  • If this was high-quality, production code, we would have instrumentation

    (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)

  • It's not!

  • Perhaps we could benchmark our web services?

    (with tools like ab, or even simpler, httping)

k8s/scalingdockercoins.md

357/1692

Benchmarking our web services

  • We want to check hasher and rng

  • We are going to use httping

  • It's just like ping, but using HTTP GET requests

    (it measures how long it takes to perform one GET request)

  • It's used like this:

    httping [-c count] http://host:port/path
  • Or even simpler:

    httping ip.ad.dr.ess
  • We will use httping on the ClusterIP addresses of our services

k8s/scalingdockercoins.md

358/1692

Obtaining ClusterIP addresses

  • We can simply check the output of kubectl get services

  • Or do it programmatically, as in the example below

  • Retrieve the IP addresses:
    HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}})
    RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}})

Now we can access the IP addresses of our services through $HASHER and $RNG.

k8s/scalingdockercoins.md

359/1692

Checking hasher and rng response times

  • Remember to use shpod on macOS and Windows:

    kubectl attach --namespace=shpod -ti shpod
  • Check the response times for both services:

    httping -c 3 $HASHER
    httping -c 3 $RNG
360/1692

Checking hasher and rng response times

  • Remember to use shpod on macOS and Windows:

    kubectl attach --namespace=shpod -ti shpod
  • Check the response times for both services:

    httping -c 3 $HASHER
    httping -c 3 $RNG
  • hasher is fine (it should take a few milliseconds to reply)

  • rng is not (it should take about 700 milliseconds if there are 10 workers)

  • Something is wrong with rng, but ... what?

k8s/scalingdockercoins.md

361/1692

Let's draw hasty conclusions

  • The bottleneck seems to be rng

  • What if we don't have enough entropy and can't generate enough random numbers?

  • We need to scale out the rng service on multiple machines!

Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.

(In fact, the code of rng uses /dev/urandom, which never runs out of entropy...
...and is just as good as /dev/random.)

362/1692

Let's draw hasty conclusions

  • The bottleneck seems to be rng

  • What if we don't have enough entropy and can't generate enough random numbers?

  • We need to scale out the rng service on multiple machines!

Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.

(In fact, the code of rng uses /dev/urandom, which never runs out of entropy...
...and is just as good as /dev/random.)

  • Oops we only have one node for learning. 🤔
363/1692

Let's draw hasty conclusions

  • The bottleneck seems to be rng

  • What if we don't have enough entropy and can't generate enough random numbers?

  • We need to scale out the rng service on multiple machines!

Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.

(In fact, the code of rng uses /dev/urandom, which never runs out of entropy...
...and is just as good as /dev/random.)

  • Oops we only have one node for learning. 🤔

  • Let's pretend and I'll explain along the way

shared/hastyconclusions.md

364/1692

Image separating from the next chapter

365/1692

Deploying with YAML

(automatically generated title slide)

366/1692

Deploying with YAML

  • So far, we created resources with the following commands:

    • kubectl run

    • kubectl create deployment

    • kubectl expose

  • We can also create resources directly with YAML manifests

k8s/yamldeploy.md

367/1692

kubectl apply vs create

  • kubectl create -f whatever.yaml

    • creates resources if they don't exist

    • if resources already exist, don't alter them
      (and display an error message)

  • kubectl apply -f whatever.yaml

    • creates resources if they don't exist

    • if resources already exist, update them
      (to match the definition provided by the YAML file)

    • stores the manifest as an annotation in the resource
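  • A minimal sketch of the difference (web.yaml is a hypothetical manifest file):

    kubectl create -f web.yaml   # first run: resources are created
    kubectl create -f web.yaml   # second run: "AlreadyExists" errors
    kubectl apply -f web.yaml    # creates or updates resources to match the file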

k8s/yamldeploy.md

368/1692

Creating multiple resources

  • The manifest can contain multiple resources separated by ---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...
spec:
  ...
---
kind: ...
apiVersion: ...
metadata:
  name: ...
  ...
spec:
  ...
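
A concrete, minimal sketch of such a manifest, reusing names from the httpenv exercise (this is an illustration, not the exact manifest used elsewhere in this course):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpenv
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpenv
  template:
    metadata:
      labels:
        app: httpenv
    spec:
      containers:
      - name: httpenv
        image: bretfisher/httpenv
---
apiVersion: v1
kind: Service
metadata:
  name: httpenv
spec:
  selector:
    app: httpenv
  ports:
  - port: 8888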

k8s/yamldeploy.md

369/1692

Creating multiple resources

  • The manifest can also contain a list of resources
apiVersion: v1
kind: List
items:
- kind: ...
  apiVersion: ...
  ...
- kind: ...
  apiVersion: ...
  ...

k8s/yamldeploy.md

370/1692

Deploying DockerCoins with YAML

  • Here's a YAML manifest with all the resources for DockerCoins

    (Deployments and Services)

  • We can use it if we need to deploy or redeploy DockerCoins

  • Yes, commands that take a YAML file can use URLs!

  • Deploy or redeploy DockerCoins:
    kubectl apply -f https://k8smastery.com/dockercoins.yaml

k8s/yamldeploy.md

371/1692

Apply errors for create or run resources

  • Note the warnings if you already had the resources created

  • This is because we didn't use apply before

  • This is OK for us learning, so ignore the warnings

  • Generally in production you want to stick with one method or the other

k8s/yamldeploy.md

372/1692

Deleting resources

  • We can also use a YAML file to delete resources

  • kubectl delete -f ... will delete all the resources mentioned in a YAML file

    (useful to clean up everything that was created by kubectl apply -f ...)

  • The definitions of the resources don't matter

    (just their kind, apiVersion, and name)
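  • For example, to tear down everything created from the DockerCoins manifest we applied earlier:

    kubectl delete -f https://k8smastery.com/dockercoins.yaml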

k8s/yamldeploy.md

373/1692

Pruning¹ resources

  • We can also tell kubectl to remove old resources

  • This is done with kubectl apply -f ... --prune

  • It will remove resources that don't exist in the YAML file(s)

  • But only if they were created with kubectl apply in the first place

    (technically, if they have an annotation kubectl.kubernetes.io/last-applied-configuration)

¹If English is not your first language: to prune means to remove dead or overgrown branches in a tree, to help it to grow.
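
A sketch of how this could be used (the ./manifests/ directory and the app=myapp label are hypothetical; --prune needs a label selector such as -l ..., or --all):

    kubectl apply -f ./manifests/ --prune -l app=myapp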

k8s/yamldeploy.md

374/1692

YAML as source of truth

  • Imagine the following workflow:

    • do not use kubectl run, kubectl create deployment, kubectl expose ...

    • define everything with YAML

    • apply that YAML with kubectl apply -f ... --prune --all

    • keep that YAML under version control

    • enforce all changes to go through that YAML (e.g. with pull requests)

  • Our version control system now has a full history of what we deploy

  • This compares to "Infrastructure-as-Code", but for app deployments

k8s/yamldeploy.md

375/1692

Specifying the namespace

  • When creating resources from YAML manifests, the namespace is optional

  • If we specify a namespace:

    • resources are created in the specified namespace

    • this is typical for things deployed only once per cluster

    • example: system components, cluster add-ons ...

  • If we don't specify a namespace:

    • resources are created in the current namespace

    • this is typical for things that may be deployed multiple times

    • example: applications (production, staging, feature branches ...)
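  • For example (assuming the manifest doesn't hard-code a namespace, and that we create the target namespace first):

    kubectl create namespace staging
    kubectl apply -f https://k8smastery.com/dockercoins.yaml --namespace=staging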

k8s/yamldeploy.md

376/1692

Image separating from the next chapter

377/1692

The Kubernetes Dashboard

(automatically generated title slide)

378/1692

The Kubernetes Dashboard

  • Kubernetes resources can also be viewed with an official web UI

  • That dashboard is usually exposed over HTTPS

    (this requires obtaining a proper TLS certificate)

  • Dashboard users need to authenticate

  • We are going to take a dangerous shortcut

k8s/dashboard.md

379/1692

The insecure method

  • We could (and should) use Let's Encrypt ...

  • ... but we don't want to deal with TLS certificates

  • We could (and should) learn how authentication and authorization work ...

  • ... but we will use a guest account with admin access instead

Yes, this will open our cluster to all kinds of shenanigans. Don't do this at home.

k8s/dashboard.md

380/1692

Running a very insecure dashboard

  • We are going to deploy that dashboard with one single command

  • This command will create all the necessary resources

    (the dashboard itself, the HTTP wrapper, the admin/guest account)

  • All these resources are defined in a YAML file

  • All we have to do is load that YAML file with kubectl apply -f

  • Create all the dashboard resources, with the following command:
    kubectl apply -f https://k8smastery.com/insecure-dashboard.yaml

k8s/dashboard.md

381/1692

Connecting to the dashboard

  • Check which port the dashboard is on:
    kubectl get svc dashboard

You'll want the 3xxxx port.

The dashboard will then ask you which authentication you want to use.

k8s/dashboard.md

382/1692

Dashboard authentication

  • We have three authentication options at this point:

    • token (associated with a role that has appropriate permissions)

    • kubeconfig (e.g. using the ~/.kube/config file)

    • "skip" (use the dashboard "service account")

  • Let's use "skip": we're logged in!

383/1692

Dashboard authentication

  • We have three authentication options at this point:

    • token (associated with a role that has appropriate permissions)

    • kubeconfig (e.g. using the ~/.kube/config file)

    • "skip" (use the dashboard "service account")

  • Let's use "skip": we're logged in!

By the way, we just added a backdoor to our Kubernetes cluster!

k8s/dashboard.md

384/1692

Running the Kubernetes Dashboard securely

385/1692

Running the Kubernetes Dashboard securely

386/1692

Other dashboards

387/1692

Other dashboards

  • Kube Web View

    • read-only dashboard

    • optimized for "troubleshooting and incident response"

    • see vision and goals for details

  • Kube Ops View

    • "provides a common operational picture for multiple Kubernetes clusters"
  • Your Kubernetes distro comes with one!

388/1692

Other dashboards

  • Kube Web View

    • read-only dashboard

    • optimized for "troubleshooting and incident response"

    • see vision and goals for details

  • Kube Ops View

    • "provides a common operational picture for multiple Kubernetes clusters"
  • Your Kubernetes distro comes with one!

  • Cloud-provided control-planes often don't come with one

k8s/dashboard.md

389/1692

Image separating from the next chapter

390/1692

Security implications of kubectl apply

(automatically generated title slide)

391/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

392/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster
393/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster

    • hides in a non-default namespace

394/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster

    • hides in a non-default namespace

    • bind-mounts our nodes' filesystem

395/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster

    • hides in a non-default namespace

    • bind-mounts our nodes' filesystem

    • inserts SSH keys in the root account (on the node)

396/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster

    • hides in a non-default namespace

    • bind-mounts our nodes' filesystem

    • inserts SSH keys in the root account (on the node)

    • encrypts our data and ransoms it

397/1692

Security implications of kubectl apply

  • When we do kubectl apply -f <URL>, we create arbitrary resources

  • Resources can be evil; imagine a deployment that ...

    • starts bitcoin miners on the whole cluster

    • hides in a non-default namespace

    • bind-mounts our nodes' filesystem

    • inserts SSH keys in the root account (on the node)

    • encrypts our data and ransoms it

    • ☠️☠️☠️

k8s/dashboard.md

398/1692

kubectl apply is the new curl | sh

  • curl | sh is convenient

  • It's safe if you use HTTPS URLs from trusted sources

399/1692

kubectl apply is the new curl | sh

  • curl | sh is convenient

  • It's safe if you use HTTPS URLs from trusted sources

  • kubectl apply -f is convenient

  • It's safe if you use HTTPS URLs from trusted sources

  • Example: the official setup instructions for most pod networks

400/1692

kubectl apply is the new curl | sh

  • curl | sh is convenient

  • It's safe if you use HTTPS URLs from trusted sources

  • kubectl apply -f is convenient

  • It's safe if you use HTTPS URLs from trusted sources

  • Example: the official setup instructions for most pod networks

  • It introduces new failure modes

    (for instance, if you try to apply YAML from a link that's no longer valid)
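  • One simple habit (not a complete defense) is to download and review the YAML before applying it:

    curl -fsSL https://k8smastery.com/dockercoins.yaml -o /tmp/dockercoins.yaml
    less /tmp/dockercoins.yaml       # review what we're about to create
    kubectl apply -f /tmp/dockercoins.yaml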

k8s/dashboard.md

401/1692

Image separating from the next chapter

402/1692

Daemon sets

(automatically generated title slide)

403/1692

Daemon sets

  • We want to scale rng in a way that is different from how we scaled worker

  • We want one (and exactly one) instance of rng per node

  • We do not want two instances of rng on the same node

  • We will do that with a daemon set

k8s/daemonset.md

404/1692

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?
405/1692

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?

  • Nothing guarantees that the rng containers will be distributed evenly

406/1692

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?

  • Nothing guarantees that the rng containers will be distributed evenly

  • If we add nodes later, they will not automatically run a copy of rng

407/1692

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?

  • Nothing guarantees that the rng containers will be distributed evenly

  • If we add nodes later, they will not automatically run a copy of rng

  • If we remove (or reboot) a node, one rng container will restart elsewhere

    (and we will end up with two instances of rng on the same node)

408/1692

Why not a deployment?

  • Can't we just do kubectl scale deployment rng --replicas=...?

  • Nothing guarantees that the rng containers will be distributed evenly

  • If we add nodes later, they will not automatically run a copy of rng

  • If we remove (or reboot) a node, one rng container will restart elsewhere

    (and we will end up with two instances of rng on the same node)

  • By contrast, a daemon set will start one pod per node and keep it that way

    (as nodes are added or removed)

k8s/daemonset.md

409/1692

Daemon sets in practice

  • Daemon sets are great for cluster-wide, per-node processes:

    • kube-proxy

    • CNI network plugins

    • monitoring agents

    • hardware management tools (e.g. SCSI/FC HBA agents)

    • etc.

  • They can also be restricted to run only on some nodes

k8s/daemonset.md

410/1692

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets
411/1692

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

412/1692

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
413/1692

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
  • How do we create the YAML file for our daemon set?
414/1692

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
  • How do we create the YAML file for our daemon set?

415/1692

Creating a daemon set

  • Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets

  • More precisely: it doesn't have a subcommand to create a daemon set

  • But any kind of resource can always be created by providing a YAML description:

    kubectl apply -f foo.yaml
  • How do we create the YAML file for our daemon set?

k8s/daemonset.md

416/1692

Creating the YAML file for our daemon set

  • Let's start with the YAML file for the current rng resource
  • Dump the rng resource in YAML:

    kubectl get deploy/rng -o yaml >rng.yml
  • Edit rng.yml

k8s/daemonset.md

417/1692

"Casting" a resource to another

  • What if we just changed the kind field?

    (It can't be that easy, right?)

  • Change kind: Deployment to kind: DaemonSet
  • Save, quit

  • Try to create our new resource:

    kubectl apply -f rng.yml
418/1692

"Casting" a resource to another

  • What if we just changed the kind field?

    (It can't be that easy, right?)

  • Change kind: Deployment to kind: DaemonSet
  • Save, quit

  • Try to create our new resource:

    kubectl apply -f rng.yml

We all knew this couldn't be that easy, right?

k8s/daemonset.md

419/1692

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
420/1692

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
  • Obviously, it doesn't make sense to specify a number of replicas for a daemon set
421/1692

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
  • Obviously, it doesn't make sense to specify a number of replicas for a daemon set

  • Workaround: fix the YAML

    • remove the replicas field
    • remove the strategy field (which defines the rollout mechanism for a deployment)
    • remove the progressDeadlineSeconds field (also used by the rollout mechanism)
    • remove the status: {} line at the end
422/1692

Understanding the problem

  • The core of the error is:
    error validating data:
    [ValidationError(DaemonSet.spec):
    unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec,
    ...
  • Obviously, it doesn't make sense to specify a number of replicas for a daemon set

  • Workaround: fix the YAML

    • remove the replicas field
    • remove the strategy field (which defines the rollout mechanism for a deployment)
    • remove the progressDeadlineSeconds field (also used by the rollout mechanism)
    • remove the status: {} line at the end
  • Or, we could also ...

k8s/daemonset.md

423/1692
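
For reference, here is a rough sketch of what rng.yml could look like once cleaned up (your dump will contain more fields, and the apiVersion may differ depending on your cluster version; the labels and image are the ones used by the DockerCoins rng deployment):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      app: rng
  template:
    metadata:
      labels:
        app: rng
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1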

Use the --force, Luke

  • We could also tell Kubernetes to ignore these errors and try anyway

  • The --force flag's actual name is --validate=false

  • Try to load our YAML file and ignore errors:
    kubectl apply -f rng.yml --validate=false
424/1692

Use the --force, Luke

  • We could also tell Kubernetes to ignore these errors and try anyway

  • The --force flag's actual name is --validate=false

  • Try to load our YAML file and ignore errors:
    kubectl apply -f rng.yml --validate=false

🎩✨🐇

425/1692

Use the --force, Luke

  • We could also tell Kubernetes to ignore these errors and try anyway

  • The --force flag's actual name is --validate=false

  • Try to load our YAML file and ignore errors:
    kubectl apply -f rng.yml --validate=false

🎩✨🐇

Wait ... Now, can it be that easy?

k8s/daemonset.md

426/1692

Checking what we've done

  • Did we transform our deployment into a daemonset?
  • Look at the resources that we have now:
    kubectl get all
427/1692

Checking what we've done

  • Did we transform our deployment into a daemonset?
  • Look at the resources that we have now:
    kubectl get all

We have two resources called rng:

  • the deployment that was existing before

  • the daemon set that we just created

We also have one too many pods.
(The pod corresponding to the deployment still exists.)

k8s/daemonset.md

428/1692

deploy/rng and ds/rng

  • You can have different resource types with the same name

    (i.e. a deployment and a daemon set both named rng)

  • We still have the old rng deployment

    NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
    deployment.apps/rng 1 1 1 1 18m
  • But now we have the new rng daemon set as well

    NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
    daemonset.apps/rng 2 2 2 2 2 <none> 9s

k8s/daemonset.md

429/1692

Too many pods

  • If we check with kubectl get pods, we see:

    • one pod for the deployment (named rng-xxxxxxxxxx-yyyyy)

    • one pod per node for the daemon set (named rng-zzzzz)

    NAME READY STATUS RESTARTS AGE
    rng-54f57d4d49-7pt82 1/1 Running 0 11m
    rng-b85tm 1/1 Running 0 25s
    rng-hfbrr 1/1 Running 0 25s
    [...]
430/1692

Too many pods

  • If we check with kubectl get pods, we see:

    • one pod for the deployment (named rng-xxxxxxxxxx-yyyyy)

    • one pod per node for the daemon set (named rng-zzzzz)

    NAME READY STATUS RESTARTS AGE
    rng-54f57d4d49-7pt82 1/1 Running 0 11m
    rng-b85tm 1/1 Running 0 25s
    rng-hfbrr 1/1 Running 0 25s
    [...]

The daemon set created one pod per node.

In a multi-node setup, masters usually have taints preventing pods from running there.

(To schedule a pod on this node anyway, the pod will require appropriate tolerations.)

k8s/daemonset.md

431/1692

Is this working?

  • Look at the web UI
432/1692

Is this working?

  • Look at the web UI

  • The graph should now go above 10 hashes per second!

433/1692

Is this working?

  • Look at the web UI

  • The graph should now go above 10 hashes per second!

  • It looks like the newly created pods are serving traffic correctly

  • How and why did this happen?

    (We didn't do anything special to add them to the rng service load balancer!)

k8s/daemonset.md

434/1692

Image separating from the next chapter

435/1692

Labels and selectors

(automatically generated title slide)

436/1692

Labels and selectors

  • The rng service is load balancing requests to a set of pods

  • That set of pods is defined by the selector of the rng service

  • Check the selector in the rng service definition:
    kubectl describe service rng
  • The selector is app=rng

  • It means "all the pods having the label app=rng"

    (They can have additional labels as well, that's OK!)

k8s/daemonset.md

437/1692
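
Stripped down to the parts that matter here, the rng service manifest looks roughly like this (a sketch; kubectl get service rng -o yaml shows many more fields, and the port is the one used by the rng container later in this workshop):

apiVersion: v1
kind: Service
metadata:
  name: rng
spec:
  ports:
  - port: 80
  selector:
    app: rng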

Selector evaluation

  • We can use selectors with many kubectl commands

  • For instance, with kubectl get, kubectl logs, kubectl delete ... and more

  • Get the list of pods matching selector app=rng:
    kubectl get pods -l app=rng
    kubectl get pods --selector app=rng

But ... why do these pods (in particular, the new ones) have this app=rng label?

k8s/daemonset.md

438/1692

Where do labels come from?

  • When we create a deployment with kubectl create deployment rng,
    this deployment gets the label app=rng

  • The replica sets created by this deployment also get the label app=rng

  • The pods created by these replica sets also get the label app=rng

  • When we created the daemon set from the deployment, we re-used the same spec

  • Therefore, the pods created by the daemon set get the same labels

  • When we use kubectl run stuff, the label is run=stuff instead

k8s/daemonset.md

439/1692

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

440/1692

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

441/1692

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

  • What would happen if we removed the app=rng label from that pod?

442/1692

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

  • What would happen if we removed the app=rng label from that pod?

    It would also be re-created immediately

443/1692

Updating load balancer configuration

  • We would like to remove a pod from the load balancer

  • What would happen if we removed that pod, with kubectl delete pod ...?

    It would be re-created immediately (by the replica set or the daemon set)

  • What would happen if we removed the app=rng label from that pod?

    It would also be re-created immediately

    Why?!?

k8s/daemonset.md

444/1692

Selectors for replica sets and daemon sets

  • The "mission" of a replica set is:

    "Make sure that there is the right number of pods matching this spec!"

  • The "mission" of a daemon set is:

    "Make sure that there is a pod matching this spec on each node!"

445/1692

Selectors for replica sets and daemon sets

  • The "mission" of a replica set is:

    "Make sure that there is the right number of pods matching this spec!"

  • The "mission" of a daemon set is:

    "Make sure that there is a pod matching this spec on each node!"

  • In fact, replica sets and daemon sets do not check pod specifications

  • They merely have a selector, and they look for pods matching that selector

  • Yes, we can fool them by manually creating pods with the "right" labels

  • Bottom line: if we remove our app=rng label ...

    ... The pod "disappears" for its parent, which re-creates another pod to replace it

k8s/daemonset.md

446/1692

Isolation of replica sets and daemon sets

  • Since both the rng daemon set and the rng replica set use app=rng ...

    ... Why don't they "find" each other's pods?

447/1692

Isolation of replica sets and daemon sets

  • Since both the rng daemon set and the rng replica set use app=rng ...

    ... Why don't they "find" each other's pods?

  • Replica sets have a more specific selector, visible with kubectl describe

    (It looks like app=rng,pod-template-hash=abcd1234)

  • Daemon sets also have a more specific selector, but it's invisible

    (It looks like app=rng,controller-revision-hash=abcd1234)

  • As a result, each controller only "sees" the pods it manages

k8s/daemonset.md

448/1692

Removing a pod from the load balancer

  • Currently, the rng service is defined by the app=rng selector

  • The only way to remove a pod is to remove or change the app label

  • ... But that will cause another pod to be created instead!

  • What's the solution?

449/1692

Removing a pod from the load balancer

  • Currently, the rng service is defined by the app=rng selector

  • The only way to remove a pod is to remove or change the app label

  • ... But that will cause another pod to be created instead!

  • What's the solution?

  • We need to change the selector of the rng service!

  • Let's add another label to that selector (e.g. active=yes)

k8s/daemonset.md

450/1692

Complex selectors

  • If a selector specifies multiple labels, they are understood as a logical AND

    (In other words: the pods must match all the labels)

  • Kubernetes has support for advanced, set-based selectors

    (But these cannot be used with services, at least not yet!)

k8s/daemonset.md

451/1692
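
For illustration only (we don't need this for rng), here is what a set-based selector looks like in a resource that supports them, such as a Deployment or DaemonSet; the environment label is a hypothetical example:

selector:
  matchExpressions:
  - key: app
    operator: In
    values: [rng, worker]
  - key: environment
    operator: NotIn
    values: [dev]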

The plan

  1. Add the label active=yes to all our rng pods

  2. Update the selector for the rng service to also include active=yes

  3. Toggle traffic to a pod by manually adding/removing the active label

  4. Profit!

Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.

k8s/daemonset.md

452/1692

Adding labels to pods

  • We want to add the label active=yes to all pods that have app=rng

  • We could edit each pod one by one with kubectl edit ...

  • ... Or we could use kubectl label to label them all

  • kubectl label can use selectors itself

  • Add active=yes to all pods that have app=rng:
    kubectl label pods -l app=rng active=yes

k8s/daemonset.md

453/1692

Updating the service selector

  • We need to edit the service specification

  • Reminder: in the service definition, we will see app: rng in two places

    • the label of the service itself (we don't need to touch that one)

    • the selector of the service (that's the one we want to change)

  • Update the service to add active: yes to its selector:
    kubectl edit service rng
454/1692

Updating the service selector

  • We need to edit the service specification

  • Reminder: in the service definition, we will see app: rng in two places

    • the label of the service itself (we don't need to touch that one)

    • the selector of the service (that's the one we want to change)

  • Update the service to add active: yes to its selector:
    kubectl edit service rng

... And then we get the weirdest error ever. Why?

k8s/daemonset.md

455/1692

When the YAML parser is being too smart

  • YAML parsers try to help us:

    • xyz is the string "xyz"

    • 42 is the integer 42

    • yes is the boolean value true

  • If we want the string "42" or the string "yes", we have to quote them

  • So we have to use active: "yes"

For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!

k8s/daemonset.md

456/1692

Updating the service selector, take 2

  • Update the YAML manifest of the service

  • Add active: "yes" to its selector

This time it should work!

If we did everything correctly, the web UI shouldn't show any change.

k8s/daemonset.md

457/1692
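
After that edit, the relevant part of the rng service manifest should look roughly like this (other fields omitted):

spec:
  selector:
    app: rng
    active: "yes"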

Updating labels

  • We want to disable the pod that was created by the deployment

  • All we have to do is remove the active label from that pod

  • To identify that pod, we can use its name

  • ... Or rely on the fact that it's the only one with a pod-template-hash label

  • Good to know:

    • kubectl label ... foo= doesn't remove a label (it sets it to an empty string)

    • to remove label foo, use kubectl label ... foo-

    • to change an existing label, we would need to add --overwrite

k8s/daemonset.md

458/1692

Removing a pod from the load balancer

  • In one window, check the logs of that pod:
    POD=$(kubectl get pod -l app=rng,pod-template-hash -o name)
    kubectl logs --tail 1 --follow $POD
    (We should see a steady stream of HTTP logs)
  • In another window, remove the label from the pod:
    kubectl label pod -l app=rng,pod-template-hash active-
    (The stream of HTTP logs should stop immediately)

There might be a slight change in the web UI (since we removed a bit of capacity from the rng service). If we remove more pods, the effect should be more visible.

k8s/daemonset.md

459/1692

Updating the daemon set

  • If we scale up our cluster by adding new nodes, the daemon set will create more pods

  • These pods won't have the active=yes label

  • If we want these pods to have that label, we need to edit the daemon set spec

  • We can do that with e.g. kubectl edit daemonset rng

k8s/daemonset.md

460/1692

We've put resources in your resources

  • Reminder: a daemon set is a resource that creates more resources!

  • There is a difference between:

    • the label(s) of a resource (in the metadata block in the beginning)

    • the selector of a resource (in the spec block)

    • the label(s) of the resource(s) created by the first resource (in the template block)

  • We would need to update the selector and the template

    (metadata labels are not mandatory)

  • The template must match the selector

    (i.e. the resource will refuse to create resources that it will not select)

k8s/daemonset.md

461/1692
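
To make those three locations concrete, here is a trimmed sketch of the rng daemon set with the extra label added to the pod template (the metadata label is optional, and the template labels must include at least the selector labels):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
  labels:
    app: rng           # label of the daemon set itself (optional)
spec:
  selector:
    matchLabels:
      app: rng         # selector: which pods this daemon set "owns"
  template:
    metadata:
      labels:
        app: rng       # labels of the pods created by the daemon set
        active: "yes"  # extra label so new pods match the service selector
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1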

Labels and debugging

  • When a pod is misbehaving, we can delete it: another one will be recreated

  • But we can also change its labels

  • It will be removed from the load balancer (it won't receive traffic anymore)

  • Another pod will be recreated immediately

  • But the problematic pod is still here, and we can inspect and debug it

  • We can even re-add it to the rotation if necessary

    (Very useful to troubleshoot intermittent and elusive bugs)

k8s/daemonset.md

462/1692

Labels and advanced rollout control

  • Conversely, we can add pods matching a service's selector

  • These pods will then receive requests and serve traffic

  • Examples:

    • one-shot pod with all debug flags enabled, to collect logs

    • pods created automatically, but added to rotation in a second step
      (by setting their label accordingly)

  • This gives us building blocks for canary and blue/green deployments

k8s/daemonset.md

463/1692

Cleanup

Let's cleanup before we start the next lecture!

  • remove our DockerCoin resources (for now):
    kubectl delete -f https://k8smastery.com/dockercoins.yaml
    kubectl delete daemonset/rng

k8smastery/cleanup-dockercoins-daemonset.md

464/1692

Image separating from the next chapter

465/1692

Assignment 4: custom load balancing

(automatically generated title slide)

466/1692

Assignment 4: custom load balancing

Our goal here will be to create a service that load balances connections to two different deployments. You might use this as a simplistic way to run two versions of your apps in parallel.

In the real world, you'll likely use a 3rd party load balancer to provide advanced blue/green or canary-style deployments, but this assignment will help further your understanding of how service selectors are used to find pods to use as service endpoints.

For simplicity, version 1 of our application will be using the NGINX image, and version 2 of our application will be using the Apache image. They both listen on port 80 by default.

When we connect to the service, we expect to see some requests being served by NGINX, and some requests being served by Apache.

assignments/04customlb.md

467/1692

Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

468/1692

Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

They will be exposed through a single service.

469/1692

Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

They will be exposed through a single service.

The selector of that service will need to match the pods created by both deployments.

470/1692

Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

They will be exposed through a single service.

The selector of that service will need to match the pods created by both deployments.

For that, we will need to change the deployment specification to add an extra label, to be used solely by the service.

471/1692

Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

They will be exposed through a single service.

The selector of that service will need to match the pods created by both deployments.

For that, we will need to change the deployment specification to add an extra label, to be used solely by the service.

That label should be different from the pre-existing labels of our deployments, otherwise our deployments will step on each other's toes.

472/1692

Hints

We need to create two deployments: one for v1 (NGINX), another for v2 (Apache).

They will be exposed through a single service.

The selector of that service will need to match the pods created by both deployments.

For that, we will need to change the deployment specification to add an extra label, to be used solely by the service.

That label should be different from the pre-existing labels of our deployments, otherwise our deployments will step on each other's toes.

We're not at the point of writing our own YAML from scratch, so you'll need to use the kubectl edit command to modify existing resources.

assignments/04customlb.md

473/1692

Deploying version 1

1.1. Create a deployment running one pod using the official NGINX image.

1.2. Expose that deployment.

1.3. Check that you can successfully connect to the exposed service.

assignments/04customlb.md

474/1692

Setting up the service

2.1. Use a custom label/value to be used by the service. How about myapp: web?

2.2. Change (edit) the service definition to use that label/value.

2.3. Check that you cannot connect to the exposed service anymore.

2.4. Change (edit) the deployment definition to add that label/value to the pods.

2.5. Check that you can connect to the exposed service again.

assignments/04customlb.md

475/1692

Deploying version 2

3.1. Create a deployment running one pod using the official Apache image.

3.2. Change (edit) the deployment definition to add the label/value picked previously.

3.3. Connect to the exposed service again.

(It should now yield responses from both Apache and NGINX.)

assignments/04customlb.md

476/1692

Answers

1.1. kubectl create deployment v1-nginx --image=nginx

1.2. kubectl expose deployment v1-nginx --port=80 or kubectl create service clusterip v1-nginx --tcp=80

1.3.A If you are using shpod, or if you are running directly on the cluster:

### Obtain the ClusterIP that was allocated to the service
kubectl get svc v1-nginx
curl http://A.B.C.D

1.3.B You can also run a program like curl in a container:

kubectl run --restart=Never --image=alpine -ti --rm testcontainer
### Then, once you get a prompt, install curl
apk add curl
### Then, connect to the service
curl v1-nginx

assignments/04customlb.md

477/1692

Answers

2.1-2.2. Edit the YAML manifest of the service with kubectl edit service v1-nginx. Look for the selector: section, and change app: v1-nginx to myapp: web. Make sure to change the selector: section, not the labels: section! After making the change, save and quit.

2.3. The curl command (see previous slide) should now time out.

2.4. Edit the YAML manifest of the deployment with kubectl edit deployment v1-nginx. Look for the labels: section within the template: section, as we want to change the labels of the pods created by the deployment, not of the deployment itself. Make sure to change the labels: section, not the matchLabels: one. Add myapp: web just below app: v1-nginx, with the same indentation level. After making the change, save and quit. We need both labels here, unlike the service selector: the app label keeps the pod "linked" to the deployment/replicaset, and the new one makes the service match this pod.

2.5. The curl command should now work again. (It might need a minute, since changing the label will trigger a rolling update and create a new pod.)

assignments/04customlb.md

478/1692
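
For reference, after the deployment edit described above, the pod template section of v1-nginx should look roughly like this (other fields omitted):

  template:
    metadata:
      labels:
        app: v1-nginx
        myapp: web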

Answers

3.1. kubectl create deployment v2-apache --image=httpd

3.2. Same as previously: kubectl edit deployment v2-apache, then add the label myapp: web below app: v2-apache. Again, make sure to change the labels in the pod template, not of the deployment itself.

3.3. The curl command should now yield responses from both NGINX and Apache.

(Note: you won't see a perfect round-robin, i.e. NGINX/Apache/NGINX/Apache etc., but on average, Apache and NGINX should serve approximately 50% of the requests each.)

assignments/04customlb.md

479/1692

Image separating from the next chapter

480/1692

Authoring YAML

(automatically generated title slide)

481/1692

Authoring YAML

  • To use Kubernetes is to "live in YAML"!

  • It's more important to learn the foundations than to memorize all YAML keys (hundreds+)

482/1692

Authoring YAML

  • To use Kubernetes is to "live in YAML"!

  • It's more important to learn the foundations than to memorize all YAML keys (hundreds+)

  • There are various ways to generate YAML with Kubernetes, e.g.:

    • kubectl run

    • kubectl create deployment (and a few other kubectl create variants)

    • kubectl expose

483/1692

Authoring YAML

  • To use Kubernetes is to "live in YAML"!

  • It's more important to learn the foundations than to memorize all YAML keys (hundreds+)

  • There are various ways to generate YAML with Kubernetes, e.g.:

    • kubectl run

    • kubectl create deployment (and a few other kubectl create variants)

    • kubectl expose

  • These commands use "generators" because the API only accepts YAML (actually JSON)

484/1692

Authoring YAML

  • To use Kubernetes is to "live in YAML"!

  • It's more important to learn the foundations than to memorize all YAML keys (hundreds+)

  • There are various ways to generate YAML with Kubernetes, e.g.:

    • kubectl run

    • kubectl create deployment (and a few other kubectl create variants)

    • kubectl expose

  • These commands use "generators" because the API only accepts YAML (actually JSON)

  • Pro: They are easy to use

  • Con: They have limits

485/1692

Authoring YAML

  • To use Kubernetes is to "live in YAML"!

  • It's more important to learn the foundations than to memorize all YAML keys (hundreds+)

  • There are various ways to generate YAML with Kubernetes, e.g.:

    • kubectl run

    • kubectl create deployment (and a few other kubectl create variants)

    • kubectl expose

  • These commands use "generators" because the API only accepts YAML (actually JSON)

  • Pro: They are easy to use

  • Con: They have limits

  • When and why do we need to write our own YAML?

  • How do we write YAML from scratch?

  • And maybe, what is YAML?

k8smastery/authoringyaml.md

486/1692

YAML Basics (just in case you need a refresher)

  • It's technically a superset of JSON, designed for humans

  • JSON was good for machines, but not for humans

  • Spaces set the structure. One space off and game over

  • Remember: spaces, not tabs. Ever!

  • Two spaces is standard, but four spaces works too

  • You don't have to learn all YAML features, but key concepts you need:

    • Key/Value Pairs
    • Array/Lists
    • Dictionary/Maps
  • Plenty of good online tutorials (and YouTube videos) exist if you need more

k8smastery/authoringyaml.md

487/1692
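
A tiny, non-Kubernetes example showing all three concepts at once (purely illustrative):

# key/value pairs
name: worker
replicas: 3
# a list (array)
images:
- nginx
- redis
# a map (dictionary) nested under a key
labels:
  app: worker
  tier: backend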

Basic parts of any Kubernetes resource manifest

  • Can be in YAML or JSON, but YAML is 💯
488/1692

Basic parts of any Kubernetes resource manifest

  • Can be in YAML or JSON, but YAML is 💯

  • Each file contains one or more manifests

489/1692

Basic parts of any Kubernetes resource manifest

  • Can be in YAML or JSON, but YAML is 💯

  • Each file contains one or more manifests

  • Each manifest describes an API object (deployment, service, etc.)

490/1692

Basic parts of any Kubernetes resource manifest

  • Can be in YAML or JSON, but YAML is 💯

  • Each file contains one or more manifests

  • Each manifest describes an API object (deployment, service, etc.)

  • Each manifest needs four parts (root key:values in the file)

apiVersion:
kind:
metadata:
spec:

k8smastery/authoringyaml.md

491/1692

A simple Pod in YAML

  • This is a single manifest that creates one Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.3

k8smastery/authoringyaml.md

492/1692

Deployment and Service manifests in one YAML file

apiVersion: v1
kind: Service
metadata:
  name: mynginx
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: mynginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.3

k8smastery/authoringyaml.md

493/1692

The limits of generated YAML

  • Advanced (and even not-so-advanced) features require us to write YAML:

    • pods with multiple containers

    • resource limits

    • healthchecks

    • many other resource options

494/1692

The limits of generated YAML

  • Advanced (and even not-so-advanced) features require us to write YAML:

    • pods with multiple containers

    • resource limits

    • healthchecks

    • many other resource options

  • Other resource types don't have their own commands!

    • DaemonSets

    • StatefulSets

    • and more!

  • How do we access these features?

k8smastery/authoringyaml.md

495/1692

We don't have to start from scratch

  • Output YAML from existing resources

    • Create a resource (e.g. Deployment)

    • Dump its YAML with kubectl get -o yaml ...

    • Edit the YAML

    • Use kubectl apply -f ... with the YAML file to:

    • update the resource (if it's the same kind)

    • create a new resource (if it's a different kind)

496/1692

We don't have to start from scratch

  • Output YAML from existing resources

    • Create a resource (e.g. Deployment)

    • Dump its YAML with kubectl get -o yaml ...

    • Edit the YAML

    • Use kubectl apply -f ... with the YAML file to:

    • update the resource (if it's the same kind)

    • create a new resource (if it's a different kind)

497/1692

We don't have to start from scratch

  • Output YAML from existing resources

    • Create a resource (e.g. Deployment)

    • Dump its YAML with kubectl get -o yaml ...

    • Edit the YAML

    • Use kubectl apply -f ... with the YAML file to:

    • update the resource (if it's the same kind)

    • create a new resource (if it's a different kind)

k8smastery/authoringyaml.md

498/1692

Generating YAML without creating resources

  • We can use the -o yaml --dry-run option combo with run and create
  • Generate the YAML for a Deployment without creating it:

    kubectl create deployment web --image nginx -o yaml --dry-run
  • Generate the YAML for a Namespace without creating it:

    kubectl create namespace awesome-app -o yaml --dry-run
  • We can clean up the YAML even more if we want

    (for instance, we can remove the creationTimestamp and empty dicts)

k8smastery/authoringyaml.md

499/1692

Try -o yaml --dry-run with other create commands

clusterrole # Create a ClusterRole.
clusterrolebinding # Create a ClusterRoleBinding for a particular ClusterRole.
configmap # Create a configmap from a local file, directory or literal.
cronjob # Create a cronjob with the specified name.
deployment # Create a deployment with the specified name.
job # Create a job with the specified name.
namespace # Create a namespace with the specified name.
poddisruptionbudget # Create a pod disruption budget with the specified name.
priorityclass # Create a priorityclass with the specified name.
quota # Create a quota with the specified name.
role # Create a role with single rule.
rolebinding # Create a RoleBinding for a particular Role or ClusterRole.
secret # Create a secret using specified subcommand.
service # Create a service using specified subcommand.
serviceaccount # Create a service account with the specified name.
  • Ensure you use valid create commands with required options for each

k8smastery/authoringyaml.md

500/1692
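
For instance, with two of the commands above (the resource names and values are just examples):

    kubectl create configmap my-config --from-literal=greeting=hello -o yaml --dry-run
    kubectl create cronjob my-cron --image=busybox --schedule="*/5 * * * *" -o yaml --dry-run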

Writing YAML from scratch, "YAML The Hard Way"

501/1692

Writing YAML from scratch, "YAML The Hard Way"

  • Paying homage to Kelsey Hightower's "Kubernetes The Hard Way"

  • A reminder about manifests:

    • Each file contains one or more manifests

    • Each manifest describes an API object (deployment, service, etc.)

    • Each manifest needs four parts (root key:values in the file)

apiVersion: # find with "kubectl api-versions"
kind: # find with "kubectl api-resources"
metadata:
spec: # find with "kubectl describe pod"
502/1692

Writing YAML from scratch, "YAML The Hard Way"

  • Paying homage to Kelsey Hightower's "Kubernetes The Hard Way"

  • A reminder about manifests:

    • Each file contains one or more manifests

    • Each manifest describes an API object (deployment, service, etc.)

    • Each manifest needs four parts (root key:values in the file)

apiVersion: # find with "kubectl api-versions"
kind: # find with "kubectl api-resources"
metadata:
spec: # find with "kubectl describe pod"
  • Those three kubectl commands, plus the API docs, are all we'll need

k8smastery/authoringyaml.md

503/1692

General workflow of YAML from scratch

  • Find the resource kind you want to create (api-resources)
504/1692

General workflow of YAML from scratch

  • Find the resource kind you want to create (api-resources)

  • Find the latest apiVersion your cluster supports for kind (api-versions)

505/1692

General workflow of YAML from scratch

  • Find the resource kind you want to create (api-resources)

  • Find the latest apiVersion your cluster supports for kind (api-versions)

  • Give it a name in metadata (minimum)

506/1692

General workflow of YAML from scratch

  • Find the resource kind you want to create (api-resources)

  • Find the latest apiVersion your cluster supports for kind (api-versions)

  • Give it a name in metadata (minimum)

  • Dive into the spec of that kind

    • kubectl explain <kind>.spec
    • kubectl explain <kind> --recursive
507/1692

General workflow of YAML from scratch

  • Find the resource kind you want to create (api-resources)

  • Find the latest apiVersion your cluster supports for kind (api-versions)

  • Give it a name in metadata (minimum)

  • Dive into the spec of that kind

    • kubectl explain <kind>.spec
    • kubectl explain <kind> --recursive
  • Browse the docs API Reference for your cluster version to supplement

508/1692

General workflow of YAML from scratch

  • Find the resource kind you want to create (api-resources)

  • Find the latest apiVersion your cluster supports for kind (api-versions)

  • Give it a name in metadata (minimum)

  • Dive into the spec of that kind

    • kubectl explain <kind>.spec
    • kubectl explain <kind> --recursive
  • Browse the docs API Reference for your cluster version to supplement

  • Use --dry-run and --server-dry-run for testing

  • kubectl create and delete until you get it right

k8smastery/authoringyaml.md

509/1692
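
Putting that workflow together for a hypothetical DaemonSet manifest, the exploration could look like this:

    kubectl api-resources | grep -i daemonset    # find the kind and its API group
    kubectl api-versions | grep apps             # find the apiVersion (e.g. apps/v1)
    kubectl explain daemonset.spec               # what goes under spec?
    kubectl explain daemonset.spec.template --recursive | less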

Advantage of YAML

  • Using YAML (instead of kubectl run/create/etc.) allows us to be declarative

  • The YAML describes the desired state of our cluster and applications

  • YAML can be stored, versioned, archived (e.g. in git repositories)

  • To change resources, change the YAML files

    (instead of using kubectl edit/scale/label/etc.)

  • Changes can be reviewed before being applied

    (with code reviews, pull requests ...)

  • This workflow is sometimes called "GitOps"

    (there are tools like Weave Flux or GitKube to facilitate it)

k8smastery/authoringyaml.md

510/1692

YAML in practice

  • Get started with kubectl run/create/expose/etc.

  • Dump the YAML with kubectl get -o yaml

  • Tweak that YAML and kubectl apply it back

  • Store that YAML for reference (for further deployments)

  • Feel free to clean up the YAML:

    • remove fields you don't know

    • check that it still works!

  • That YAML will be useful later when using e.g. Kustomize or Helm

k8smastery/authoringyaml.md

511/1692

YAML linting and validation

k8smastery/authoringyaml.md

512/1692

Image separating from the next chapter

513/1692

Using server-dry-run and diff

(automatically generated title slide)

514/1692

Using server-dry-run and diff

  • We already talked about using --dry-run for building YAML

  • Let's talk more about options for testing YAML

  • Including testing against the live cluster API!

k8smastery/dryrun.md

515/1692

Using --dry-run with kubectl apply

  • The --dry-run option can also be used with kubectl apply

  • However, it can be misleading (it doesn't do a "real" dry run)

  • Let's see what happens in the following scenario:

    • generate the YAML for a Deployment

    • tweak the YAML to transform it into a DaemonSet

    • apply that YAML to see what would actually be created

k8smastery/dryrun.md

516/1692

The limits of kubectl apply --dry-run

  • Generate the YAML for a deployment:

    kubectl create deployment web --image=nginx -o yaml > web.yaml
  • Change the kind in the YAML to make it a DaemonSet

  • Ask kubectl what would be applied:

    kubectl apply -f web.yaml --dry-run --validate=false -o yaml

The resulting YAML doesn't represent a valid DaemonSet.

k8smastery/dryrun.md

517/1692

Server-side dry run

  • Since Kubernetes 1.13, we can use server-side dry run and diffs

  • Server-side dry run will do all the work, but not persist to etcd

    (all validation and mutation hooks will be executed)

  • Try the same YAML file as earlier, with server-side dry run:
    kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml

The resulting YAML doesn't have the replicas field anymore.

Instead, it has the fields expected in a DaemonSet.

k8smastery/dryrun.md

518/1692

Advantages of server-side dry run

  • The YAML is verified much more extensively

  • The only step that is skipped is "write to etcd"

  • YAML that passes server-side dry run should apply successfully

    (unless the cluster state changes by the time the YAML is actually applied)

  • Validating or mutating hooks that have side effects can also be an issue

k8smastery/dryrun.md

519/1692

kubectl diff

  • Kubernetes 1.13 also introduced kubectl diff

  • kubectl diff does a server-side dry run, and shows differences

  • Try kubectl diff on a simple Pod YAML:
    curl -O https://k8smastery.com/just-a-pod.yaml
    kubectl apply -f just-a-pod.yaml
    # edit the image tag to :1.17
    kubectl diff -f just-a-pod.yaml

Note: we don't need to specify --validate=false here.

k8smastery/dryrun.md

520/1692

Cleanup

Let's cleanup before we start the next lecture!

  • remove our "hello" pod:
    kubectl delete -f just-a-pod.yaml

k8smastery/cleanup-hello.md

521/1692

Re-deploying DockerCoins with YAML

  • OK back to DockerCoins! Let's deploy all the resources:
  • Deploy or redeploy DockerCoins:
    kubectl apply -f https://k8smastery.com/dockercoins.yaml

k8smastery/dockercoins-apply.md

522/1692

Image separating from the next chapter

523/1692

Rolling updates

(automatically generated title slide)

524/1692

Rolling updates

  • By default (without rolling updates), when a scaled resource is updated:

    • new pods are created

    • old pods are terminated

    • ... all at the same time

    • if something goes wrong, ¯\_(ツ)_/¯

k8s/rollout.md

525/1692

Rolling updates

  • With rolling updates, when a Deployment is updated, it happens progressively

  • The Deployment controls multiple ReplicaSets

526/1692

Rolling updates

  • With rolling updates, when a Deployment is updated, it happens progressively

  • The Deployment controls multiple ReplicaSets

  • Each ReplicaSet is a group of identical Pods

    (with the same image, arguments, parameters ...)

527/1692

Rolling updates

  • With rolling updates, when a Deployment is updated, it happens progressively

  • The Deployment controls multiple ReplicaSets

  • Each ReplicaSet is a group of identical Pods

    (with the same image, arguments, parameters ...)

  • During the rolling update, we have at least two ReplicaSets:

    • the "new" set (corresponding to the "target" version)

    • at least one "old" set

528/1692

Rolling updates

  • With rolling updates, when a Deployment is updated, it happens progressively

  • The Deployment controls multiple ReplicaSets

  • Each ReplicaSet is a group of identical Pods

    (with the same image, arguments, parameters ...)

  • During the rolling update, we have at least two ReplicaSets:

    • the "new" set (corresponding to the "target" version)

    • at least one "old" set

  • We can have multiple "old" sets

    (if we start another update before the first one is done)

k8s/rollout.md

529/1692

Update strategy

  • Two parameters determine the pace of the rollout: maxUnavailable and maxSurge
530/1692

Update strategy

  • Two parameters determine the pace of the rollout: maxUnavailable and maxSurge

  • They can be specified in absolute number of pods, or percentage of the replicas count

531/1692

Update strategy

  • Two parameters determine the pace of the rollout: maxUnavailable and maxSurge

  • They can be specified in absolute number of pods, or percentage of the replicas count

  • At any given time ...

    • there will always be at least replicas-maxUnavailable pods available

    • there will never be more than replicas+maxSurge pods in total

    • there will therefore be up to maxUnavailable+maxSurge pods being updated

532/1692

Update strategy

  • Two parameters determine the pace of the rollout: maxUnavailable and maxSurge

  • They can be specified in absolute number of pods, or percentage of the replicas count

  • At any given time ...

    • there will always be at least replicas-maxUnavailable pods available

    • there will never be more than replicas+maxSurge pods in total

    • there will therefore be up to maxUnavailable+maxSurge pods being updated

  • We have the possibility of rolling back to the previous version
    (if the update fails or is unsatisfactory in any way)

k8s/rollout.md

533/1692
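
In a Deployment manifest, these parameters live under spec.strategy. Here is a sketch with the default values (25% / 25%) written out, using the 10-replica worker deployment from this workshop as the example:

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # with 10 replicas: at least 8 pods stay available
      maxSurge: 25%         # with 10 replicas: never more than 13 pods in total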

Checking current rollout parameters

  • Recall how we build custom reports with kubectl and jq:
  • Show the rollout plan for our deployments:
    kubectl get deploy -o json |
    jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"

k8s/rollout.md

534/1692

Rolling updates in practice

  • As of Kubernetes 1.8, we can do rolling updates with:

    deployments, daemonsets, statefulsets

  • Editing one of these resources will automatically result in a rolling update

  • Rolling updates can be monitored with the kubectl rollout subcommand

k8s/rollout.md

535/1692

Rolling out the new worker service

  • Let's monitor what's going on by opening a few terminals, and run:
    kubectl get pods -w
    kubectl get replicasets -w
    kubectl get deployments -w
  • Update worker either with kubectl edit, or by running:
    kubectl set image deploy worker worker=dockercoins/worker:v0.2
536/1692

Rolling out the new worker service

  • Let's monitor what's going on by opening a few terminals, and run:
    kubectl get pods -w
    kubectl get replicasets -w
    kubectl get deployments -w
  • Update worker either with kubectl edit, or by running:
    kubectl set image deploy worker worker=dockercoins/worker:v0.2

That rollout should be pretty quick. What shows in the web UI?

k8s/rollout.md

537/1692

Give it some time

  • At first, it looks like nothing is happening (the graph remains at the same level)

  • According to kubectl get deploy -w, the deployment was updated really quickly

  • But kubectl get pods -w tells a different story

  • The old pods are still here, and they stay in Terminating state for a while

  • Eventually, they are terminated; and then the graph decreases significantly

  • This delay is due to the fact that our worker doesn't handle signals

  • Kubernetes sends a "polite" shutdown request to the worker, which ignores it

  • After a grace period, Kubernetes gets impatient and kills the container

    (The grace period is 30 seconds, but can be changed if needed)

k8s/rollout.md

538/1692

Rolling out something invalid

  • What happens if we make a mistake?
  • Update worker by specifying a non-existent image:

    kubectl set image deploy worker worker=dockercoins/worker:v0.3
  • Check what's going on:

    kubectl rollout status deploy worker
539/1692

Rolling out something invalid

  • What happens if we make a mistake?
  • Update worker by specifying a non-existent image:

    kubectl set image deploy worker worker=dockercoins/worker:v0.3
  • Check what's going on:

    kubectl rollout status deploy worker

Our rollout is stuck. However, the app is not dead.

(After a minute, it will stabilize to be 20-25% slower.)

k8s/rollout.md

540/1692

What's going on with our rollout?

  • Why is our app a bit slower?

  • Because MaxUnavailable=25%

    ... So the rollout terminated 2 replicas out of 10 available

  • Okay, but why do we see 5 new replicas being rolled out?

  • Because MaxSurge=25%

    ... So in addition to replacing 2 replicas, the rollout is also starting 3 more

  • It rounded down the number of MaxUnavailable pods conservatively,
    but the total number of pods being rolled out is allowed to be 25+25=50%

k8s/rollout.md

541/1692

The nitty-gritty details

  • We start with 10 pods running for the worker deployment

  • Current settings: MaxUnavailable=25% and MaxSurge=25%

  • When we start the rollout:

    • two replicas are taken down (as per MaxUnavailable=25%)
    • two others are created (with the new version) to replace them
    • three others are created (with the new version) as per MaxSurge=25%
  • Now we have 8 replicas up and running, and 5 being deployed

  • Our rollout is stuck at this point!

k8s/rollout.md

542/1692

Checking the dashboard during the bad rollout

If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.

  • Connect to the dashboard that we deployed earlier

  • Check that we have failures in Deployments, Pods, and Replica Sets

  • Can we see the reason for the failure?

k8s/rollout.md

543/1692

Recovering from a bad rollout

  • We could push some v0.3 image

    (the pod retry logic will eventually catch it and the rollout will proceed)

  • Or we could invoke a manual rollback

  • Cancel the deployment and wait for the dust to settle:
    kubectl rollout undo deploy worker
    kubectl rollout status deploy worker

k8s/rollout.md

544/1692

Rolling back to an older version

  • We reverted to v0.2

  • But this version still has a performance problem

  • How can we get back to the previous version?

k8s/rollout.md

545/1692

Multiple "undos"

  • What happens if we try kubectl rollout undo again?
  • Try it:

    kubectl rollout undo deployment worker
  • Check the web UI, the list of pods ...

🤔 That didn't work.

k8s/rollout.md

546/1692

Multiple "undos" don't work

  • If we see successive versions as a stack:

    • kubectl rollout undo doesn't "pop" the last element from the stack

    • it copies the N-1th element to the top

  • Multiple "undos" just swap back and forth between the last two versions!

  • Go back to v0.2 again:
    kubectl rollout undo deployment worker

k8s/rollout.md

547/1692

In this specific scenario

  • Our version numbers are easy to guess

  • What if we had used git hashes?

  • What if we had changed other parameters in the Pod spec?

k8s/rollout.md

548/1692

Listing versions

  • We can list successive versions of a Deployment with kubectl rollout history
  • Look at our successive versions:
    kubectl rollout history deployment worker

We don't see all revisions.

We might see something like 1, 4, 5.

(Depending on how many "undos" we did before.)

k8s/rollout.md

549/1692

Explaining deployment revisions

  • These revisions correspond to our ReplicaSets

  • This information is stored in the ReplicaSet annotations

  • Check the annotations for our replica sets:
    kubectl describe replicasets -l app=worker | grep -A3 ^Annotations

k8s/rollout.md

550/1692

What about the missing revisions?

  • The missing revisions are stored in another annotation:

    deployment.kubernetes.io/revision-history

  • These are not shown in kubectl rollout history

  • We could easily reconstruct the full list with a script

    (if we wanted to!)

k8s/rollout.md

551/1692

Rolling back to an older version

  • kubectl rollout undo can work with a revision number
  • Roll back to the "known good" deployment version:

    kubectl rollout undo deployment worker --to-revision=1
  • Check the web UI or the list of pods

k8s/rollout.md

552/1692

Changing rollout parameters

  • What if we wanted to, all at once:

    • change image to v0.1
    • be conservative on availability (always have desired number of available workers)
    • go slow on rollout speed (update only one pod at a time)
    • give some time to our workers to "warm up" before starting more

The corresponding changes can be expressed in the following YAML snippet:

spec:
  template:
    spec:
      containers:
      - name: worker
        image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10

k8s/rollout.md

553/1692

Applying changes through a YAML patch

  • We could use kubectl edit deployment worker

  • But we could also use kubectl patch with the exact YAML shown before

  • Apply all our changes and wait for them to take effect:
    kubectl patch deployment worker -p "
    spec:
      template:
        spec:
          containers:
          - name: worker
            image: dockercoins/worker:v0.1
      strategy:
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1
      minReadySeconds: 10
    "
    kubectl rollout status deployment worker
    kubectl get deploy -o json worker |
      jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"

k8s/rollout.md

554/1692

Image separating from the next chapter

555/1692

Healthchecks

(automatically generated title slide)

556/1692

Healthchecks

  • Healthchecks are key to providing built-in lifecycle automation
557/1692

Healthchecks

  • Healthchecks are key to providing built-in lifecycle automation

  • Healthchecks are probes that apply to containers (not to pods)

  • Kubernetes will take action on containers that fail healthchecks

558/1692

Healthchecks

  • Healthchecks are key to providing built-in lifecycle automation

  • Healthchecks are probes that apply to containers (not to pods)

  • Kubernetes will take action on containers that fail healthchecks

  • Each container can have three (optional) probes:

    • liveness = is this container dead or alive? (most important probe)

    • readiness = is this container ready to serve traffic? (only needed if a service)

    • startup = is this container still starting up? (alpha in 1.16)

559/1692

Healthchecks

  • Healthchecks are key to providing built-in lifecycle automation

  • Healthchecks are probes that apply to containers (not to pods)

  • Kubernetes will take action on containers that fail healthchecks

  • Each container can have three (optional) probes:

    • liveness = is this container dead or alive? (most important probe)

    • readiness = is this container ready to serve traffic? (only needed if a service)

    • startup = is this container still starting up? (alpha in 1.16)

  • Different probe handlers are available (HTTP, TCP, program execution)

560/1692

Healthchecks

  • Healthchecks are key to providing built-in lifecycle automation

  • Healthchecks are probes that apply to containers (not to pods)

  • Kubernetes will take action on containers that fail healthchecks

  • Each container can have three (optional) probes:

    • liveness = is this container dead or alive? (most important probe)

    • readiness = is this container ready to serve traffic? (only needed if a service)

    • startup = is this container still starting up? (alpha in 1.16)

  • Different probe handlers are available (HTTP, TCP, program execution)

  • They don't replace a full monitoring solution

  • Let's see the difference and how to use them!

k8s/healthchecks.md

561/1692

Liveness probe

  • Indicates if the container is dead or alive

  • A dead container cannot come back to life

  • If the liveness probe fails, the container is killed

    (to make really sure that it's really dead; no zombies or undeads!)

  • What happens next depends on the pod's restartPolicy:

    • Never: the container is not restarted

    • OnFailure or Always: the container is restarted

k8s/healthchecks.md

562/1692

When to use a liveness probe

  • To indicate failures that can't be recovered

    • deadlocks (causing all requests to time out)

    • internal corruption (causing all requests to error)

  • Anything where our incident response would be "just restart/reboot it"

Do not use liveness probes for problems that can't be fixed by a restart

  • Otherwise we just restart our pods for no reason, creating useless load

k8s/healthchecks.md

563/1692

Readiness probe

  • Indicates if the container is ready to serve traffic

  • If a container becomes "unready" it might be ready again soon

  • If the readiness probe fails:

    • the container is not killed

    • if the pod is a member of a service, it is temporarily removed

    • it is re-added as soon as the readiness probe passes again

k8s/healthchecks.md

564/1692

When to use a readiness probe

  • To indicate failure due to an external cause

    • database is down or unreachable

    • mandatory auth or other backend service unavailable

  • To indicate temporary failure or unavailability

    • application can only service N parallel connections

    • runtime is busy doing garbage collection or initial data load

k8s/healthchecks.md

565/1692
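
As a sketch (reusing the path and port from the rng liveness example shown a few slides later), a readiness probe is declared the same way as a liveness probe, just under a different key:

    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
      failureThreshold: 3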

Startup probe

  • Kubernetes 1.16 introduces a third type of probe: startupProbe

    (it is in alpha in Kubernetes 1.16)

  • It can be used to indicate "container not ready yet"

    • process is still starting

    • loading external data, priming caches

  • Before Kubernetes 1.16, we had to use the initialDelaySeconds parameter

    (available for both liveness and readiness probes)

  • initialDelaySeconds is a rigid delay (always wait X before running probes)

  • startupProbe works better when a container start time can vary a lot

k8s/healthchecks.md

566/1692

Benefits of using probes

  • Rolling updates proceed when containers are actually ready

    (as opposed to merely started)

  • Containers in a broken state get killed and restarted

    (instead of serving errors or timeouts)

  • Unavailable backends get removed from load balancer rotation

    (thus improving response times across the board)

  • If a probe is not defined, it's as if there was an "always successful" probe

k8s/healthchecks.md

567/1692

Different types of probe handlers

  • HTTP request

    • specify URL of the request (and optional headers)

    • any status code between 200 and 399 indicates success

  • TCP connection

    • the probe succeeds if the TCP port is open

  • arbitrary exec

    • a command is executed in the container

    • exit status of zero indicates success
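
The HTTP and exec handlers are shown in examples a few slides later; for the TCP handler, a minimal sketch (the port is illustrative) looks like this:

livenessProbe:
  tcpSocket:
    port: 6379
  periodSeconds: 10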

k8s/healthchecks.md

568/1692

Timing and thresholds

  • Probes are executed at intervals of periodSeconds (default: 10)

  • The timeout for a probe is set with timeoutSeconds (default: 1)

If a probe takes longer than that, it is considered a failure

  • A probe is considered successful after successThreshold successes (default: 1)

  • A probe is considered failing after failureThreshold failures (default: 3)

  • A probe can have an initialDelaySeconds parameter (default: 0)

  • Kubernetes will wait that amount of time before running the probe for the first time

    (this is important to avoid killing services that take a long time to start)
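
Putting these parameters together, a probe spec might look like this (a minimal sketch; the path, port, and values are illustrative):

readinessProbe:
  httpGet:
    path: /healthz        # illustrative endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3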

k8s/healthchecks.md

569/1692

Example: HTTP probe

Here is a pod template for the rng web service of the DockerCoins app:

apiVersion: v1
kind: Pod
metadata:
  name: rng-with-liveness
spec:
  containers:
  - name: rng
    image: dockercoins/rng:v0.1
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 1

If the backend serves an error, or takes longer than 1s, 3 times in a row, it gets killed.

k8s/healthchecks.md

570/1692

Example: exec probe

Here is a pod template for a Redis server:

apiVersion: v1
kind: Pod
metadata:
  name: redis-with-liveness
spec:
  containers:
  - name: redis
    image: redis
    livenessProbe:
      exec:
        command: ["redis-cli", "ping"]

If the Redis process becomes unresponsive, it will be killed.

k8s/healthchecks.md

571/1692

Should probes check container dependencies?

  • An HTTP/TCP probe can't check an external dependency
572/1692

Should probes check container dependencies?

  • An HTTP/TCP probe can't check an external dependency

  • But an HTTP URL could kick off code to validate a remote dependency

573/1692

Should probes check container dependencies?

  • An HTTP/TCP probe can't check an external dependency

  • But an HTTP URL could kick off code to validate a remote dependency

  • If a web server depends on a database to function, and the database is down:

    • the web server's liveness probe should succeed

    • the web server's readiness probe should fail

574/1692

Should probes check container dependencies?

  • An HTTP/TCP probe can't check an external dependency

  • But an HTTP URL could kick off code to validate a remote dependency

  • If a web server depends on a database to function, and the database is down:

    • the web server's liveness probe should succeed

    • the web server's readiness probe should fail

  • Same thing for any hard dependency (without which the container can't work)

575/1692

Should probes check container dependencies?

  • An HTTP/TCP probe can't check an external dependency

  • But an HTTP URL could kick off code to validate a remote dependency

  • If a web server depends on a database to function, and the database is down:

    • the web server's liveness probe should succeed

    • the web server's readiness probe should fail

  • Same thing for any hard dependency (without which the container can't work)

Do not fail liveness probes for problems that are external to the container

k8s/healthchecks.md

576/1692

Healthchecks for workers

(In that context, worker = process that doesn't accept connections)

  • Readiness isn't useful

    (because workers aren't backends for a service)

577/1692

Healthchecks for workers

(In that context, worker = process that doesn't accept connections)

  • Readiness isn't useful

    (because workers aren't backends for a service)

  • Liveness may help us restart a broken worker, but how can we check it?

  • Embedding an HTTP server is a (potentially expensive) option

578/1692

Healthchecks for workers

(In that context, worker = process that doesn't accept connections)

  • Readiness isn't useful

    (because workers aren't backends for a service)

  • Liveness may help us restart a broken worker, but how can we check it?

  • Embedding an HTTP server is a (potentially expensive) option

  • Using a "lease" file can be relatively easy:

    • touch a file during each iteration of the main loop

    • check the timestamp of that file from an exec probe

  • Writing logs (and checking them from the probe) also works
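
For instance, if the worker touches /tmp/heartbeat at each iteration of its main loop, the probe below fails when the file is missing or older than one minute (a minimal sketch; the path and timings are illustrative, and it assumes the image ships sh, find, and grep):

livenessProbe:
  exec:
    # succeeds only if /tmp/heartbeat was modified less than 1 minute ago
    command: ["sh", "-c", "find /tmp/heartbeat -mmin -1 | grep -q ."]
  periodSeconds: 30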

k8s/healthchecks.md

579/1692

Questions to ask before adding healthchecks

  • Do we want liveness, readiness, both?

    (sometimes, we can use the same check, but with different failure thresholds)

580/1692

Questions to ask before adding healthchecks

  • Do we want liveness, readiness, both?

    (sometimes, we can use the same check, but with different failure thresholds)

  • Do we have existing HTTP endpoints that we can use?

  • Do we need to add new endpoints, or perhaps use something else?

581/1692

Questions to ask before adding healthchecks

  • Do we want liveness, readiness, both?

    (sometimes, we can use the same check, but with different failure thresholds)

  • Do we have existing HTTP endpoints that we can use?

  • Do we need to add new endpoints, or perhaps use something else?

  • Are our healthchecks likely to use resources and/or slow down the app?

582/1692

Questions to ask before adding healthchecks

  • Do we want liveness, readiness, both?

    (sometimes, we can use the same check, but with different failure thresholds)

  • Do we have existing HTTP endpoints that we can use?

  • Do we need to add new endpoints, or perhaps use something else?

  • Are our healthchecks likely to use resources and/or slow down the app?

  • Do they depend on additional services?

    (this can be particularly tricky)

k8s/healthchecks.md

583/1692

Adding healthchecks to an app

  • Let's add healthchecks to DockerCoins!

  • We will examine the questions of the previous slide

  • Then we will review each component individually to add healthchecks

k8s/healthchecks-more.md

584/1692

Liveness, readiness, or both?

  • To answer that question, we need to see the app run for a while

  • Do we get temporary, recoverable glitches?

    → then use readiness

  • Or do we get hard lock-ups requiring a restart?

    → then use liveness

  • In the case of DockerCoins, we don't know yet!

  • Let's pick liveness

k8s/healthchecks-more.md

585/1692

Do we have HTTP endpoints that we can use?

  • Each of the 3 web services (hasher, rng, webui) has a trivial route on /

  • These routes:

    • don't seem to perform anything complex or expensive

    • don't seem to call other services

  • Perfect!

    (See next slides for individual details)

k8s/healthchecks-more.md

586/1692
  • hasher.rb

    get '/' do
      "HASHER running on #{Socket.gethostname}\n"
    end

  • rng.py

    @app.route("/")
    def index():
        return "RNG running on {}\n".format(hostname)

  • webui.js

    app.get('/', function (req, res) {
      res.redirect('/index.html');
    });

k8s/healthchecks-more.md

587/1692

Retrieving DockerCoins manifests

  • I've split up the previous dockercoins.yaml into one-resource-per-file

  • This works with the apply command, and is easier for humans to manage

  • Clone them locally so we can add healthchecks and re-apply

  • Clone that repository:

    git clone https://github.com/bretfisher/kubercoins
  • Change directory to the repository:

    cd kubercoins

k8s/healthchecks-more.md

588/1692

A simple HTTP liveness probe

This is what our liveness probe should look like:

containers:
- name: ...
  image: ...
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 30
    periodSeconds: 5

This will give 30 seconds to the service to start. (Way more than necessary!)
It will run the probe every 5 seconds.
It will use the default timeout (1 second).
It will use the default failure threshold (3 failed attempts = dead).
It will use the default success threshold (1 successful attempt = alive).

k8s/healthchecks-more.md

589/1692

Adding the liveness probe

  • Let's add the liveness probe, then deploy DockerCoins

  • Remember, if you don't have DockerCoins running, this will create everything

  • If you already have DockerCoins running, this will update rng

  • Edit rng-deployment.yaml and add the liveness probe

    vim rng-deployment.yaml
  • Load the YAML for all the resources of DockerCoins

    kubectl apply -f .

k8s/healthchecks-more.md

590/1692

Testing the liveness probe

  • The rng service needs 100ms to process a request

    (because it is single-threaded and sleeps 0.1s in each request)

  • The probe timeout is set to 1 second

  • If we send more than 10 requests per second per backend, it will break

  • Let's generate traffic and see what happens!

  • Get the ClusterIP address of the rng service:
    kubectl get svc rng

k8s/healthchecks-more.md

591/1692

Monitoring the rng service

  • Each command below will show us what's happening on a different level
  • In one window, monitor cluster events:

    kubectl get events -w
  • In another window, monitor pods status:

    kubectl get pods -w

k8s/healthchecks-more.md

592/1692

Generating traffic

  • Let's use ab (Apache Bench) to send concurrent requests to rng
  • In yet another window, generate traffic using shpod:

    kubectl attach --namespace=shpod -ti shpod
    ab -c 10 -n 1000 http://<ClusterIP>/1
  • Experiment with higher values of -c and see what happens

  • The -c parameter indicates the number of concurrent requests

  • The final /1 is important to generate actual traffic

    (otherwise we would use the ping endpoint, which doesn't sleep 0.1s per request)

k8s/healthchecks-more.md

593/1692

Discussion

  • Above a given threshold, the liveness probe starts failing

    (about 10 concurrent requests per backend should be plenty enough)

  • When the liveness probe fails 3 times in a row, the container is restarted

  • During the restart, there is less capacity available

  • ... Meaning that the other backends are likely to timeout as well

  • ... Eventually causing all backends to be restarted

  • ... And each fresh backend gets restarted, too

  • This goes on until the load goes down, or we add capacity

This wouldn't be a good healthcheck in a real application!

k8s/healthchecks-more.md

594/1692

Better healthchecks

  • We need to make sure that the healthcheck doesn't trip when performance degrades due to external pressure

  • Using a readiness check would have fewer effects

    (but it would still be an imperfect solution)

  • A possible combination:

    • readiness check with a short timeout / low failure threshold

    • liveness check with a longer timeout / higher failure threshold
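
Such a combination could look like this (a minimal sketch; the endpoint and numbers are illustrative):

readinessProbe:
  httpGet:
    path: /
    port: 80
  timeoutSeconds: 1     # trip quickly: just remove the pod from rotation
  failureThreshold: 1
livenessProbe:
  httpGet:
    path: /
    port: 80
  timeoutSeconds: 5     # be much more patient before restarting the container
  failureThreshold: 6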

k8s/healthchecks-more.md

595/1692

Healthchecks for redis

  • A liveness probe is enough

    (it's not useful to remove a backend from rotation when it's the only one)

  • We could use an exec probe running redis-cli ping

k8s/healthchecks-more.md

596/1692

Exec probes and zombies

  • When using exec probes, we should make sure that we have a zombie reaper

    🤔🧐🧟 Wait, what?

  • When a process terminates, its parent must call wait()/waitpid()

    (this is how the parent process retrieves the child's exit status)

  • In the meantime, the process is in zombie state

    (the process state will show as Z in ps, top ...)

  • When a process is killed, its children are orphaned and attached to PID 1

  • PID 1 has the responsibility of reaping these processes when they terminate

  • OK, but how does that affect us?

k8s/healthchecks-more.md

597/1692

PID 1 in containers

k8s/healthchecks-more.md

598/1692

Tini and redis ping in a liveness probe

  1. Add tini to your own custom redis image

  2. Change the kubercoins YAML to use your own image

  3. Create a liveness probe in kubercoins YAML

  4. Use the exec handler and run tini -s -- redis-cli ping

  5. Example repo here: github.com/BretFisher/redis-tini

containers:
- name: redis
  image: custom-redis-image
  livenessProbe:
    exec:
      command:
      - /tini
      - -s
      - --
      - redis-cli
      - ping
    initialDelaySeconds: 30
    periodSeconds: 5

k8s/healthchecks-more.md

599/1692

Cleanup

Let's cleanup before we start the next lecture!

  • remove our DockerCoins resources (for now):
    kubectl delete -f https://k8smastery.com/dockercoins.yaml

k8smastery/cleanup-dockercoins.md

600/1692

Image separating from the next chapter

601/1692

Managing configuration

(automatically generated title slide)

602/1692

Managing configuration

  • Some applications need to be configured (obviously!)

  • There are many ways for our code to pick up configuration:

    • command-line arguments

    • environment variables

    • configuration files

    • configuration servers (getting configuration from a database, an API...)

    • ... and more (because programmers can be very creative!)

  • How can we do these things with containers and Kubernetes?

k8s/configuration.md

603/1692

Passing configuration to containers

  • There are many ways to pass configuration to code running in a container:

    • baking it into a custom image

    • command-line arguments

    • environment variables

    • injecting configuration files

    • exposing it over the Kubernetes API

    • configuration servers

  • Let's review these different strategies!

k8s/configuration.md

604/1692

Baking custom images

  • Put the configuration in the image

    (it can be in a configuration file, but also ENV or CMD actions)

  • It's easy! It's simple!

  • Unfortunately, it also has downsides:

    • multiplication of images

    • different images for dev, staging, prod ...

    • minor reconfigurations require a whole build/push/pull cycle

  • Avoid doing it unless you don't have the time to figure out other options

k8s/configuration.md

605/1692

Command-line arguments

  • Pass options to args array in the container specification

  • Example (source):

    args:
    - "--data-dir=/var/lib/etcd"
    - "--advertise-client-urls=http://127.0.0.1:2379"
    - "--listen-client-urls=http://127.0.0.1:2379"
    - "--listen-peer-urls=http://127.0.0.1:2380"
    - "--name=etcd"
  • The options can be passed directly to the program that we run ...

    ... or to a wrapper script that will use them to e.g. generate a config file

k8s/configuration.md

606/1692

Command-line arguments, pros & cons

  • Works great when options are passed directly to the running program

    (otherwise, a wrapper script can work around the issue)

  • Works great when there aren't too many parameters

    (to avoid a 20-lines args array)

  • Requires documentation and/or understanding of the underlying program

    ("which parameters and flags do I need, again?")

  • Well-suited for mandatory parameters (without default values)

  • Not ideal when we need to pass a real configuration file anyway

k8s/configuration.md

607/1692

Environment variables

  • Pass options through the env map in the container specification

  • Example:

    env:
    - name: ADMIN_PORT
      value: "8080"
    - name: ADMIN_AUTH
      value: Basic
    - name: ADMIN_CRED
      value: "admin:0pensesame!"

value must be a string! Make sure that numbers and fancy strings are quoted.

🤔 Why this weird {name: xxx, value: yyy} scheme? It will be revealed soon!

k8s/configuration.md

608/1692

The Downward API

  • In the previous example, environment variables have fixed values

  • We can also use a mechanism called the Downward API

  • The Downward API allows exposing pod or container information

    • either through special files (we won't show that for now)

    • or through environment variables

  • The value of these environment variables is computed when the container is started

  • Remember: environment variables won't (can't) change after container start

  • Let's see a few concrete examples!

k8s/configuration.md

609/1692

Exposing the pod's namespace

- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace

  • Useful to generate FQDN of services

    (in some contexts, a short name is not enough)

  • For instance, the two commands should be equivalent:

    curl api-backend
    curl api-backend.$MY_POD_NAMESPACE.svc.cluster.local

k8s/configuration.md

610/1692

Exposing the pod's IP address

- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP

  • Useful if we need to know our IP address

    (we could also read it from eth0, but this is more solid)

k8s/configuration.md

611/1692

Exposing the container's resource limits

- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.memory

  • Useful for runtimes where memory is garbage collected

  • Example: the JVM

    (the memory available to the JVM should be set with the -Xmx flag)

  • Best practice: set a memory limit, and pass it to the runtime

  • Note: recent versions of the JVM can do this automatically

    (see JDK-8146115 and this blog post for detailed examples)

k8s/configuration.md

612/1692

More about the Downward API

  • This documentation page tells more about these environment variables

  • And this one explains the other way to use the Downward API

    (through files that get created in the container filesystem)

k8s/configuration.md

613/1692

Environment variables, pros and cons

  • Works great when the running program expects these variables

  • Works great for optional parameters with reasonable defaults

    (since the container image can provide these defaults)

  • Sort of auto-documented

    (we can see which environment variables are defined in the image, and their values)

  • Can be (ab)used with longer values ...

  • ... You can put an entire Tomcat configuration file in an environment ...

  • ... But should you?

(Do it if you really need to, we're not judging! But we'll see better ways.)

k8s/configuration.md

614/1692

Injecting configuration files with ConfigMaps

  • Sometimes, there is no way around it: we need to inject a full config file

  • Kubernetes provides a mechanism for that purpose: ConfigMaps

  • A ConfigMap is a Kubernetes resource that exists in a namespace

  • Conceptually, it's a key/value map

    (values are arbitrary strings)

  • We can think about them in (at least) two different ways:

    • as holding entire configuration file(s)

    • as holding individual configuration parameters

Note: to hold sensitive information, we can use "Secrets", which are another type of resource behaving very much like ConfigMaps. We'll cover them just after!

k8s/configuration.md

615/1692

ConfigMaps storing entire files

  • In this case, each key/value pair corresponds to a configuration file

  • Key = name of the file

  • Value = content of the file

  • There can be one key/value pair, or as many as necessary

    (for complex apps with multiple configuration files)

  • Examples:

    # Create a ConfigMap with a single key, "app.conf"
    kubectl create configmap my-app-config --from-file=app.conf
    # Create a ConfigMap with a single key, "app.conf" but another file
    kubectl create configmap my-app-config --from-file=app.conf=app-prod.conf
    # Create a ConfigMap with multiple keys (one per file in the config.d directory)
    kubectl create configmap my-app-config --from-file=config.d/

k8s/configuration.md

616/1692

ConfigMaps storing individual parameters

  • In this case, each key/value pair corresponds to a parameter

  • Key = name of the parameter

  • Value = value of the parameter

  • Examples:

    # Create a ConfigMap with two keys
    kubectl create cm my-app-config \
    --from-literal=foreground=red \
    --from-literal=background=blue
    # Create a ConfigMap from a file containing key=val pairs
    kubectl create cm my-app-config \
    --from-env-file=app.conf

k8s/configuration.md

617/1692

Exposing ConfigMaps to containers

  • ConfigMaps can be exposed as plain files in the filesystem of a container

    • this is achieved by declaring a volume and mounting it in the container

    • this is particularly effective for ConfigMaps containing whole files

  • ConfigMaps can be exposed as environment variables in the container

    • this is achieved with the Downward API

    • this is particularly effective for ConfigMaps containing individual parameters

  • Let's see how to do both!

k8s/configuration.md

618/1692

Passing a configuration file with a ConfigMap

  • We will start a load balancer powered by HAProxy

  • We will use the official haproxy image

  • It expects to find its configuration in /usr/local/etc/haproxy/haproxy.cfg

  • We will provide a simple HAProxy configuration

  • It listens on port 80, and load balances connections between IBM and Google

k8s/configuration.md

619/1692

Creating the ConfigMap

  • Download our simple HAProxy config:

    curl -O https://k8smastery.com/haproxy.cfg
  • Create a ConfigMap named haproxy and holding the configuration file:

    kubectl create configmap haproxy --from-file=haproxy.cfg
  • Check what our ConfigMap looks like:

    kubectl get configmap haproxy -o yaml

k8s/configuration.md

620/1692

Using the ConfigMap

We are going to use the following pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  volumes:
  - name: config
    configMap:
      name: haproxy
  containers:
  - name: haproxy
    image: haproxy
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/

k8s/configuration.md

621/1692

Using the ConfigMap

  • Apply the resource definition from the previous slide
  • Create the HAProxy pod:

    kubectl apply -f https://k8smastery.com/haproxy.yaml
  • Check the IP address allocated to the pod, inside shpod:

    kubectl attach --namespace=shpod -ti shpod
    kubectl get pod haproxy -o wide
    IP=$(kubectl get pod haproxy -o json | jq -r .status.podIP)

k8s/configuration.md

622/1692

Testing our load balancer

  • The load balancer will send:

    • half of the connections to Google

    • the other half to IBM

  • Access the load balancer a few times:
    curl $IP
    curl $IP
    curl $IP

We should see connections served by Google, and others served by IBM.
(Each server sends us a redirect page. Look at the URL that they send us to!)

k8s/configuration.md

623/1692

Exposing ConfigMaps with the Downward API

  • We are going to run a Docker registry on a custom port

  • By default, the registry listens on port 5000

  • This can be changed by setting environment variable REGISTRY_HTTP_ADDR

  • We are going to store the port number in a ConfigMap

  • Then we will expose that ConfigMap as a container environment variable

k8s/configuration.md

624/1692

Creating the ConfigMap

  • Our ConfigMap will have a single key, http.addr:

    kubectl create configmap registry --from-literal=http.addr=0.0.0.0:80
  • Check our ConfigMap:

    kubectl get configmap registry -o yaml

k8s/configuration.md

625/1692

Using the ConfigMap

We are going to use the following pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    env:
    - name: REGISTRY_HTTP_ADDR
      valueFrom:
        configMapKeyRef:
          name: registry
          key: http.addr

k8s/configuration.md

626/1692

Using the ConfigMap

  • Apply the resource definition from the previous slide
  • Create the registry pod:

    kubectl apply -f https://k8smastery.com/registry.yaml
  • Check the IP address allocated to the pod:

    kubectl attach --namespace=shpod -ti shpod
    kubectl get pod registry -o wide
    IP=$(kubectl get pod registry -o json | jq -r .status.podIP)
  • Confirm that the registry is available on port 80:

    curl $IP/v2/_catalog

k8s/configuration.md

627/1692

Passwords, tokens, sensitive information

  • For sensitive information, there is another special resource: Secrets

  • Secrets and Configmaps work almost the same way

    (we'll expose the differences on the next slide)

  • The intent is different, though:

    "You should use secrets for things which are actually secret like API keys, credentials, etc., and use config map for not-secret configuration data."

    "In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."

    (Source: the author of both features)
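
For instance, after creating a Secret with kubectl create secret generic db-creds --from-literal=password=xyz, it can be consumed very much like a ConfigMap key (a minimal sketch; the resource and key names are illustrative):

env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-creds
      key: password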

k8s/configuration.md

628/1692

Differences between ConfigMaps and Secrets

k8s/configuration.md

629/1692

Cleanup

Let's cleanup before we start the next lecture!

  • remove our pods:
    kubectl delete pod/haproxy pod/registry

k8smastery/cleanup-haproxy-registry.md

630/1692

Image separating from the next chapter

631/1692

Exposing HTTP services with Ingress resources

(automatically generated title slide)

632/1692


Exposing HTTP services with Ingress resources

  • Services give us a way to access a pod or a set of pods

  • Services can be exposed to the outside world:

    • with type NodePort (on a port >30000)

    • with type LoadBalancer (allocating an external load balancer)

635/1692

Exposing HTTP services with Ingress resources

  • Services give us a way to access a pod or a set of pods

  • Services can be exposed to the outside world:

    • with type NodePort (on a port >30000)

    • with type LoadBalancer (allocating an external load balancer)

  • What about HTTP services?

    • how can we expose webui, rng, hasher?

    • the Kubernetes dashboard?

    • all on the same IP and port?

k8smastery/ingress.md

636/1692

Exposing HTTP services

637/1692

Exposing HTTP services

  • If we use NodePort services, clients have to specify port numbers

    (i.e. http://xxxxx:31234 instead of just http://xxxxx)

  • LoadBalancer services are nice, but:

    • they are not available in all environments

    • they often carry an additional cost (e.g. they provision an ELB)

    • They often work at OSI Layer 4 (IP+Port) and not Layer 7 (HTTP/S)

    • they require one extra step for DNS integration
      (waiting for the LoadBalancer to be provisioned; then adding it to DNS)

638/1692

Exposing HTTP services

  • If we use NodePort services, clients have to specify port numbers

    (i.e. http://xxxxx:31234 instead of just http://xxxxx)

  • LoadBalancer services are nice, but:

    • they are not available in all environments

    • they often carry an additional cost (e.g. they provision an ELB)

    • They often work at OSI Layer 4 (IP+Port) and not Layer 7 (HTTP/S)

    • they require one extra step for DNS integration
      (waiting for the LoadBalancer to be provisioned; then adding it to DNS)

  • We could build our own reverse proxy

k8smastery/ingress.md

639/1692

Building a custom reverse proxy

  • There are many options available:

    Apache, HAProxy, Envoy Proxy, Gloo, NGINX, Traefik,...

  • Most of these options require us to update/edit configuration files after each change

  • Some of them can pick up virtual hosts and backends from a configuration store

640/1692

Building a custom reverse proxy

  • There are many options available:

    Apache, HAProxy, Envoy Proxy, Gloo, NGINX, Traefik,...

  • Most of these options require us to update/edit configuration files after each change

  • Some of them can pick up virtual hosts and backends from a configuration store

  • Wouldn't it be nice if this configuration could be managed with the Kubernetes API?

  • Enter¹ Ingress resources!

¹ Pun maybe intended.

k8smastery/ingress.md

641/1692

ingress vs. Ingress

ingress

  • ingress definition: Going in, entering. The opposite of egress (leaving)

  • In networking terms, ingress refers to handling incoming connections

  • Could imply incoming to firewall, network, or in this case, a server cluster

642/1692

ingress vs. Ingress

ingress

  • ingress definition: Going in, entering. The opposite of egress (leaving)

  • In networking terms, ingress refers to handling incoming connections

  • Could imply incoming to firewall, network, or in this case, a server cluster

Ingress

  • Ingress (capital I) in these slides means the Kubernetes Ingress resource

  • Specific to HTTP/S

k8smastery/ingress.md

643/1692

Ingress resources

  • Kubernetes API resource (kubectl get ingress/ingresses/ing)

  • Designed to expose HTTP services

644/1692

Ingress resources

  • Kubernetes API resource (kubectl get ingress/ingresses/ing)

  • Designed to expose HTTP services

  • Basic features:

    • load balancing
    • SSL termination
    • name-based virtual hosting
645/1692

Ingress resources

  • Kubernetes API resource (kubectl get ingress/ingresses/ing)

  • Designed to expose HTTP services

  • Basic features:

    • load balancing
    • SSL termination
    • name-based virtual hosting
  • Can also route to different services depending on:

    • URI path (e.g. /api → api-service, /static → assets-service)
    • Client headers, including cookies (for A/B testing, canary deployment...)
    • and more!

k8smastery/ingress.md

646/1692

Principle of operation

  • Step 1: deploy an Ingress controller

    • Ingress controller = load balancing proxy + control loop

    • the control loop watches over Ingress resources, and configures the LB accordingly

    • these might be two separate processes (NGINX server + NGINX Ingress controller)

    • or a single app that knows how to speak to Kubernetes API (Traefik)

647/1692

Principle of operation

  • Step 1: deploy an Ingress controller

    • Ingress controller = load balancing proxy + control loop

    • the control loop watches over Ingress resources, and configures the LB accordingly

    • these might be two separate processes (NGINX server + NGINX Ingress controller)

    • or a single app that knows how to speak to Kubernetes API (Traefik)

  • Step 2: set up DNS (usually)

    • associate external DNS entries with the load balancer or host address
648/1692

Principle of operation

  • Step 1: deploy an Ingress controller

    • Ingress controller = load balancing proxy + control loop

    • the control loop watches over Ingress resources, and configures the LB accordingly

    • these might be two separate processes (NGINX server + NGINX Ingress controller)

    • or a single app that knows how to speak to Kubernetes API (Traefik)

  • Step 2: set up DNS (usually)

    • associate external DNS entries with the load balancer or host address
  • Step 3: create Ingress resources for our Service resources

    • these resources contain rules for handling HTTP/S connections

    • the Ingress controller picks up these resources and configures the LB

    • connections to the Ingress LB will be processed by the rules

k8smastery/ingress.md

649/1692

Ingress Diagram

k8smastery/ingress.md

650/1692

Image separating from the next chapter

651/1692

Ingress in action: NGINX

(automatically generated title slide)

652/1692

Ingress in action: NGINX

  • We will deploy the NGINX Ingress controller first

    • this is a popular, yet arbitrary choice; the docs list over a dozen options
653/1692

Ingress in action: NGINX

  • We will deploy the NGINX Ingress controller first

    • this is a popular, yet arbitrary choice; the docs list over a dozen options

  • For DNS, we will use nip.io

    • *.127.0.0.1.nip.io resolves to 127.0.0.1

    • we do this so we can use various FQDNs without editing our hosts file

654/1692

Ingress in action: NGINX

  • We will deploy the NGINX Ingress controller first

    • this is a popular, yet arbitrary choice; the docs list over a dozen options

  • For DNS, we will use nip.io

    • *.127.0.0.1.nip.io resolves to 127.0.0.1

    • we do this so we can use various FQDNs without editing our hosts file

  • We will create Ingress resources for various HTTP-based Services

k8smastery/ingress.md

655/1692

Deploying pods listening on port 80

  • We want our Ingress load balancer to be available on port 80
656/1692

Deploying pods listening on port 80

  • We want our Ingress load balancer to be available on port 80

  • We could do that with a LoadBalancer service

657/1692

Deploying pods listening on port 80

  • We want our Ingress load balancer to be available on port 80

  • We could do that with a LoadBalancer service

    ... but it requires support from the underlying infrastructure

    minikube and MicroK8s don't work with it

    ... but Docker Desktop supports it for localhost!

658/1692

Deploying pods listening on port 80

  • We want our Ingress load balancer to be available on port 80

  • We could do that with a LoadBalancer service

    ... but it requires support from the underlying infrastructure

    minikube and MicroK8s don't work with it

    ... but Docker Desktop supports it for localhost!

  • We could use pods specifying hostPort: 80

    ... but with most CNI plugins, this doesn't work or requires additional setup

659/1692

Deploying pods listening on port 80

660/1692

Deploying pods listening on port 80

  • We want our Ingress load balancer to be available on port 80

  • We could do that with a LoadBalancer service

    ... but it requires support from the underlying infrastructure

    minikube and MicroK8s don't work with it

    ... but Docker Desktop supports it for localhost!

  • We could use pods specifying hostPort: 80

    ... but with most CNI plugins, this doesn't work or requires additional setup

  • We could use a NodePort service

    ... but that requires changing the --service-node-port-range flag in the API server

  • Last resort: the hostNetwork mode

k8smastery/ingress.md

661/1692

Without hostNetwork

  • Normally, each pod gets its own network namespace

    (sometimes called sandbox or network sandbox)

662/1692

Without hostNetwork

  • Normally, each pod gets its own network namespace

    (sometimes called sandbox or network sandbox)

  • An IP address is assigned to the pod

663/1692

Without hostNetwork

  • Normally, each pod gets its own network namespace

    (sometimes called sandbox or network sandbox)

  • An IP address is assigned to the pod

  • This IP address is routed/connected to the cluster network

664/1692

Without hostNetwork

  • Normally, each pod gets its own network namespace

    (sometimes called sandbox or network sandbox)

  • An IP address is assigned to the pod

  • This IP address is routed/connected to the cluster network

  • All containers of that pod are sharing that network namespace

    (and therefore using the same IP address)

k8smastery/ingress.md

665/1692

With hostNetwork: true

  • No network namespace gets created
666/1692

With hostNetwork: true

  • No network namespace gets created

  • The pod is using the network namespace of the host

  • It "sees" (and can use) the interfaces (and IP addresses) of the host (VM on macOS/Win)

667/1692

With hostNetwork: true

  • No network namespace gets created

  • The pod is using the network namespace of the host

  • It "sees" (and can use) the interfaces (and IP addresses) of the host (VM on macOS/Win)

  • The pod can receive outside traffic directly, on any port

668/1692

With hostNetwork: true

  • No network namespace gets created

  • The pod is using the network namespace of the host

  • It "sees" (and can use) the interfaces (and IP addresses) of the host (VM on macOS/Win)

  • The pod can receive outside traffic directly, on any port

  • Downside: with most network plugins, network policies won't work for that pod

    • most network policies work at the IP address level

    • filtering that pod = filtering traffic from the node
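
Enabling it is a single field in the pod spec; a minimal sketch (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: proxy-on-host
spec:
  hostNetwork: true      # share the host's network namespace
  containers:
  - name: nginx
    image: nginx         # listens on port 80, now directly on the host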

k8smastery/ingress.md

669/1692

What you will use now

  • Docker Desktop:

    • no built-in Ingress installer, we'll provide you YAML

    • Ignores hostNetwork, but Service type: LoadBalancer works with localhost!

670/1692

What you will use now

  • Docker Desktop:

    • no built-in Ingress installer, we'll provide you YAML

    • Ignores hostNetwork, but Service type: LoadBalancer works with localhost!

  • minikube:

    • has a built-in NGINX installer minikube addons enable ingress

    • But, let's use YAML we provide for learning purposes

    • hostNetwork: true enabled on pod works for minikube IP

671/1692

What you will use now

  • Docker Desktop:

    • no built-in Ingress installer, we'll provide you YAML

    • Ignores hostNetwork, but Service type: LoadBalancer works with localhost!

  • minikube:

    • has a built-in NGINX installer minikube addons enable ingress

    • But, let's use YAML we provide for learning purposes

    • hostNetwork: true enabled on pod works for minikube IP

  • MicroK8s:

    • has a built-in NGINX installer microk8s enable ingress

    • let's use YAML we provide anyway for learning purposes

    • hostNetwork: true enabled on pod works for MicroK8s host IP

k8smastery/ingress.md

672/1692

First steps with NGINX

  • Remember the three parts of Ingress:

    • Ingress controller pod(s) to monitor the API and run the LB/proxy

    • Ingress Resources that tell the LB where to route traffic

    • Services for your apps so the Ingress LB/proxy can route to your pods

  • First, let's apply the YAML to create the Ingress controller

k8smastery/ingress.md

673/1692

Deploying the NGINX Ingress controller

The two main sections in the YAML are:

674/1692

Deploying the NGINX Ingress controller

The two main sections in the YAML are:

  • NGINX Deployment (or DaemonSet) and all its required resources

    • Namespace
    • ConfigMaps (storing NGINX configs)
    • ServiceAccount (authenticate to Kubernetes API)
    • Role/ClusterRole/RoleBindings (authorization to API parts)
    • LimitRange (limit cpu/memory of NGINX)
675/1692

Deploying the NGINX Ingress controller

The two main sections in the YAML are:

  • NGINX Deployment (or DaemonSet) and all its required resources

    • Namespace
    • ConfigMaps (storing NGINX configs)
    • ServiceAccount (authenticate to Kubernetes API)
    • Role/ClusterRole/RoleBindings (authorization to API parts)
    • LimitRange (limit cpu/memory of NGINX)
  • Service to expose NGINX on 80/443

    • different for each Kubernetes distribution

k8smastery/ingress.md

676/1692

Running NGINX on our cluster

  • Now let's deploy the NGINX controller. Pick your distro:
  • Apply the YAML

    # for Docker Desktop, create Service with LoadBalancer
    kubectl apply -f https://k8smastery.com/ic-nginx-lb.yaml
    # for minikube/MicroK8s, create Service with hostNetwork
    kubectl apply -f https://k8smastery.com/ic-nginx-hn.yaml
  • Check the pod status:

    kubectl describe -n ingress-nginx deploy/ingress-nginx-controller

k8smastery/ingress.md

677/1692

Checking that NGINX runs correctly

  • If NGINX started correctly, we now have a web server listening on each node
  • Direct your browser to your Kubernetes IP on port 80

We should get a 404 page not found error.

This is normal: we haven't provided any Ingress rule yet.

k8smastery/ingress.md

678/1692

Setting up DNS

  • To make our lives easier, we will use nip.io

  • Check out http://cheddar.A.B.C.D.nip.io

    (replacing A.B.C.D with your Kubernetes IP address)

  • We should get the same 404 page not found error

    (meaning that our DNS is "set up properly", so to speak!)

k8smastery/ingress.md

679/1692

Setting up host-based routing ingress rules

  • We are going to use bretfisher/cheese images

    (there are 3 tags available: wensleydale, cheddar, stilton)

680/1692

Setting up host-based routing ingress rules

  • We are going to use bretfisher/cheese images

    (there are 3 tags available: wensleydale, cheddar, stilton)

  • These images contain a simple static HTTP server sending a picture of cheese

681/1692

Setting up host-based routing ingress rules

  • We are going to use bretfisher/cheese images

    (there are 3 tags available: wensleydale, cheddar, stilton)

  • These images contain a simple static HTTP server sending a picture of cheese

  • We will run 3 deployments (one for each cheese)

  • We will create 3 services (one for each deployment)

  • Then we will create 3 ingress rules (one for each service)

682/1692

Setting up host-based routing ingress rules

  • We are going to use bretfisher/cheese images

    (there are 3 tags available: wensleydale, cheddar, stilton)

  • These images contain a simple static HTTP server sending a picture of cheese

  • We will run 3 deployments (one for each cheese)

  • We will create 3 services (one for each deployment)

  • Then we will create 3 ingress rules (one for each service)

  • We will route <name-of-cheese>.A.B.C.D.nip.io to the corresponding deployment

k8smastery/ingress.md

683/1692

Running cheesy web servers

  • Run all three deployments:

    kubectl create deployment cheddar --image=bretfisher/cheese:cheddar
    kubectl create deployment stilton --image=bretfisher/cheese:stilton
    kubectl create deployment wensleydale --image=bretfisher/cheese:wensleydale
  • Create a service for each of them:

    kubectl expose deployment cheddar --port=80
    kubectl expose deployment stilton --port=80
    kubectl expose deployment wensleydale --port=80

k8smastery/ingress.md

684/1692

What does an ingress resource look like?

Here is a minimal host-based ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
  - host: cheddar.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cheddar
            port:
              number: 80

k8smastery/ingress.md

685/1692

Creating our first ingress resources

  • Download our YAML:

    curl -O https://k8smastery.com/ingress.yaml

  • Edit the file ingress.yaml which has three Ingress resources

  • Replace the A.B.C.D with your Kubernetes IP (127.0.0.1 for localhost)

  • Apply the file:

    kubectl apply -f ingress.yaml

  • Open http://cheddar.A.B.C.D.nip.io

(An image of a piece of cheese should show up.)

k8smastery/ingress.md

686/1692

Bring up the other Ingress resources

  • Different cheeses should show up for each URL

k8smastery/ingress.md

687/1692

Adding features to an Ingress resource

  • Reverse proxies have lots of features

  • Let's add a 301 redirect to a new Ingress resource using annotations

  • It will apply whenever the URL uses a path that we haven't already defined

  • Create the redirect:

    kubectl apply -f https://k8smastery.com/redirect.yaml

  • Open http://< anything >.A.B.C.D.nip.io or localhost or A.B.C.D

  • It should immediately redirect to google.com

k8smastery/ingress.md

688/1692

Annotations can get weird and complex

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-google
  annotations: # Notice this annotation is NGINX specific
    nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com
spec:
  rules: # Ingress requires a rule and backend, even though it's not needed here
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: doesntmatter
            port:
              number: 80

k8smastery/ingress.md

689/1692

View Ingress resources

  • Let's inspect some Ingress resources

  • List all Ingress resources in the default namespace:

    kubectl get ingress

  • Get the details on the cheddar Ingress resource:

    kubectl describe ingress cheddar

  • Get the details on the my-google Ingress resource:

    kubectl describe ingress my-google

  • Output stilton in YAML:

    kubectl get ingress/stilton -o yaml

k8smastery/ingress.md

690/1692

Image separating from the next chapter

691/1692

Swapping NGINX for Traefik

(automatically generated title slide)

692/1692

Swapping NGINX for Traefik

  • Traefik is a proxy with built-in Kubernetes Ingress support

  • It has a web dashboard, built-in Let's Encrypt, full TCP support, and more

  • Most importantly: Traefik releases are named after cheeses 🧀🎉

693/1692

Swapping NGINX for Traefik

  • Traefik is a proxy with built-in Kubernetes Ingress support

  • It has a web dashboard, built-in Let's Encrypt, full TCP support, and more

  • Most importantly: Traefik releases are named after cheeses 🧀🎉

  • The Traefik documentation tells us to pick between Deployment and DaemonSet

  • We are going to use a DaemonSet so that each node can accept connections

694/1692

Swapping NGINX for Traefik

  • Traefik is a proxy with built-in Kubernetes Ingress support

  • It has a web dashboard, built-in Let's Encrypt, full TCP support, and more

  • Most importantly: Traefik releases are named after cheeses 🧀🎉

  • The Traefik documentation tells us to pick between Deployment and DaemonSet

  • We are going to use a DaemonSet so that each node can accept connections

  • We provide a YAML file which is essentially the sum of:

  • We will make a minor change to the YAML provided by Traefik to enable hostNetwork for MicroK8s/minikube

  • For Docker Desktop we'll add a type: LoadBalancer to the Service

k8smastery/ingress.md

695/1692

Removing NGINX from our cluster

  • Before starting Traefik, let's remove the NGINX controller

  • This won't remove Services or Ingress resources

  • But it will make them unavailable from outside the cluster

  • Delete our NGINX controller and related resources:

    # for Docker Desktop with LoadBalancer
    kubectl delete -f https://k8smastery.com/ic-nginx-lb.yaml
    # for minikube/MicroK8s with hostNetwork
    kubectl delete -f https://k8smastery.com/ic-nginx-hn.yaml
  • Also remove the redirect Ingress resource. It only worked in NGINX

    kubectl delete -f https://k8smastery.com/redirect.yaml

k8smastery/ingress.md

696/1692

Running Traefik on our cluster

  • Now let's deploy the Traefik Ingress controller
  • Apply the YAML:

    # for Docker Desktop with LoadBalancer
    kubectl apply -f https://k8smastery.com/ic-traefik-lb.yaml
    # for minikube/MicroK8s with hostNetwork
    kubectl apply -f https://k8smastery.com/ic-traefik-hn.yaml
  • Check the pod status:

    kubectl describe -n kube-system ds/traefik-ingress-controller

k8smastery/ingress.md

697/1692

Checking that Traefik runs correctly

  • If Traefik started correctly, we can refresh a cheese and it still works

k8smastery/ingress.md

698/1692

Traefik web UI

  • Traefik provides a web dashboard on container port 8080

  • For those using the LoadBalancer method (Docker Desktop), it's enabled

  • If using Docker Desktop, go to http://localhost:8080
699/1692

Traefik web UI

  • Traefik provides a web dashboard on container port 8080

  • For those using the LoadBalancer method (Docker Desktop), it's enabled

  • If using Docker Desktop, go to http://localhost:8080
  • For those using hostNetwork, this could be a problem

  • The container won't start if anything is listening on < host IP >:8080

  • On MicroK8s, Kubernetes API runs on 8080 😢

700/1692

Traefik web UI

  • Traefik provides a web dashboard on container port 8080

  • For those using the LoadBalancer method (Docker Desktop), it's enabled

  • If using Docker Desktop, go to http://localhost:8080
  • For those using hostNetwork, this could be a problem

  • The container won't start if anything is listening on < host IP >:8080

  • On MicroK8s, Kubernetes API runs on 8080 😢

  • For those using minikube, you can un-comment the YAML and re-apply

701/1692

Traefik web UI

  • Traefik provides a web dashboard on container port 8080

  • For those using the LoadBalancer method (Docker Desktop), it's enabled

  • If using Docker Desktop, go to http://localhost:8080
  • For those using hostNetwork, this could be a problem

  • The container won't start if anything is listening on < host IP >:8080

  • On MicroK8s, Kubernetes API runs on 8080 😢

  • For those using minikube, you can un-comment the YAML and re-apply

  • You could also edit the resource(s) and manually add the details, e.g.

    • kubectl edit -n kube-system ds/traefik-ingress-controller

k8smastery/ingress.md

702/1692

What about Traefik 2.x IngressRoute resources?

  • We've been using Traefik 2.x as the Ingress controller

  • Traefik released 2.0 in late 2019

  • Their documentation talks about an IngressRoute resource

703/1692

What about Traefik 2.x IngressRoute resources?

  • We've been using Traefik 2.x as the Ingress controller

  • Traefik released 2.0 in late 2019

  • Their documentation talks about an IngressRoute resource

  • But IngressRoute is not a built-in resource of Kubernetes

  • Traefik 2.x now supports a custom CRD (Custom Resource Definition)

  • We'll explore why in a bit

k8smastery/ingress.md

704/1692

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik, Gloo, and NGINX)

705/1692

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik, Gloo, and NGINX)

  • You can even have multiple instances of the same controller

    (e.g. one for internal, another for external traffic)

706/1692

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik, Gloo, and NGINX)

  • You can even have multiple instances of the same controller

    (e.g. one for internal, another for external traffic)

  • Since K8s 1.18, ingressClassName can be used to tell which one to use

707/1692

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik, Gloo, and NGINX)

  • You can even have multiple instances of the same controller

    (e.g. one for internal, another for external traffic)

  • Since K8s 1.18, ingressClassName can be used to tell which one to use

  • It's OK if multiple ingress controllers configure the same resource

    (it just means that the service will be accessible through multiple paths)

708/1692

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik, Gloo, and NGINX)

  • You can even have multiple instances of the same controller

    (e.g. one for internal, another for external traffic)

  • Since K8s 1.18, ingressClassName can be used to tell which one to use

  • It's OK if multiple ingress controllers configure the same resource

    (it just means that the service will be accessible through multiple paths)

  • TCP/IP IP:PORT rules still apply: Only one can bind to 80 on host IP
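
For instance, the cheddar Ingress from earlier could pin itself to a specific controller like this (a minimal sketch; the class name nginx is illustrative and must match an IngressClass defined on the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheddar
spec:
  ingressClassName: nginx    # selects which controller handles this resource
  rules:
  - host: cheddar.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cheddar
            port:
              number: 80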

k8smastery/ingress.md

709/1692

Ingress resources: the good

  • The traffic flows directly from the ingress load balancer to the backends

    • it doesn't need to go through the ClusterIP

    • in fact, we don't even need a ClusterIP (we can use a headless service)

710/1692

Ingress resources: the good

  • The traffic flows directly from the ingress load balancer to the backends

    • it doesn't need to go through the ClusterIP

    • in fact, we don't even need a ClusterIP (we can use a headless service)

  • The load balancer can be outside of Kubernetes

    (as long as it has access to the cluster subnet)

  • This allows the use of external (hardware, physical machines...) load balancers

711/1692

Ingress resources: the good

  • The traffic flows directly from the ingress load balancer to the backends

    • it doesn't need to go through the ClusterIP

    • in fact, we don't even need a ClusterIP (we can use a headless service)

  • The load balancer can be outside of Kubernetes

    (as long as it has access to the cluster subnet)

  • This allows the use of external (hardware, physical machines...) load balancers

  • Annotations can encode special features

    (rate-limiting, A/B testing, session stickiness, etc.)

k8smastery/ingress.md

712/1692

Ingress resources: the bad (cough Annotations cough)

  • Aforementioned "special features" are not standardized yet

  • Some controllers will support them; some won't

713/1692

Ingress resources: the bad (cough Annotations cough)

714/1692

Ingress resources: the bad (cough Annotations cough)

715/1692

Ingress resources: the bad (cough Annotations cough)

k8smastery/ingress.md

716/1692

When not to use built-in Ingress resources

  • You need features beyond Ingress including:

    • TCP support, traffic splitting, mTLS, egress, service mesh
    • response transformation, routing to 2+ services
717/1692

When not to use built-in Ingress resources

  • You need features beyond Ingress including:

    • TCP support, traffic splitting, mTLS, egress, service mesh
    • response transformation, routing to 2+ services
  • You have external load balancers (like AWS ELBs) which route to NodePorts

718/1692

When not to use built-in Ingress resources

  • You need features beyond Ingress including:

    • TCP support, traffic splitting, mTLS, egress, service mesh
    • response transformation, routing to 2+ services
  • You have external load balancers (like AWS ELBs) which route to NodePorts

  • You don't need externally available HTTP services on the default ports

719/1692

When not to use built-in Ingress resources

  • You need features beyond Ingress including:

    • TCP support, traffic splitting, mTLS, egress, service mesh
    • response transformation, routing to 2+ services
  • You have external load balancers (like AWS ELBs) which route to NodePorts

  • You don't need externally available HTTP services on the default ports

  • Your proxy of choice uses a CRD rather than an Ingress resource

k8smastery/ingress.md

720/1692

Using CRDs as alternatives to Ingress resources

  • Due to the limits of the built-in Ingress, many projects are moving to CRDs

  • For example, Traefik 2.x has an IngressRoute CRD option

  • Ambassador, a controller for Envoy proxy, uses a Mapping CRD

721/1692

Using CRDs as alternatives to Ingress resources

  • Due to the limits of the built-in Ingress, many projects are moving to CRDs

  • For example, Traefik 2.x has an IngressRoute CRD option

  • Ambassador, a controller for Envoy proxy, uses a Mapping CRD

  • These CRD proxy options do ingress plus more (sometimes called API Gateways):

    • TCP Support (anything beyond HTTP/HTTPS)
    • Traffic splitting, rate limiting, circuit breaking, etc
    • Complex traffic routing, request and response transformation
722/1692

Using CRDs as alternatives to Ingress resources

  • Due to the limits of the built-in Ingress, many projects are moving to CRDs

  • For example, Traefik 2.x has an IngressRoute CRD option

  • Ambassador, a controller for Envoy proxy, uses a Mapping CRD

  • These CRD proxy options do ingress plus more (sometimes called API Gateways):

    • TCP Support (anything beyond HTTP/HTTPS)
    • Traffic splitting, rate limiting, circuit breaking, etc
    • Complex traffic routing, request and response transformation
  • Once we consider CRDs, many more proxy options are available:

    • Envoy Proxy based (Gloo, Ambassador, Contour)
    • Other Proxies (Tyk, Traefik, Kong, KrakenD)
723/1692

Using CRDs as alternatives to Ingress resources

  • Due to the limits of the built-in Ingress, many projects are moving to CRDs

  • For example, Traefik 2.x has an IngressRoute CRD option

  • Ambassador, a controller for Envoy proxy, uses a Mapping CRD

  • These CRD proxy options do ingress plus more (sometimes called API Gateways):

    • TCP Support (anything beyond HTTP/HTTPS)
    • Traffic splitting, rate limiting, circuit breaking, etc
    • Complex traffic routing, request and response transformation
  • Once we consider CRDs, many more proxy options are available:

    • Envoy Proxy based (Gloo, Ambassador, Contour)
    • Other Proxies (Tyk, Traefik, Kong, KrakenD)
  • Eventually, some more advanced features might be added to "Ingress Resource 2.0"

  • We'll cover more after we learn about CRDs and Operators

k8smastery/ingress.md

724/1692

Cleanup

Let's cleanup before we start the next lecture!

  • remove our ingress controller:

    # for Docker Desktop with LoadBalancer
    kubectl delete -f https://k8smastery.com/ic-traefik-lb.yaml
    # for minikube/MicroK8s with hostNetwork
    kubectl delete -f https://k8smastery.com/ic-traefik-hn.yaml
  • remove our ingress resources:

    kubectl delete -f ingress.yaml
    kubectl delete -f https://k8smastery.com/redirect.yaml
  • remove our cheeses:

    kubectl delete svc/cheddar svc/stilton svc/wensleydale
    kubectl delete deploy/cheddar deploy/stilton deploy/wensleydale

k8smastery/cleanup-ingress.md

725/1692

Image separating from the next chapter

726/1692

Volumes

(automatically generated title slide)

727/1692

Volumes

  • Volumes are special directories that are mounted in containers
728/1692

Volumes

  • Volumes are special directories that are mounted in containers

  • Volumes can have many different purposes:

    • share files and directories between containers running on the same machine
729/1692

Volumes

  • Volumes are special directories that are mounted in containers

  • Volumes can have many different purposes:

    • share files and directories between containers running on the same machine

    • share files and directories between containers and their host

730/1692

Volumes

  • Volumes are special directories that are mounted in containers

  • Volumes can have many different purposes:

    • share files and directories between containers running on the same machine

    • share files and directories between containers and their host

    • centralize configuration information in Kubernetes and expose it to containers

731/1692

Volumes

  • Volumes are special directories that are mounted in containers

  • Volumes can have many different purposes:

    • share files and directories between containers running on the same machine

    • share files and directories between containers and their host

    • centralize configuration information in Kubernetes and expose it to containers

    • manage credentials and secrets and expose them securely to containers

    • access storage systems (like Ceph, EBS, NFS, Portworx, and many others)

k8s/volumes.md

732/1692

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

733/1692

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

734/1692

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

  • Kubernetes volumes allow us to share data between containers in the same pod

735/1692

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

  • Kubernetes volumes allow us to share data between containers in the same pod

  • Both Docker and Kubernetes volumes enable access to storage systems

736/1692

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

  • Kubernetes volumes allow us to share data between containers in the same pod

  • Both Docker and Kubernetes volumes enable access to storage systems

  • Kubernetes volumes can also be used to expose configuration and secrets

737/1692

Kubernetes volumes vs. Docker volumes

  • Kubernetes and Docker volumes are very similar

    (the Kubernetes documentation says otherwise ...
    but it refers to Docker 1.7, which was released in 2015!)

  • Docker volumes allow us to share data between containers running on the same host

  • Kubernetes volumes allow us to share data between containers in the same pod

  • Both Docker and Kubernetes volumes enable access to storage systems

  • Kubernetes volumes can also be used to expose configuration and secrets

  • Docker has specific concepts for configuration and secrets
    (but under the hood, the technical implementation is similar)

k8s/volumes.md

738/1692

Volumes ≠ Persistent Volumes

  • Volumes and Persistent Volumes are related, but very different!
739/1692

Volumes ≠ Persistent Volumes

  • Volumes and Persistent Volumes are related, but very different!

  • Volumes:

    • appear in Pod specifications (we'll see that in a few slides)

    • do not exist as API resources (cannot do kubectl get volumes)

740/1692

Volumes ≠ Persistent Volumes

  • Volumes and Persistent Volumes are related, but very different!

  • Volumes:

    • appear in Pod specifications (we'll see that in a few slides)

    • do not exist as API resources (cannot do kubectl get volumes)

  • Persistent Volumes:

    • are API resources (can do kubectl get persistentvolumes)

    • correspond to concrete volumes (e.g. on a SAN, EBS, etc.)

    • cannot be associated with a Pod directly
      (they need a Persistent Volume Claim)

k8s/volumes.md

741/1692

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find
742/1692

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find

  • We will add a volume to that Pod manifest

743/1692

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find

  • We will add a volume to that Pod manifest

  • We will mount that volume in a container in the Pod

744/1692

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find

  • We will add a volume to that Pod manifest

  • We will mount that volume in a container in the Pod

  • By default, this volume will be an emptyDir

    (an empty directory)

745/1692

Adding a volume to a Pod

  • We will start with the simplest Pod manifest we can find

  • We will add a volume to that Pod manifest

  • We will mount that volume in a container in the Pod

  • By default, this volume will be an emptyDir

    (an empty directory)

  • It will hide ("shadow") the image directory where it's mounted

k8s/volumes.md

746/1692

Our basic Pod

apiVersion: v1
kind: Pod
metadata:
  name: nginx-without-volume
spec:
  containers:
  - name: nginx
    image: nginx

This is an MVP! (Minimum Viable Pod 😉)

It runs a single NGINX container.

k8s/volumes.md

747/1692

Trying the basic pod

  • Create the Pod:
    kubectl create -f https://k8smastery.com/nginx-1-without-volume.yaml
  • Get its IP address:

    IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP})
  • Send a request with curl:

    curl $IPADDR

(We should see the "Welcome to NGINX" page.)

k8s/volumes.md

748/1692

Adding a volume

  • We need to add the volume in two places:

    • at the Pod level (to declare the volume)

    • at the container level (to mount the volume)

  • We will declare a volume named www

  • No type is specified, so it will default to emptyDir

    (as the name implies, it will be initialized as an empty directory at pod creation)

  • In that pod, there is also a container named nginx

  • That container mounts the volume www to path /usr/share/nginx/html/

k8s/volumes.md

749/1692

The Pod with a volume

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/

k8s/volumes.md

750/1692

Trying the Pod with a volume

  • Create the Pod:
    kubectl create -f https://k8smastery.com/nginx-2-with-volume.yaml
  • Get its IP address:

    IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP})
  • Send a request with curl:

    curl $IPADDR

(We should now see a "403 Forbidden" error page.)

k8s/volumes.md

751/1692

Populating the volume with another container

  • Let's add another container to the Pod

  • Let's mount the volume in both containers

  • That container will populate the volume with static files

  • NGINX will then serve these static files

  • To populate the volume, we will clone the Spoon-Knife repository

k8s/volumes.md

752/1692

Sharing a volume between two containers

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure

k8s/volumes.md

753/1692

Sharing a volume, explained

  • We added another container to the pod

  • That container mounts the www volume on a different path (/www)

  • It uses the alpine image

  • When started, it installs git and clones the octocat/Spoon-Knife repository

    (that repository contains a tiny HTML website)

  • As a result, NGINX now serves this website

k8s/volumes.md

754/1692

Trying the shared volume

  • This one will be time-sensitive!

  • We need to catch the Pod IP address as soon as it's created

  • Then send a request to it as fast as possible

  • Watch the pods (so that we can catch the Pod IP address)
    kubectl get pods -o wide --watch

k8s/volumes.md

755/1692

Shared volume in action

  • Create the pod:
    kubectl create -f https://k8smastery.com/nginx-3-with-git.yaml
  • As soon as we see its IP address, access it:
    curl $IP
  • A few seconds later, the state of the pod will change; access it again:
    curl $IP

The first time, we should see "403 Forbidden".

The second time, we should see the HTML file from the Spoon-Knife repository.

k8s/volumes.md

756/1692

Explanations

  • Both containers are started at the same time

  • NGINX starts very quickly

    (it can serve requests immediately)

  • But at this point, the volume is empty

    (NGINX serves "403 Forbidden")

  • The other container installs git and clones the repository

    (this takes a bit longer)

  • When the other container is done, the volume holds the repository

    (NGINX serves the HTML file)

k8s/volumes.md

757/1692

The devil is in the details

  • The default restartPolicy is Always

  • This would cause our git container to run again ... and again ... and again

    (with an exponential back-off delay, as explained in the documentation)

  • That's why we specified restartPolicy: OnFailure

k8s/volumes.md

758/1692

Inconsistencies

  • There is a short period of time during which the website is not available

    (because the git container hasn't done its job yet)

  • With a bigger website, we could get inconsistent results

    (where only a part of the content is ready)

  • In real applications, this could cause incorrect results

  • How can we avoid that?

k8s/volumes.md

759/1692

Init Containers

  • We can define containers that should execute before the main ones

  • They will be executed in order

    (instead of in parallel)

  • They must all succeed before the main containers are started

  • This is exactly what we need here!

  • Let's see one in action

See Init Containers documentation for all the details.

k8s/volumes.md

760/1692

Defining Init Containers

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/

k8s/volumes.md

761/1692

Trying the init container

  • Repeat the same operation as earlier

    (try to send HTTP requests as soon as the pod comes up)

  • This time, instead of "403 Forbidden" we get a "connection refused"

  • NGINX doesn't start until the git container has done its job

  • We never get inconsistent results

    (a "half-ready" container)

k8s/volumes.md

762/1692

Other uses of init containers

  • Load content

  • Generate configuration (or certificates)

  • Database migrations

  • Waiting for other services to be up

    (to avoid flurry of connection errors in main container)

  • etc.
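
For example, here is a sketch of an init container that waits for a (hypothetical) redis Service before letting the main containers start; it is a fragment of a Pod spec, not a complete manifest:

  initContainers:
  - name: wait-for-redis
    image: busybox
    # loop until the Service name resolves in cluster DNS
    command: [ "sh", "-c", "until nslookup redis; do echo waiting for redis; sleep 2; done" ]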

k8s/volumes.md

763/1692

Volume lifecycle

  • The lifecycle of a volume is linked to the pod's lifecycle

  • This means that a volume is created when the pod is created

  • This is mostly relevant for emptyDir volumes

    (other volumes, like remote storage, are not "created" but rather "attached" )

  • A volume survives across container restarts

  • A volume is destroyed (or, for remote storage, detached) when the pod is destroyed

k8s/volumes.md

764/1692

Image separating from the next chapter

765/1692

Stateful sets

(automatically generated title slide)

766/1692

Stateful sets

  • Stateful sets are a type of resource in the Kubernetes API

    (like pods, deployments, services...)

  • They offer mechanisms to deploy scaled stateful applications

  • At a first glance, they look like deployments:

    • a stateful set defines a pod spec and a number of replicas R

    • it will make sure that R copies of the pod are running

    • that number can be changed while the stateful set is running

    • updating the pod spec will cause a rolling update to happen

  • But they also have some significant differences
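
To make this concrete, here is a minimal (hypothetical) Stateful set manifest; the names and image are placeholders, and the serviceName field references a headless service that we'll discuss later:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db      # headless service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: redis   # placeholder image for illustration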

k8s/statefulsets.md

767/1692

Stateful sets unique features

  • Pods in a stateful set are numbered (from 0 to R-1) and ordered

  • They are started and updated in order (from 0 to R-1)

  • A pod is started (or updated) only when the previous one is ready

  • They are stopped in reverse order (from R-1 to 0)

  • Each pod knows its identity (i.e. which number it is in the set)

  • Each pod can discover the IP address of the others easily

  • The pods can persist data on attached volumes

🤔 Wait a minute ... Can't we already attach volumes to pods and deployments?

k8s/statefulsets.md

768/1692

Revisiting volumes

  • Volumes are used for many purposes:

    • sharing data between containers in a pod

    • exposing configuration information and secrets to containers

    • accessing storage systems

  • Let's see examples of the latter usage

k8s/statefulsets.md

769/1692

Volumes types

  • There are many types of volumes available:

    • public cloud storage (GCEPersistentDisk, AWSElasticBlockStore, AzureDisk...)

    • private cloud storage (Cinder, VsphereVolume...)

    • traditional storage systems (NFS, iSCSI, FC...)

    • distributed storage (Ceph, Glusterfs, Portworx...)

  • Using a persistent volume requires:

    • creating the volume out-of-band (outside of the Kubernetes API)

    • referencing the volume in the pod description, with all its parameters

k8s/statefulsets.md

770/1692

Using a cloud volume

Here is a pod definition using an AWS EBS volume (that has to be created first):

apiVersion: v1
kind: Pod
metadata:
  name: pod-using-my-ebs-volume
spec:
  containers:
  - image: ...
    name: container-using-my-ebs-volume
    volumeMounts:
    - mountPath: /my-ebs
      name: my-ebs-volume
  volumes:
  - name: my-ebs-volume
    awsElasticBlockStore:
      volumeID: vol-049df61146c4d7901
      fsType: ext4

k8s/statefulsets.md

771/1692

Using an NFS volume

Here is another example using a volume on an NFS server:

apiVersion: v1
kind: Pod
metadata:
  name: pod-using-my-nfs-volume
spec:
  containers:
  - image: ...
    name: container-using-my-nfs-volume
    volumeMounts:
    - mountPath: /my-nfs
      name: my-nfs-volume
  volumes:
  - name: my-nfs-volume
    nfs:
      server: 192.168.0.55
      path: "/exports/assets"

k8s/statefulsets.md

772/1692

Shortcomings of volumes

  • Their lifecycle (creation, deletion...) is managed outside of the Kubernetes API

    (we can't just use kubectl apply/create/delete/... to manage them)

  • If a Deployment uses a volume, all replicas end up using the same volume

  • That volume must then support concurrent access

    • some volumes do (e.g. NFS servers support multiple read/write access)

    • some volumes support concurrent reads

    • some volumes support concurrent access for colocated pods

  • What we really need is a way for each replica to have its own volume

k8s/statefulsets.md

773/1692

Individual volumes

  • The Pods of a Stateful set can have individual volumes

    (i.e. in a Stateful set with 3 replicas, there will be 3 volumes)

  • These volumes can be either:

    • allocated from a pool of pre-existing volumes (disks, partitions ...)

    • created dynamically using a storage system

  • This introduces a bunch of new Kubernetes resource types:

    Persistent Volumes, Persistent Volume Claims, Storage Classes

    (and also volumeClaimTemplates, that appear within Stateful Set manifests!)
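
As a sketch, a volumeClaimTemplate sits in the Stateful set spec like this (the claim name, size, and the absence of a storage class are arbitrary choices for illustration):

spec:                    # fragment of a Stateful Set spec
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi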

k8s/statefulsets.md

774/1692

Stateful set recap

  • A Stateful set manages a number of identical pods

    (like a Deployment)

  • These pods are numbered, and started/upgraded/stopped in a specific order

  • These pods are aware of their number

    (e.g., #0 can decide to be the primary, and #1 can be secondary)

  • These pods can find the IP addresses of the other pods in the set

    (through a headless service)

  • These pods can each have their own persistent storage

    (Deployments cannot do that)
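
The headless service mentioned above is just a regular Service with clusterIP set to None; a minimal sketch (name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None      # headless: DNS returns the addresses of the individual pods
  selector:
    app: db
  ports:
  - port: 6379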

k8s/statefulsets.md

775/1692

Image separating from the next chapter

776/1692

Running a Consul cluster

(automatically generated title slide)

777/1692

Running a Consul cluster

  • Here is a good use-case for Stateful sets!

  • We are going to deploy a Consul cluster with 3 nodes

  • Consul is a highly-available key/value store

    (like etcd or Zookeeper)

  • One easy way to bootstrap a cluster is to tell each node:

    • the addresses of other nodes

    • how many nodes are expected (to know when quorum is reached)

k8s/statefulsets.md

778/1692

Bootstrapping a Consul cluster

After reading the Consul documentation carefully (and/or asking around), we figure out the minimal command-line to run our Consul cluster.

consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \
  -bootstrap-expect=3 \
  -retry-join=X.X.X.X \
  -retry-join=Y.Y.Y.Y
  • Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes

  • The same command-line can be used on all nodes (convenient!)

k8s/statefulsets.md

779/1692

Cloud Auto-join

  • Since version 1.4.0, Consul can use the Kubernetes API to find its peers

  • This is called Cloud Auto-join

  • Instead of passing an IP address, we need to pass a parameter like this:

    consul agent -retry-join "provider=k8s label_selector=\"app=consul\""
  • Consul needs to be able to talk to the Kubernetes API

  • We can provide a kubeconfig file

  • If Consul runs in a pod, it will use the service account of the pod

k8s/statefulsets.md

780/1692

Setting up Cloud auto-join

  • We need to create a service account for Consul

  • We need to create a role that can list and get pods

  • We need to bind that role to the service account

  • And of course, we need to make sure that Consul pods use that service account
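
Here is a hedged sketch of what these RBAC objects could look like (the actual k8s/consul.yaml may use different names and scope; per the next slide, it uses a cluster-wide role):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: consul
rules:
# Cloud auto-join only needs to list and get pods
- apiGroups: [ "" ]
  resources: [ "pods" ]
  verbs: [ "get", "list" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: consul
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: consul
subjects:
- kind: ServiceAccount
  name: consul
  namespace: default   # assumes Consul is deployed in the default namespace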

k8s/statefulsets.md

781/1692

Putting it all together

  • The file k8s/consul.yaml defines the required resources

    (service account, cluster role, cluster role binding, service, stateful set)

  • It has a few extra touches:

    • a podAntiAffinity prevents two pods from running on the same node

    • a preStop hook makes the pod leave the cluster when it is shut down gracefully

This was inspired by this excellent tutorial by Kelsey Hightower. Some features from the original tutorial (TLS authentication between nodes and encryption of gossip traffic) were removed for simplicity.

k8s/statefulsets.md

782/1692

Running our Consul cluster

  • We'll use the provided YAML file
  • Create the stateful set and associated service:

    kubectl apply -f ~/container.training/k8s/consul.yaml
  • Check the logs as the pods come up one after another:

    stern consul
  • Check the health of the cluster:
    kubectl exec consul-0 consul members

k8s/statefulsets.md

783/1692

Caveats

  • We aren't using actual persistence yet

    (no volumeClaimTemplate, Persistent Volume, etc.)

  • What happens if we lose a pod?

    • a new pod gets rescheduled (with an empty state)

    • the new pod tries to connect to the two others

    • it will be accepted (after 1-2 minutes of instability)

    • and it will retrieve the data from the other pods

k8s/statefulsets.md

784/1692

Failure modes

  • What happens if we lose two pods?

    • manual repair will be required

    • we will need to instruct the remaining one to act solo

    • then rejoin new pods

  • What happens if we lose three pods? (aka all of them)

    • we lose all the data (ouch)
  • If we run Consul without persistent storage, backups are a good idea!

k8s/statefulsets.md

785/1692

Image separating from the next chapter

786/1692

Persistent Volumes Claims

(automatically generated title slide)

787/1692

Persistent Volumes Claims

  • Our Pods can use a special volume type: a Persistent Volume Claim

  • A Persistent Volume Claim (PVC) is also a Kubernetes resource

    (visible with kubectl get persistentvolumeclaims or kubectl get pvc)

  • A PVC is not a volume; it is a request for a volume

  • It should indicate at least:

    • the size of the volume (e.g. "5 GiB")

    • the access mode (e.g. "read-write by a single pod")

k8s/statefulsets.md

788/1692

What's in a PVC?

  • A PVC contains at least:

    • a list of access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany)

    • a size (interpreted as the minimal storage space needed)

  • It can also contain optional elements:

    • a selector (to restrict which actual volumes it can use)

    • a storage class (used by dynamic provisioning, more on that later)

k8s/statefulsets.md

789/1692

What does a PVC look like?

Here is a manifest for a basic PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

k8s/statefulsets.md

790/1692

Using a Persistent Volume Claim

Here is a Pod definition like the ones shown earlier, but using a PVC:

apiVersion: v1
kind: Pod
metadata:
  name: pod-using-a-claim
spec:
  containers:
  - image: ...
    name: container-using-a-claim
    volumeMounts:
    - mountPath: /my-vol
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-claim

k8s/statefulsets.md

791/1692

Creating and using Persistent Volume Claims

  • PVCs can be created manually and used explicitly

    (as shown on the previous slides)

  • They can also be created and used through Stateful Sets

    (this will be shown later)

k8s/statefulsets.md

792/1692

Lifecycle of Persistent Volume Claims

  • When a PVC is created, it starts existing in "Unbound" state

    (without an associated volume)

  • A Pod referencing an unbound PVC will not start

    (the scheduler will wait until the PVC is bound to place it)

  • A special controller continuously monitors PVCs to associate them with PVs

  • If no PV is available, one must be created:

    • manually (by operator intervention)

    • using a dynamic provisioner (more on that later)

k8s/statefulsets.md

793/1692

Which PV gets associated to a PVC?

  • The PV must satisfy the PVC constraints

    (access mode, size, optional selector, optional storage class)

  • The PVs with the closest access mode are picked

  • Then the PVs with the closest size

  • It is possible to specify a claimRef when creating a PV

    (this will associate it to the specified PVC, but only if the PV satisfies all the requirements of the PVC; otherwise another PV might end up being picked)

  • For all the details about the PersistentVolumeClaimBinder, check this doc
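
For illustration, here is a sketch of a PV pre-reserved for the my-claim PVC shown earlier (the hostPath backend and the size are arbitrary):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/my-volume
  claimRef:              # reserve this PV for a specific PVC
    name: my-claim
    namespace: default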

k8s/statefulsets.md

794/1692

Persistent Volume Claims and Stateful sets

  • A Stateful set can define one (or more) volumeClaimTemplate

  • Each volumeClaimTemplate will create one Persistent Volume Claim per pod

  • Each pod will therefore have its own individual volume

  • These volumes are numbered (like the pods)

  • Example:

    • a Stateful set is named db
    • it is scaled to 3 replicas
    • it has a volumeClaimTemplate named data
    • then it will create pods db-0, db-1, db-2
    • these pods will have volumes named data-db-0, data-db-1, data-db-2

k8s/statefulsets.md

795/1692

Persistent Volume Claims are sticky

  • When updating the stateful set (e.g. image upgrade), each pod keeps its volume

  • When pods get rescheduled (e.g. node failure), they keep their volume

    (this requires a storage system that is not node-local)

  • These volumes are not automatically deleted

    (when the stateful set is scaled down or deleted)

  • If a stateful set is scaled back up later, the pods get their data back

k8s/statefulsets.md

796/1692

Dynamic provisioners

  • A dynamic provisioner monitors unbound PVCs

  • It can create volumes (and the corresponding PV) on the fly

  • This requires the PVCs to have a storage class

    (annotation volume.beta.kubernetes.io/storage-provisioner)

  • A dynamic provisioner only acts on PVCs with the right storage class

    (it ignores the other ones)

  • Just like LoadBalancer services, dynamic provisioners are optional

    (i.e. our cluster may or may not have one pre-installed)

k8s/statefulsets.md

797/1692

What's a Storage Class?

  • A Storage Class is yet another Kubernetes API resource

    (visible with e.g. kubectl get storageclass or kubectl get sc)

  • It indicates which provisioner to use

    (which controller will create the actual volume)

  • And arbitrary parameters for that provisioner

    (replication levels, type of disk ... anything relevant!)

  • Storage Classes are required if we want to use dynamic provisioning

    (but we can also create volumes manually, and ignore Storage Classes)
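
As a sketch, a Storage Class for the in-tree AWS EBS provisioner could look like this (the parameters are provisioner-specific):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2              # EBS volume type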

k8s/statefulsets.md

798/1692

The default storage class

  • At most one storage class can be marked as the default class

    (by annotating it with storageclass.kubernetes.io/is-default-class=true)

  • When a PVC is created, it will be annotated with the default storage class

    (unless it specifies an explicit storage class)

  • This only happens at PVC creation

    (existing PVCs are not updated when we mark a class as the default one)
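
Concretely, marking a class as the default means adding that annotation to its metadata, e.g. extending the gp2 sketch from the previous slide:

metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"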

k8s/statefulsets.md

799/1692

Dynamic provisioning setup

This is how we can achieve fully automated provisioning of persistent storage.

  1. Configure a storage system.

    (It needs to have an API, or be capable of automated provisioning of volumes.)

  2. Install a dynamic provisioner for this storage system.

    (This is some specific controller code.)

  3. Create a Storage Class for this system.

    (It has to match what the dynamic provisioner is expecting.)

  4. Annotate the Storage Class to be the default one.

k8s/statefulsets.md

800/1692

Dynamic provisioning usage

After setting up the system (previous slide), all we need to do is:

Create a Stateful Set that makes use of a volumeClaimTemplate.

This will trigger the following actions.

  1. The Stateful Set creates PVCs according to the volumeClaimTemplate.

  2. The Stateful Set creates Pods using these PVCs.

  3. The PVCs are automatically annotated with our Storage Class.

  4. The dynamic provisioner provisions volumes and creates the corresponding PVs.

  5. The PersistentVolumeClaimBinder associates the PVs and the PVCs together.

  6. PVCs are now bound, the Pods can start.

k8s/statefulsets.md

801/1692

Image separating from the next chapter

802/1692

Local Persistent Volumes

(automatically generated title slide)

803/1692

Local Persistent Volumes

  • We want to run that Consul cluster and actually persist data

  • But we don't have a distributed storage system

  • We are going to use local volumes instead

    (similar conceptually to hostPath volumes)

  • We can use local volumes without installing extra plugins

  • However, they are tied to a node

  • If that node goes down, the volume becomes unavailable

k8s/local-persistent-volumes.md

804/1692

With or without dynamic provisioning

  • We will deploy a Consul cluster with persistence

  • That cluster's StatefulSet will create PVCs

  • These PVCs will remain unbound¹ until we create local volumes manually

    (we will basically do the job of the dynamic provisioner)

  • Then, we will see how to automate that with a dynamic provisioner

¹Unbound = without an associated Persistent Volume.

k8s/local-persistent-volumes.md

805/1692

If we have a dynamic provisioner ...

  • The labs in this section assume that we do not have a dynamic provisioner

  • If we do have one, we need to disable it

  • Check if we have a dynamic provisioner:

    kubectl get storageclass
  • If the output contains a line with (default), run this command:

    kubectl annotate sc storageclass.kubernetes.io/is-default-class- --all
  • Check again that it is no longer marked as (default)

k8s/local-persistent-volumes.md

806/1692

Deploying Consul

  • We will use a slightly different YAML file

  • The only differences between that file and the previous one are:

    • volumeClaimTemplate defined in the Stateful Set spec

    • the corresponding volumeMounts in the Pod spec

    • the label consul has been changed to persistentconsul
      (to avoid conflicts with the other Stateful Set)

  • Apply the persistent Consul YAML file:
    kubectl apply -f ~/container.training/k8s/persistent-consul.yaml

k8s/local-persistent-volumes.md

807/1692

Observing the situation

  • Let's look at Persistent Volume Claims and Pods
  • Check that we now have an unbound Persistent Volume Claim:

    kubectl get pvc
  • We don't have any Persistent Volume:

    kubectl get pv
  • The Pod persistentconsul-0 is not scheduled yet:

    kubectl get pods -o wide

Hint: leave these commands running with -w in different windows.

k8s/local-persistent-volumes.md

808/1692

Explanations

  • In a Stateful Set, the Pods are started one by one

  • persistentconsul-1 won't be created until persistentconsul-0 is running

  • persistentconsul-0 has a dependency on an unbound Persistent Volume Claim

  • The scheduler won't schedule the Pod until the PVC is bound

    (because the PVC might be bound to a volume that is only available on a subset of nodes; for instance EBS are tied to an availability zone)

k8s/local-persistent-volumes.md

809/1692

Creating Persistent Volumes

  • Let's create 3 local directories (/mnt/consul) on node2, node3, node4

  • Then create 3 Persistent Volumes corresponding to these directories

  • Create the local directories:

    for NODE in node2 node3 node4; do
      ssh $NODE sudo mkdir -p /mnt/consul
    done
  • Create the PV objects:

    kubectl apply -f ~/container.training/k8s/volumes-for-consul.yaml
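
We don't reproduce volumes-for-consul.yaml here, but each PV in it presumably looks something like this sketch: a local volume pinned to one node with a nodeAffinity (names and size are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-node2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: /mnt/consul
  nodeAffinity:          # a local PV must be pinned to a specific node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: [ "node2" ]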

k8s/local-persistent-volumes.md

810/1692

Check our Consul cluster

  • The PVs that we created will be automatically matched with the PVCs

  • Once a PVC is bound, its pod can start normally

  • Once the pod persistentconsul-0 has started, persistentconsul-1 can be created, etc.

  • Eventually, our Consul cluster is up, and backed by "persistent" volumes

  • Check that our Consul cluster indeed has 3 members:
    kubectl exec persistentconsul-0 consul members

k8s/local-persistent-volumes.md

811/1692

Devil is in the details (1/2)

  • The size of the Persistent Volumes is bogus

    (it is used when matching PVs and PVCs together, but there is no actual quota or limit)

k8s/local-persistent-volumes.md

812/1692

Devil is in the details (2/2)

  • This specific example worked because we had exactly 1 free PV per node:

    • if we had created multiple PVs per node ...

    • we could have ended with two PVCs bound to PVs on the same node ...

    • which would have required two pods to be on the same node ...

    • which is forbidden by the anti-affinity constraints in the StatefulSet

  • To avoid that, we need to associate the PVs with a Storage Class that has:

    volumeBindingMode: WaitForFirstConsumer

    (this means that a PVC will be bound to a PV only after being used by a Pod)

  • See this blog post for more details
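
Such a Storage Class for manually created local volumes is typically defined without a dynamic provisioner, along these lines (sketch):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning; PVs are created manually
volumeBindingMode: WaitForFirstConsumer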

k8s/local-persistent-volumes.md

813/1692

Bulk provisioning

  • It's not practical to manually create directories and PVs for each app

  • We could pre-provision a number of PVs across our fleet

  • We could even automate that with a Daemon Set:

    • creating a number of directories on each node

    • creating the corresponding PV objects

  • We also need to recycle volumes

  • ... This can quickly get out of hand

k8s/local-persistent-volumes.md

814/1692

Dynamic provisioning

  • We could also write our own provisioner, which would:

    • watch the PVCs across all namespaces

    • when a PVC is created, create a corresponding PV on a node

  • Or we could use one of the dynamic provisioners for local persistent volumes

    (for instance the Rancher local path provisioner)

k8s/local-persistent-volumes.md

815/1692

Strategies for local persistent volumes

  • Remember, when a node goes down, the volumes on that node become unavailable

  • High availability will require another layer of replication

    (like what we've just seen with Consul; or primary/secondary; etc)

  • Pre-provisioning PVs makes sense for machines with local storage

    (e.g. cloud instance storage; or storage directly attached to a physical machine)

  • Dynamic provisioning makes sense for large number of applications

    (when we can't or won't dedicate a whole disk to a volume)

  • It's possible to mix both (using distinct Storage Classes)

k8s/local-persistent-volumes.md

816/1692

Image separating from the next chapter

817/1692

Kustomize

(automatically generated title slide)

818/1692

Kustomize

  • Kustomize lets us transform YAML files representing Kubernetes resources

  • The original YAML files are valid resource files

    (e.g. they can be loaded with kubectl apply -f)

  • They are left untouched by Kustomize

  • Kustomize lets us define overlays that extend or change the resource files

k8s/kustomize.md

819/1692

Differences with Helm

  • Helm charts use placeholders {{ like.this }}

  • Kustomize "bases" are standard Kubernetes YAML

  • It is possible to use an existing set of YAML as a Kustomize base

  • As a result, writing a Helm chart is more work ...

  • ... But Helm charts are also more powerful; e.g. they can:

    • use flags to conditionally include resources or blocks

    • check if a given Kubernetes API group is supported

    • and much more

k8s/kustomize.md

820/1692

Kustomize concepts

  • Kustomize needs a kustomization.yaml file

  • That file can be a base or a variant

  • If it's a base:

    • it lists YAML resource files to use
  • If it's a variant (or overlay):

    • it refers to (at least) one base

    • and some patches
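
As a hedged sketch (file names are illustrative; older Kustomize releases use bases: and patchesStrategicMerge:, newer ones prefer resources: and patches:):

# base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml

# overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- replicas-patch.yaml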

k8s/kustomize.md

821/1692

An easy way to get started with Kustomize

  • We are going to use Replicated Ship to experiment with Kustomize

  • The Replicated Ship CLI has been installed on our clusters

  • Replicated Ship has multiple workflows; here is what we will do:

    • initialize a Kustomize overlay from a remote GitHub repository

    • customize some values using the web UI provided by Ship

    • look at the resulting files and apply them to the cluster

k8s/kustomize.md

822/1692

Getting started with Ship

  • We need to run ship init in a new directory

  • ship init requires a URL to a remote repository containing Kubernetes YAML

  • It will clone that repository and start a web UI

  • Later, it can watch that repository and/or update from it

  • We will use the jpetazzo/kubercoins repository

    (it contains all the DockerCoins resources as YAML files)

k8s/kustomize.md

823/1692

ship init

  • Change to a new directory:

    mkdir ~/kustomcoins
    cd ~/kustomcoins
  • Run ship init with the kustomcoins repository:

    ship init https://github.com/jpetazzo/kubercoins

k8s/kustomize.md

824/1692

Access the web UI

  • ship init tells us to connect on localhost:8800

  • We need to replace localhost with the address of our node

    (since we run on a remote machine)

  • Follow the steps in the web UI, and change one parameter

    (e.g. set the number of replicas in the worker Deployment)

  • Complete the web workflow, and go back to the CLI

k8s/kustomize.md

825/1692

Inspect the results

  • Look at the content of our directory

  • base contains the kubercoins repository + a kustomization.yaml file

  • overlays/ship contains the Kustomize overlay referencing the base + our patch(es)

  • rendered.yaml is a YAML bundle containing the patched application

  • .ship contains a state file used by Ship

k8s/kustomize.md

826/1692

Using the results

  • We can kubectl apply -f rendered.yaml

    (on any version of Kubernetes)

  • Starting with Kubernetes 1.14, we can apply the overlay directly with:

    kubectl apply -k overlays/ship
  • But let's not do that for now!

  • We will create a new copy of DockerCoins in another namespace

k8s/kustomize.md

827/1692

Deploy DockerCoins with Kustomize

  • Create a new namespace:

    kubectl create namespace kustomcoins
  • Deploy DockerCoins:

    kubectl apply -f rendered.yaml --namespace=kustomcoins
  • Or, with Kubernetes 1.14, you can also do this:

    kubectl apply -k overlays/ship --namespace=kustomcoins

k8s/kustomize.md

828/1692

Checking our new copy of DockerCoins

  • We can check the worker logs, or the web UI
  • Retrieve the NodePort number of the web UI:

    kubectl get service webui --namespace=kustomcoins
  • Open it in a web browser

  • Look at the worker logs:

    kubectl logs deploy/worker --tail=10 --follow --namespace=kustomcoins

Note: it might take a minute or two for the worker to start.

k8s/kustomize.md

829/1692

Image separating from the next chapter

830/1692

Managing stacks with Helm

(automatically generated title slide)

831/1692

Managing stacks with Helm

  • We created our first resources with kubectl run, kubectl expose ...

  • We have also created resources by loading YAML files with kubectl apply -f

  • For larger stacks, managing thousands of lines of YAML is unreasonable

  • These YAML bundles need to be customized with variable parameters

    (E.g.: number of replicas, image version to use ...)

  • It would be nice to have an organized, versioned collection of bundles

  • It would be nice to be able to upgrade/rollback these bundles carefully

  • Helm is an open source project offering all these things!

k8s/helm-intro.md

832/1692

Helm concepts

  • helm is a CLI tool

  • It is used to find, install, upgrade charts

  • A chart is an archive containing templatized YAML bundles

  • Charts are versioned

  • Charts can be stored on private or public repositories

k8s/helm-intro.md

833/1692

Differences between charts and packages

  • A package (deb, rpm...) contains binaries, libraries, etc.

  • A chart contains YAML manifests

    (the binaries, libraries, etc. are in the images referenced by the chart)

  • On most distributions, a package can only be installed once

    (installing another version replaces the installed one)

  • A chart can be installed multiple times

  • Each installation is called a release

  • This allows us to install e.g. 10 instances of MongoDB

    (with potentially different versions and configurations)

k8s/helm-intro.md

834/1692

Wait a minute ...

But, on my Debian system, I have Python 2 and Python 3.
Also, I have multiple versions of the Postgres database engine!

Yes!

But they have different package names:

  • python2.7, python3.8

  • postgresql-10, postgresql-11

Good to know: the Postgres package in Debian includes provisions to deploy multiple Postgres servers on the same system, but it's an exception (and it's a lot of work done by the package maintainer, not by the dpkg or apt tools).

k8s/helm-intro.md

835/1692

Helm 2 vs Helm 3

  • Helm 3 was released November 13, 2019

  • Charts remain compatible between Helm 2 and Helm 3

  • The CLI is very similar (with minor changes to some commands)

  • The main difference is that Helm 2 uses tiller, a server-side component

  • Helm 3 doesn't use tiller at all, making it simpler (yay!)

k8s/helm-intro.md

836/1692

With or without tiller

  • With Helm 3:

    • the helm CLI communicates directly with the Kubernetes API

    • it creates resources (deployments, services...) with our credentials

  • With Helm 2:

    • the helm CLI communicates with tiller, telling tiller what to do

    • tiller then communicates with the Kubernetes API, using its own credentials

  • This indirect model caused significant permissions headaches

    (tiller required very broad permissions to function)

  • tiller was removed in Helm 3 to simplify the security aspects

k8s/helm-intro.md

837/1692

Installing Helm

  • If the helm CLI is not installed in your environment, install it
  • Check if helm is installed:

    helm
  • If it's not installed, run the following command:

    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \
    | bash

(To install Helm 2, replace get-helm-3 with get.)

k8s/helm-intro.md

838/1692

Only if using Helm 2 ...

  • We need to install Tiller and give it some permissions

  • Tiller is composed of a service and a deployment in the kube-system namespace

  • They can be managed (installed, upgraded...) with the helm CLI

  • Deploy Tiller:
    helm init

At the end of the install process, you will see:

Happy Helming!

k8s/helm-intro.md

839/1692

Only if using Helm 2 ...

  • Tiller needs permissions to create Kubernetes resources

  • In a more realistic deployment, you might create per-user or per-team service accounts, roles, and role bindings

  • Grant cluster-admin role to kube-system:default service account:
    kubectl create clusterrolebinding add-on-cluster-admin \
    --clusterrole=cluster-admin --serviceaccount=kube-system:default

(Defining the exact roles and permissions on your cluster requires a deeper knowledge of Kubernetes' RBAC model. The command above is fine for personal and development clusters.)

k8s/helm-intro.md

840/1692

Charts and repositories

  • A repository (or repo in short) is a collection of charts

  • It's just a bunch of files

    (they can be hosted by a static HTTP server, or on a local directory)

  • We can add "repos" to Helm, giving them a nickname

  • The nickname is used when referring to charts on that repo

    (for instance, if we try to install hello/world, that means the chart world on the repo hello; and that repo hello might be something like https://blahblah.hello.io/charts/)

k8s/helm-intro.md

841/1692

Managing repositories

  • Let's check what repositories we have, and add the stable repo

    (the stable repo contains a set of official-ish charts)

  • List our repos:

    helm repo list
  • Add the stable repo:

    helm repo add stable https://kubernetes-charts.storage.googleapis.com/

Adding a repo can take a few seconds (it downloads the list of charts from the repo).

It's OK to add a repo that already exists (it will merely update it).

k8s/helm-intro.md

842/1692

Search available charts

  • We can search available charts with helm search

  • We need to specify where to search (only our repos, or Helm Hub)

  • Let's search for all charts mentioning tomcat!

  • Search for tomcat in the repo that we added earlier:

    helm search repo tomcat
  • Search for tomcat on the Helm Hub:

    helm search hub tomcat

Helm Hub indexes many repos, using the Monocular server.

k8s/helm-intro.md

843/1692

Charts and releases

  • "Installing a chart" means creating a release

  • We need to name that release

    (or use the --generate-name flag to get Helm to generate one for us)

  • Install the tomcat chart that we found earlier:

    helm install java4ever stable/tomcat
  • List the releases:

    helm list

k8s/helm-intro.md

844/1692

Searching and installing with Helm 2

  • Helm 2 doesn't have support for the Helm Hub

  • The helm search command only takes a search string argument

    (e.g. helm search tomcat)

  • With Helm 2, the name is optional:

    helm install stable/tomcat will automatically generate a name

    helm install --name java4ever stable/tomcat will specify a name

k8s/helm-intro.md

845/1692

Viewing resources of a release

  • This specific chart labels all its resources with a release label

  • We can use a selector to see these resources

  • List all the resources created by this release:
    kubectl get all --selector=release=java4ever

Note: this release label wasn't added automatically by Helm.
It is defined in that chart. In other words, not all charts will provide this label.

k8s/helm-intro.md

846/1692

Configuring a release

  • By default, stable/tomcat creates a service of type LoadBalancer

  • We would like to change that to a NodePort

  • We could use kubectl edit service java4ever-tomcat, but ...

    ... our changes would get overwritten next time we update that chart!

  • Instead, we are going to set a value

  • Values are parameters that the chart can use to change its behavior

  • Values have default values

  • Each chart is free to define its own values and their defaults

k8s/helm-intro.md

847/1692

Checking possible values

  • We can inspect a chart with helm show or helm inspect
  • Look at the README for tomcat:

    helm show readme stable/tomcat
  • Look at the values and their defaults:

    helm show values stable/tomcat

The values may or may not have useful comments.

The readme may or may not have (accurate) explanations for the values.

(If we're unlucky, there won't be any indication about how to use the values!)

k8s/helm-intro.md

848/1692

Setting values

  • Values can be set when installing a chart, or when upgrading it

  • We are going to update java4ever to change the type of the service

  • Update java4ever:
    helm upgrade java4ever stable/tomcat --set service.type=NodePort

Note that we have to specify the chart that we use (stable/tomcat), even if we just want to update some values.

We can set multiple values. If we want to set many values, we can use -f/--values and pass a YAML file with all the values.

All unspecified values will take the default values defined in the chart.
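
For instance, a hypothetical values file achieving the same change as the --set flag above, to be passed with -f/--values:

service:
  type: NodePort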

k8s/helm-intro.md

849/1692

Connecting to tomcat

  • Let's check the tomcat server that we just installed

  • Note: its readiness probe has a 60s delay

    (so it will take 60s after the initial deployment before the service works)

  • Check the node port allocated to the service:

    kubectl get service java4ever-tomcat
    PORT=$(kubectl get service java4ever-tomcat -o jsonpath={..nodePort})
  • Connect to it, checking the demo app on /sample/:

    curl localhost:$PORT/sample/

k8s/helm-intro.md

850/1692

Image separating from the next chapter

851/1692

Helm chart format

(automatically generated title slide)

852/1692

Helm chart format

  • What exactly is a chart?

  • What's in it?

  • What would be involved in creating a chart?

    (we won't create a chart, but we'll see the required steps)

k8s/helm-chart-format.md

853/1692

What is a chart

  • A chart is a set of files

  • Some of these files are mandatory for the chart to be viable

    (more on that later)

  • These files are typically packed in a tarball

  • These tarballs are stored in "repos"

    (which can be static HTTP servers)

  • We can install from a repo, from a local tarball, or an unpacked tarball

    (the latter option is preferred when developing a chart)

k8s/helm-chart-format.md

854/1692

What's in a chart

  • A chart must have at least:

    • a templates directory, with YAML manifests for Kubernetes resources

    • a values.yaml file, containing (tunable) parameters for the chart

    • a Chart.yaml file, containing metadata (name, version, description ...)

  • Let's look at a simple chart, stable/tomcat

k8s/helm-chart-format.md

855/1692

Downloading a chart

  • We can use helm pull to download a chart from a repo
  • Download the tarball for stable/tomcat:

    helm pull stable/tomcat

    (This will create a file named tomcat-X.Y.Z.tgz.)

  • Or, download + untar stable/tomcat:

    helm pull stable/tomcat --untar

    (This will create a directory named tomcat.)

k8s/helm-chart-format.md

856/1692

Looking at the chart's content

  • Let's look at the files and directories in the tomcat chart
  • Display the tree structure of the chart we just downloaded:
    tree tomcat

We see the components mentioned above: Chart.yaml, templates/, values.yaml.

k8s/helm-chart-format.md

857/1692

Templates

  • The templates/ directory contains YAML manifests for Kubernetes resources

    (Deployments, Services, etc.)

  • These manifests can contain template tags

    (using the standard Go template library)

  • Look at the template file for the tomcat Service resource:
    cat tomcat/templates/appsrv-svc.yaml

k8s/helm-chart-format.md

858/1692

Analyzing the template file

  • Tags are identified by {{ ... }}

  • {{ template "x.y" }} expands a named template

    (previously defined with {{ define "x.y" }}...stuff...{{ end }})

  • The . in {{ template "x.y" . }} is the context for that named template

    (so that the named template block can access variables from the local context)

  • {{ .Release.xyz }} refers to built-in variables initialized by Helm

    (indicating the chart name, version, whether we are installing or upgrading ...)

  • {{ .Values.xyz }} refers to tunable/settable values

    (more on that in a minute)

k8s/helm-chart-format.md

859/1692

Values

  • Each chart comes with a values file

  • It's a YAML file containing a set of default parameters for the chart

  • The values can be accessed in templates with e.g. {{ .Values.x.y }}

    (corresponding to field y in map x in the values file)

  • The values can be set or overridden when installing or upgrading a chart:

    • with --set x.y=z (can be used multiple times to set multiple values)

    • with --values some-yaml-file.yaml (set a bunch of values from a file)

  • Charts following best practices will have values following specific patterns

    (e.g. having a service map allowing to set service.type etc.)

k8s/helm-chart-format.md

860/1692

Other useful tags

  • {{ if x }} y {{ end }} allows us to include y if x evaluates to true

    (can be used for e.g. healthchecks, annotations, or even an entire resource)

  • {{ range x }} y {{ end }} iterates over x, evaluating y each time

    (the elements of x are assigned to . in the range scope)

  • {{- x }}/{{ x -}} will remove whitespace on the left/right

  • The whole Sprig library, with additions:

    lower upper quote trim default b64enc b64dec sha256sum indent toYaml ...

k8s/helm-chart-format.md

861/1692

Pipelines

  • {{ quote blah }} can also be expressed as {{ blah | quote }}

  • With multiple arguments, {{ x y z }} can be expressed as {{ z | x y }}

  • Example: {{ .Values.annotations | toYaml | indent 4 }}

    • transforms the map under annotations into a YAML string

    • indents it with 4 spaces (to match the surrounding context)

  • Pipelines are not specific to Helm, but a feature of Go templates

    (check the Go text/template documentation for more details and examples)
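
Putting these tags together, a hypothetical template fragment could look like this (the value names are made up; real charts define their own):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-web
  {{- if .Values.annotations }}
  annotations:
{{ .Values.annotations | toYaml | indent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type | default "ClusterIP" }}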

k8s/helm-chart-format.md

862/1692

README and NOTES.txt

  • At the top-level of the chart, it's a good idea to have a README

  • It will be viewable with e.g. helm show readme stable/tomcat

  • In the templates/ directory, we can also have a NOTES.txt file

  • When the template is installed (or upgraded), NOTES.txt is processed too

    (i.e. its {{ ... }} tags are evaluated)

  • It gets displayed after the install or upgrade

  • It's a great place to generate messages to tell the user:

    • how to connect to the release they just deployed

    • any passwords or other thing that we generated for them

k8s/helm-chart-format.md

863/1692

Additional files

  • We can place arbitrary files in the chart (outside of the templates/ directory)

  • They can be accessed in templates with .Files

  • They can be transformed into ConfigMaps or Secrets with AsConfig and AsSecrets

    (see this example in the Helm docs)

k8s/helm-chart-format.md

864/1692

Hooks and tests

  • We can define hooks in our templates

  • Hooks are resources annotated with "helm.sh/hook": NAME-OF-HOOK

  • Hook names include pre-install, post-install, test, and much more

  • The resources defined in hooks are loaded at a specific time

  • Hook execution is synchronous

    (if the resource is a Job or Pod, Helm will wait for its completion)

  • This can be used for database migrations, backups, notifications, smoke tests ...

  • Hooks named test are executed only when running helm test RELEASE-NAME
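
As a hedged sketch, a hook is just a resource carrying the hook annotation; here, a made-up post-install smoke-test Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-smoke-test
  annotations:
    "helm.sh/hook": post-install        # run this Job after the release is installed
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: smoke-test
        image: curlimages/curl
        # the URL is hypothetical; point it at a service created by the chart
        args: [ "http://{{ .Release.Name }}:80/" ]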

k8s/helm-chart-format.md

865/1692

Image separating from the next chapter

866/1692

Creating a basic chart

(automatically generated title slide)

867/1692

Creating a basic chart

  • We are going to show a way to create a very simplified chart

  • In a real chart, lots of things would be templatized

    (Resource names, service types, number of replicas...)

  • Create a sample chart:

    helm create dockercoins
  • Move away the sample templates and create an empty template directory:

    mv dockercoins/templates dockercoins/default-templates
    mkdir dockercoins/templates

k8s/helm-create-basic-chart.md

868/1692

Exporting the YAML for our application

  • The following section assumes that DockerCoins is currently running

  • If DockerCoins is not running, see next slide

  • Create one YAML file for each resource that we need:
    while read kind name; do
      kubectl get -o yaml $kind $name > dockercoins/templates/$name-$kind.yaml
    done <<EOF
    deployment worker
    deployment hasher
    daemonset rng
    deployment webui
    deployment redis
    service hasher
    service rng
    service webui
    service redis
    EOF

k8s/helm-create-basic-chart.md

869/1692

Obtaining DockerCoins YAML

  • If DockerCoins is not running, we can also obtain the YAML from a public repository
  • Clone the kubercoins repository:

    git clone https://github.com/jpetazzo/kubercoins
  • Copy the YAML files to the templates/ directory:

    cp kubercoins/*.yaml dockercoins/templates/

k8s/helm-create-basic-chart.md

870/1692

Testing our helm chart

  • Let's install our helm chart!
    helm install helmcoins dockercoins
    (helmcoins is the name of the release; dockercoins is the local path of the chart)
871/1692

Testing our helm chart

  • Let's install our helm chart!
    helm install helmcoins dockercoins
    (helmcoins is the name of the release; dockercoins is the local path of the chart)
  • Since the application is already deployed, this will fail:

    Error: rendered manifests contain a resource that already exists.
    Unable to continue with install: existing resource conflict:
    kind: Service, namespace: default, name: hasher
  • To avoid naming conflicts, we will deploy the application in another namespace

k8s/helm-create-basic-chart.md

872/1692

Switching to another namespace

  • We need to create a new namespace

    (Helm 2 creates namespaces automatically; Helm 3 doesn't anymore)

  • We need to tell Helm which namespace to use

  • Create a new namespace:

    kubectl create namespace helmcoins
  • Deploy our chart in that namespace:

    helm install helmcoins dockercoins --namespace=helmcoins

k8s/helm-create-basic-chart.md

873/1692

Helm releases are namespaced

  • Let's try to see the release that we just deployed
  • List Helm releases:
    helm list

Our release doesn't show up!

We have to specify its namespace (or switch to that namespace).

k8s/helm-create-basic-chart.md

874/1692

Specifying the namespace

  • Try again, with the correct namespace
  • List Helm releases in helmcoins:
    helm list --namespace=helmcoins

k8s/helm-create-basic-chart.md

875/1692

Checking our new copy of DockerCoins

  • We can check the worker logs, or the web UI
  • Retrieve the NodePort number of the web UI:

    kubectl get service webui --namespace=helmcoins
  • Open it in a web browser

  • Look at the worker logs:

    kubectl logs deploy/worker --tail=10 --follow --namespace=helmcoins

Note: it might take a minute or two for the worker to start.

k8s/helm-create-basic-chart.md

876/1692

Discussion, shortcomings

  • Helm (and Kubernetes) best practices recommend to add a number of annotations

    (e.g. app.kubernetes.io/name, helm.sh/chart, app.kubernetes.io/instance ...)

  • Our basic chart doesn't have any of these

  • Our basic chart doesn't use any template tag

  • Does it make sense to use Helm in that case?

  • Yes, because Helm will:

    • track the resources created by the chart

    • save successive revisions, allowing us to rollback

Helm docs and Kubernetes docs have details about recommended annotations and labels.

k8s/helm-create-basic-chart.md

877/1692

Cleaning up

  • Let's remove that chart before moving on
  • Delete the release (don't forget to specify the namespace):
    helm delete helmcoins --namespace=helmcoins

k8s/helm-create-basic-chart.md

878/1692

Image separating from the next chapter

879/1692

Creating better Helm charts

(automatically generated title slide)

880/1692

Creating better Helm charts

  • We are going to create a chart with the helm create helper

  • This will give us a chart implementing lots of Helm best practices

    (labels, annotations, structure of the values.yaml file ...)

  • We will use that chart as a generic Helm chart

  • We will use it to deploy DockerCoins

  • Each component of DockerCoins will have its own release

  • In other words, we will "install" that Helm chart multiple times

    (one time per component of DockerCoins)

k8s/helm-create-better-chart.md

881/1692

Creating a generic chart

  • Rather than starting from scratch, we will use helm create

  • This will give us a basic chart that we will customize

  • Create a basic chart:
    cd ~
    helm create helmcoins

This creates a basic chart in the directory helmcoins.

k8s/helm-create-better-chart.md

882/1692

What's in the basic chart?

  • The basic chart will create a Deployment and a Service

  • Optionally, it will also include an Ingress

  • If we don't pass any values, it will deploy the nginx image

  • We can override many things in that chart

  • Let's try to deploy DockerCoins components with that chart!

k8s/helm-create-better-chart.md

883/1692

Writing values.yaml for our components

  • We need to write one values.yaml file for each component

    (hasher, redis, rng, webui, worker)

  • We will start with the values.yaml of the chart, and remove what we don't need

  • We will create 5 files:

    hasher.yaml, redis.yaml, rng.yaml, webui.yaml, worker.yaml

  • In each file, we want to have:

    image:
      repository: IMAGE-REPOSITORY-NAME
      tag: IMAGE-TAG

k8s/helm-create-better-chart.md

884/1692

Getting started

  • For component X, we want to use the image dockercoins/X:v0.1

    (for instance, for rng, we want to use the image dockercoins/rng:v0.1)

  • Exception: for redis, we want to use the official image redis:latest

  • Write YAML files for the 5 components, with the following model:
    image:
      repository: IMAGE-REPOSITORY-NAME (e.g. dockercoins/worker)
      tag: IMAGE-TAG (e.g. v0.1)
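
For example, a worker.yaml file following that model could look like this (a minimal sketch; the chart created by helm create supports many other values, which we leave at their defaults):

    image:
      repository: dockercoins/worker
      tag: v0.1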

k8s/helm-create-better-chart.md

885/1692

Deploying DockerCoins components

  • For convenience, let's work in a separate namespace
  • Create a new namespace (if it doesn't already exist):

    kubectl create namespace helmcoins
  • Switch to that namespace:

    kns helmcoins

k8s/helm-create-better-chart.md

886/1692

Deploying the chart

  • To install a chart, we can use the following command:

    helm install COMPONENT-NAME CHART-DIRECTORY
  • We can also use the following command, which is idempotent:

    helm upgrade COMPONENT-NAME CHART-DIRECTORY --install
  • Install the 5 components of DockerCoins:
    for COMPONENT in hasher redis rng webui worker; do
    helm upgrade $COMPONENT helmcoins --install --values=$COMPONENT.yaml
    done

k8s/helm-create-better-chart.md

887/1692

Checking what we've done

  • Let's see if DockerCoins is working!
  • Check the logs of the worker:

    stern worker
  • Look at the resources that were created:

    kubectl get all

There are many issues to fix!

k8s/helm-create-better-chart.md

888/1692

Can't pull image

  • It looks like our images can't be found
  • Use kubectl describe on any of the pods in error
  • We're trying to pull rng:1.16.0 instead of rng:v0.1!

  • Where does that 1.16.0 tag come from?

k8s/helm-create-better-chart.md

889/1692

Inspecting our template

  • Let's look at the templates/ directory

    (and try to find the one generating the Deployment resource)

  • Show the structure of the helmcoins chart that Helm generated:

    tree helmcoins
  • Check the file helmcoins/templates/deployment.yaml

  • Look for the image: parameter

The image tag references {{ .Chart.AppVersion }}. Where does that come from?

k8s/helm-create-better-chart.md

890/1692

The .Chart variable

  • .Chart is a map corresponding to the values in Chart.yaml

  • Let's look for AppVersion there!

  • Check the file helmcoins/Chart.yaml

  • Look for the appVersion: parameter

(Yes, the case is different between the template and the Chart file.)

k8s/helm-create-better-chart.md

891/1692

Using the correct tags

  • If we change AppVersion to v0.1, it will change for all deployments

    (including redis)

  • Instead, let's change the template to use {{ .Values.image.tag }}

    (to match what we've specified in our values YAML files)

  • Edit helmcoins/templates/deployment.yaml

  • Replace {{ .Chart.AppVersion }} with {{ .Values.image.tag }}
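
For reference, the relevant line in the template should go from something like this (the exact formatting produced by helm create may vary slightly between Helm versions):

    image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"

to this:

    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"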

k8s/helm-create-better-chart.md

892/1692

Upgrading to use the new template

  • Technically, we just made a new version of the chart

  • To use the new template, we need to upgrade the release to use that chart

  • Upgrade all components:

    for COMPONENT in hasher redis rng webui worker; do
    helm upgrade $COMPONENT helmcoins
    done
  • Check how our pods are doing:

    kubectl get pods

We should see all pods "Running". But ... not all of them are READY.

k8s/helm-create-better-chart.md

893/1692

Troubleshooting readiness

  • hasher, rng, webui should show up as 1/1 READY

  • But redis and worker should show up as 0/1 READY

  • Why?

k8s/helm-create-better-chart.md

894/1692

Troubleshooting pods

  • The easiest way to troubleshoot pods is to look at events

  • We can look at all the events on the cluster (with kubectl get events)

  • Or we can use kubectl describe on the objects that have problems

    (kubectl describe will retrieve the events related to the object)

  • Check the events for the redis pods:
    kubectl describe pod -l app.kubernetes.io/name=redis

It's failing both its liveness and readiness probes!

k8s/helm-create-better-chart.md

895/1692

Healthchecks

  • The default chart defines healthchecks doing HTTP requests on port 80

  • That won't work for redis and worker

    (redis is not HTTP, and not on port 80; worker doesn't even listen)

896/1692

Healthchecks

  • The default chart defines healthchecks doing HTTP requests on port 80

  • That won't work for redis and worker

    (redis is not HTTP, and not on port 80; worker doesn't even listen)

  • We could remove or comment out the healthchecks

  • We could also make them conditional

  • This sounds more interesting, let's do that!

k8s/helm-create-better-chart.md

897/1692

Conditionals

  • We need to enclose the healthcheck block with:

    {{ if false }} at the beginning (we can change the condition later)

    {{ end }} at the end

  • Edit helmcoins/templates/deployment.yaml

  • Add {{ if false }} on the line before livenessProbe

  • Add {{ end }} after the readinessProbe section

    (see next slide for details)

k8s/helm-create-better-chart.md

898/1692

This is what the new YAML should look like (the added lines are the {{ if false }} and {{ end }} lines):

ports:
  - name: http
    containerPort: 80
    protocol: TCP
{{ if false }}
livenessProbe:
  httpGet:
    path: /
    port: http
readinessProbe:
  httpGet:
    path: /
    port: http
{{ end }}
resources:
  {{- toYaml .Values.resources | nindent 12 }}

k8s/helm-create-better-chart.md

899/1692

Testing the new chart

  • We need to upgrade all the services again to use the new chart
  • Upgrade all components:

    for COMPONENT in hasher redis rng webui worker; do
    helm upgrade $COMPONENT helmcoins
    done
  • Check how our pods are doing:

    kubectl get pods

Everything should now be running!

k8s/helm-create-better-chart.md

900/1692

What's next?

  • Is this working now?
  • Let's check the logs of the worker:
    stern worker

This error might look familiar ... The worker can't resolve redis.

Typically, that error means that the redis service doesn't exist.

k8s/helm-create-better-chart.md

901/1692

Checking services

  • What about the services created by our chart?
  • Check the list of services:
    kubectl get services

They are named COMPONENT-helmcoins instead of just COMPONENT.

We need to change that!

k8s/helm-create-better-chart.md

902/1692

Where do the service names come from?

  • Look at the YAML template used for the services

  • It should be using {{ include "helmcoins.fullname" }}

  • include indicates a template block defined somewhere else

  • Find where that fullname thing is defined:
    grep define.*fullname helmcoins/templates/*

It should be in _helpers.tpl.

We can look at the definition, but it's fairly complex ...

k8s/helm-create-better-chart.md

903/1692

Changing service names

  • Instead of that {{ include }} tag, let's use the name of the release

  • The name of the release is available as {{ .Release.Name }}

  • Edit helmcoins/templates/service.yaml

  • Replace the service name with {{ .Release.Name }}

  • Upgrade all the releases to use the new chart

  • Confirm that the services now have the right names

k8s/helm-create-better-chart.md

904/1692

Is it working now?

  • If we look at the worker logs, it appears that the worker is still stuck

  • What could be happening?

905/1692

Is it working now?

  • If we look at the worker logs, it appears that the worker is still stuck

  • What could be happening?

  • The redis service is not on port 80!

  • Let's see how the port number is set

  • We need to look at both the deployment template and the service template

k8s/helm-create-better-chart.md

906/1692

Service template

  • In the service template, we have the following section:

    ports:
      - port: {{ .Values.service.port }}
        targetPort: http
        protocol: TCP
        name: http
  • port is the port on which the service is "listening"

    (i.e. to which our code needs to connect)

  • targetPort is the port on which the pods are listening

  • The name is not important (it's OK if it's http even for non-HTTP traffic)

k8s/helm-create-better-chart.md

907/1692

Setting the redis port

  • Let's add a service.port value to the redis release
  • Edit redis.yaml to add:

    service:
      port: 6379
  • Apply the new values file:

    helm upgrade redis helmcoins --values=redis.yaml

k8s/helm-create-better-chart.md

908/1692

Deployment template

  • If we look at the deployment template, we see this section:

    ports:
      - name: http
        containerPort: 80
        protocol: TCP
  • The container port is hard-coded to 80

  • We'll change it to use the port number specified in the values

k8s/helm-create-better-chart.md

909/1692

Changing the deployment template

  • Edit helmcoins/templates/deployment.yaml

  • The line with containerPort should be:

    containerPort: {{ .Values.service.port }}

k8s/helm-create-better-chart.md

910/1692

Apply changes

  • Re-run the for loop to execute helm upgrade one more time

  • Check the worker logs

  • This time, it should be working!

k8s/helm-create-better-chart.md

911/1692

Extra steps

  • We don't need to create a service for the worker

  • We can put the whole service block in a conditional

    (this will require additional changes in other files referencing the service)

  • We can set the webui to be a NodePort service

  • We can change the number of workers with replicaCount

  • And much more!

k8s/helm-create-better-chart.md

912/1692

Image separating from the next chapter

913/1692

Helm secrets

(automatically generated title slide)

914/1692

Helm secrets

  • Helm can do rollbacks:

    • to previously installed charts

    • to previous sets of values

  • How and where does it store the data needed to do that?

  • Let's investigate!

k8s/helm-secrets.md

915/1692

We need a release

  • We need to install something with Helm

  • Let's use the stable/tomcat chart as an example

  • Install a release called tomcat with the chart stable/tomcat:

    helm upgrade tomcat stable/tomcat --install
  • Let's upgrade that release, and change a value:

    helm upgrade tomcat stable/tomcat --set ingress.enabled=true

k8s/helm-secrets.md

916/1692

Release history

  • Helm stores successive revisions of each release
  • View the history for that release:
    helm history tomcat

Where does that come from?

k8s/helm-secrets.md

917/1692

Investigate

  • Possible options:

    • local filesystem (no, because history is visible from other machines)

    • persistent volumes (no, Helm works even without them)

    • ConfigMaps, Secrets?

  • Look for ConfigMaps and Secrets:
    kubectl get configmaps,secrets
918/1692

Investigate

  • Possible options:

    • local filesystem (no, because history is visible from other machines)

    • persistent volumes (no, Helm works even without them)

    • ConfigMaps, Secrets?

  • Look for ConfigMaps and Secrets:
    kubectl get configmaps,secrets

We should see a number of secrets with TYPE helm.sh/release.v1.

k8s/helm-secrets.md

919/1692

Unpacking a secret

  • Let's find out what is in these Helm secrets
  • Examine the secret corresponding to the second release of tomcat:
    kubectl describe secret sh.helm.release.v1.tomcat.v2
    (v1 is the secret format; v2 means revision 2 of the tomcat release)

There is a key named release.

k8s/helm-secrets.md

920/1692

Unpacking the release data

  • Let's see what's in this release thing!
  • Dump the secret:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release }}'

Secrets are encoded in base64. We need to decode that!

k8s/helm-secrets.md

921/1692

Decoding base64

  • We can pipe the output through base64 -d or use go-template's base64decode
  • Decode the secret:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode }}'
922/1692

Decoding base64

  • We can pipe the output through base64 -d or use go-template's base64decode
  • Decode the secret:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode }}'

... Wait, this still looks like base64. What's going on?

923/1692

Decoding base64

  • We can pipe the output through base64 -d or use go-template's base64decode
  • Decode the secret:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode }}'

... Wait, this still looks like base64. What's going on?

Let's try one more round of decoding!

k8s/helm-secrets.md

924/1692

Decoding harder

  • Just add one more base64 decode filter
  • Decode it twice:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode | base64decode }}'
925/1692

Decoding harder

  • Just add one more base64 decode filter
  • Decode it twice:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode | base64decode }}'

... OK, that was a lot of binary data. What should we do with it?

k8s/helm-secrets.md

926/1692

Guessing data type

  • We could use file to figure out the data type
  • Pipe the decoded release through file -:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode | base64decode }}' \
    | file -
927/1692

Guessing data type

  • We could use file to figure out the data type
  • Pipe the decoded release through file -:
    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode | base64decode }}' \
    | file -

Gzipped data! It can be decoded with gunzip -c.

k8s/helm-secrets.md

928/1692

Uncompressing the data

  • Let's uncompress the data and save it to a file
  • Rerun the previous command, but with | gunzip -c > release-info :

    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode | base64decode }}' \
    | gunzip -c > release-info
  • Look at release-info:

    cat release-info
929/1692

Uncompressing the data

  • Let's uncompress the data and save it to a file
  • Rerun the previous command, but with | gunzip -c > release-info :

    kubectl get secret sh.helm.release.v1.tomcat.v2 \
    -o go-template='{{ .data.release | base64decode | base64decode }}' \
    | gunzip -c > release-info
  • Look at release-info:

    cat release-info

It's a big JSON object.

k8s/helm-secrets.md

930/1692

Looking at the JSON

If we inspect that JSON (e.g. with jq keys release-info), we see:

  • chart (contains the entire chart used for that release)
  • config (contains the values that we've set)
  • info (date of deployment, status messages)
  • manifest (YAML generated from the templates)
  • name (name of the release, so tomcat)
  • namespace (namespace where we deployed the release)
  • version (revision number within that release; starts at 1)

The chart is in a structured format, but it's entirely captured in this JSON.
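
If we want to extract just one of these fields, we can pipe the decompressed data through jq (assuming jq is installed); for instance, to see the manifests generated for that revision:

    kubectl get secret sh.helm.release.v1.tomcat.v2 \
      -o go-template='{{ .data.release | base64decode | base64decode }}' \
      | gunzip -c | jq -r .manifest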

k8s/helm-secrets.md

931/1692

Conclusions

  • Helm stores each release information in a Secret in the namespace of the release

  • The secret is a JSON object (gzipped and encoded in base64)

  • It contains the manifests generated for that release

  • ... And everything needed to rebuild these manifests

    (including the full source of the chart, and the values used)

  • This allows arbitrary rollbacks, as well as tweaking values even without having access to the source of the chart (or the chart repo) used for deployment

k8s/helm-secrets.md

932/1692

Image separating from the next chapter

933/1692

Extending the Kubernetes API

(automatically generated title slide)

934/1692

Extending the Kubernetes API

There are multiple ways to extend the Kubernetes API.

We are going to cover:

  • Custom Resource Definitions (CRDs)

  • Admission Webhooks

  • The Aggregation Layer

k8s/extending-api.md

935/1692

Revisiting the API server

  • The Kubernetes API server is a central point of the control plane

    (everything connects to it: controller manager, scheduler, kubelets)

  • Almost everything in Kubernetes is materialized by a resource

  • Resources have a type (or "kind")

    (similar to strongly typed languages)

  • We can see existing types with kubectl api-resources

  • We can list resources of a given type with kubectl get <type>

k8s/extending-api.md

936/1692

Creating new types

  • We can create new types with Custom Resource Definitions (CRDs)

  • CRDs are created dynamically

    (without recompiling or restarting the API server)

  • CRDs themselves are resources:

    • we can create a new type with kubectl create and some YAML

    • we can see all our custom types with kubectl get crds

  • After we create a CRD, the new type works just like built-in types

k8s/extending-api.md

937/1692

A very simple CRD

The YAML below describes a very simple CRD representing different kinds of coffee:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: coffees.container.training
spec:
  group: container.training
  version: v1alpha1
  scope: Namespaced
  names:
    plural: coffees
    singular: coffee
    kind: Coffee
    shortNames:
      - cof

k8s/extending-api.md

938/1692

Creating a CRD

  • Let's create the Custom Resource Definition for our Coffee resource
  • Load the CRD:

    kubectl apply -f ~/container.training/k8s/coffee-1.yaml
  • Confirm that it shows up:

    kubectl get crds

k8s/extending-api.md

939/1692

Creating custom resources

The YAML below defines a resource using the CRD that we just created:

kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
  name: arabica
spec:
  taste: strong
  • Create a few types of coffee beans:
    kubectl apply -f ~/container.training/k8s/coffees.yaml

k8s/extending-api.md

940/1692

Viewing custom resources

  • By default, kubectl get only shows name and age of custom resources
  • View the coffee beans that we just created:
    kubectl get coffees
  • We can improve that, but it's outside the scope of this section!

k8s/extending-api.md

941/1692

What can we do with CRDs?

There are many possibilities!

  • Operators encapsulate complex sets of resources

    (e.g.: a PostgreSQL replicated cluster; an etcd cluster...
    see awesome operators and OperatorHub to find more)

  • Custom use-cases like gitkube

    • creates a new custom type, Remote, exposing a git+ssh server

    • deploy by pushing YAML or Helm charts to that remote

  • Replacing built-in types with CRDs

    (see this lightning talk by Tim Hockin)

k8s/extending-api.md

942/1692

Little details

  • By default, CRDs are not validated

    (we can put anything we want in the spec)

  • When creating a CRD, we can pass an OpenAPI v3 schema (BETA!)

    (which will then be used to validate resources)

  • Generally, when creating a CRD, we also want to run a controller

    (otherwise nothing will happen when we create resources of that type)

  • The controller will typically watch our custom resources

    (and take action when they are created/updated)

Examples: YAML to install the gitkube CRD, YAML to install a redis operator CRD

k8s/extending-api.md

943/1692

(Ab)using the API server

  • If we need to store something "safely" (as in: in etcd), we can use CRDs

  • This gives us primitives to read/write/list objects (and optionally validate them)

  • The Kubernetes API server can run on its own

    (without the scheduler, controller manager, and kubelets)

  • By loading CRDs, we can have it manage totally different objects

    (unrelated to containers, clusters, etc.)

k8s/extending-api.md

944/1692

Service catalog

  • Service catalog is another extension mechanism

  • It's not extending the Kubernetes API strictly speaking

    (but it still provides new features!)

  • It doesn't create new types; it uses:

    • ClusterServiceBroker
    • ClusterServiceClass
    • ClusterServicePlan
    • ServiceInstance
    • ServiceBinding
  • It uses the Open service broker API

k8s/extending-api.md

945/1692

Admission controllers

  • Admission controllers are another way to extend the Kubernetes API

  • Instead of creating new types, admission controllers can transform or vet API requests

  • The diagram on the next slide shows the path of an API request

    (courtesy of Banzai Cloud)

k8s/extending-api.md

946/1692

Types of admission controllers

  • Validating admission controllers can accept/reject the API call

  • Mutating admission controllers can modify the API request payload

  • Both types can also trigger additional actions

    (e.g. automatically create a Namespace if it doesn't exist)

  • There are a number of built-in admission controllers

    (see documentation for a list)

  • We can also dynamically define and register our own

k8s/extending-api.md

948/1692

Some built-in admission controllers

  • ServiceAccount:

    automatically adds a ServiceAccount to Pods that don't explicitly specify one

  • LimitRanger:

    applies resource constraints specified by LimitRange objects when Pods are created

  • NamespaceAutoProvision:

    automatically creates namespaces when an object is created in a non-existent namespace

Note: #1 and #2 are enabled by default; #3 is not.

k8s/extending-api.md

949/1692

Admission Webhooks

  • We can setup admission webhooks to extend the behavior of the API server

  • The API server will submit incoming API requests to these webhooks

  • These webhooks can be validating or mutating

  • Webhooks can be set up dynamically (without restarting the API server)

  • To setup a dynamic admission webhook, we create a special resource:

    a ValidatingWebhookConfiguration or a MutatingWebhookConfiguration

  • These resources are created and managed like other resources

    (i.e. kubectl create, kubectl get...)

k8s/extending-api.md

950/1692

Webhook Configuration

  • A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains:

    • the address of the webhook

    • the authentication information to use with the webhook

    • a list of rules

  • The rules indicate for which objects and actions the webhook is triggered

    (to avoid e.g. triggering webhooks when setting up webhooks)
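
As an illustration, a minimal ValidatingWebhookConfiguration could look roughly like this (a sketch, not part of the exercises; the service name, namespace, and path are hypothetical):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-validating-webhook
webhooks:
  - name: validate.webhook.example.com
    clientConfig:
      service:
        name: webhook-service      # hypothetical Service fronting our webhook code
        namespace: webhook-demo
        path: /validate
      # caBundle: <base64-encoded CA certificate used to verify the webhook's TLS cert>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail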

k8s/extending-api.md

951/1692

The aggregation layer

  • We can delegate entire parts of the Kubernetes API to external servers

  • This is done by creating APIService resources

    (check them with kubectl get apiservices!)

  • The APIService resource maps a type (kind) and version to an external service

  • All requests concerning that type are sent (proxied) to the external service

  • This gives us resources similar to CRDs, but that aren't stored in etcd

  • Example: metrics-server

    (storing live metrics in etcd would be extremely inefficient)

  • Requires significantly more work than CRDs!
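
For instance, the APIService registered by metrics-server looks roughly like this (a simplified sketch; the exact manifest depends on the metrics-server version):

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:                     # requests for metrics.k8s.io/v1beta1 are proxied to this Service
    name: metrics-server
    namespace: kube-system
  groupPriorityMinimum: 100
  versionPriority: 100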

k8s/extending-api.md

952/1692

Image separating from the next chapter

954/1692

Operators

(automatically generated title slide)

955/1692

Operators

  • Operators are one of the many ways to extend Kubernetes

  • We will define operators

  • We will see how they work

  • We will install a specific operator (for ElasticSearch)

  • We will use it to provision an ElasticSearch cluster

k8s/operators.md

956/1692

What are operators?

An operator represents human operational knowledge in software,
to reliably manage an application. — CoreOS

Examples:

  • Deploying and configuring replication with MySQL, PostgreSQL ...

  • Setting up Elasticsearch, Kafka, RabbitMQ, Zookeeper ...

  • Reacting to failures when intervention is needed

  • Scaling up and down these systems

k8s/operators.md

957/1692

What are they made from?

  • Operators combine two things:

    • Custom Resource Definitions

    • controller code watching the corresponding resources and acting upon them

  • A given operator can define one or multiple CRDs

  • The controller code (control loop) typically runs within the cluster

    (running as a Deployment with 1 replica is a common scenario)

  • But it could also run elsewhere

    (nothing mandates that the code run on the cluster, as long as it has API access)

k8s/operators.md

958/1692

Why use operators?

  • Kubernetes gives us Deployments, StatefulSets, Services ...

  • These mechanisms give us building blocks to deploy applications

  • They work great for services that are made of N identical containers

    (like stateless ones)

  • They also work great for some stateful applications like Consul, etcd ...

    (with the help of highly persistent volumes)

  • They're not enough for complex services:

    • where different containers have different roles

    • where extra steps have to be taken when scaling or replacing containers

k8s/operators.md

959/1692

Use-cases for operators

  • Systems with primary/secondary replication

    Examples: MariaDB, MySQL, PostgreSQL, Redis ...

  • Systems where different groups of nodes have different roles

    Examples: ElasticSearch, MongoDB ...

  • Systems with complex dependencies (that are themselves managed with operators)

    Examples: Flink or Kafka, which both depend on Zookeeper

k8s/operators.md

960/1692

More use-cases

  • Representing and managing external resources

    (Example: AWS Service Operator)

  • Managing complex cluster add-ons

    (Example: Istio operator)

  • Deploying and managing our applications' lifecycles

    (more on that later)

k8s/operators.md

961/1692

How operators work

  • An operator creates one or more CRDs

    (i.e., it creates new "Kinds" of resources on our cluster)

  • The operator also runs a controller that will watch its resources

  • Each time we create/update/delete a resource, the controller is notified

    (we could write our own cheap controller with kubectl get --watch)
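
As a toy example, a very naive "controller" for the Coffee resources created in the previous chapter could be sketched like this (it only prints events; a real controller would compare desired and current state and take action):

kubectl get coffees --watch -o json \
  | jq --unbuffered -r '"Coffee " + .metadata.name + " was created or updated"'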

k8s/operators.md

962/1692

One operator in action

  • We will install Elastic Cloud on Kubernetes, an ElasticSearch operator

  • This operator requires PersistentVolumes

  • We will install Rancher's local path storage provisioner to automatically create these

  • Then, we will create an ElasticSearch resource

  • The operator will detect that resource and provision the cluster

k8s/operators.md

963/1692

Installing a Persistent Volume provisioner

(This step can be skipped if you already have a dynamic volume provisioner.)

  • This provisioner creates Persistent Volumes backed by hostPath

    (local directories on our nodes)

  • It doesn't require anything special ...

  • ... But losing a node = losing the volumes on that node!

  • Install the local path storage provisioner:
    kubectl apply -f ~/container.training/k8s/local-path-storage.yaml

k8s/operators.md

964/1692

Making sure we have a default StorageClass

  • The ElasticSearch operator will create StatefulSets

  • These StatefulSets will instantiate PersistentVolumeClaims

  • These PVCs need to be explicitly associated with a StorageClass

  • Or we need to tag a StorageClass to be used as the default one

  • List StorageClasses:
    kubectl get storageclasses

We should see the local-path StorageClass.

k8s/operators.md

965/1692

Setting a default StorageClass

  • This is done by adding an annotation to the StorageClass:

    storageclass.kubernetes.io/is-default-class: true

  • Tag the StorageClass so that it's the default one:

    kubectl annotate storageclass local-path \
    storageclass.kubernetes.io/is-default-class=true
  • Check the result:

    kubectl get storageclasses

Now, the StorageClass should have (default) next to its name.

k8s/operators.md

966/1692

Install the ElasticSearch operator

  • The operator provides:

    • a few CustomResourceDefinitions
    • a Namespace for its other resources
    • a ValidatingWebhookConfiguration for type checking
    • a StatefulSet for its controller and webhook code
    • a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions
  • All these resources are grouped in a convenient YAML file

  • Install the operator:
    kubectl apply -f ~/container.training/k8s/eck-operator.yaml

k8s/operators.md

967/1692

Check our new custom resources

  • Let's see which CRDs were created
  • List all CRDs:
    kubectl get crds

This operator supports ElasticSearch, but also Kibana and APM. Cool!

k8s/operators.md

968/1692

Create the eck-demo namespace

  • For clarity, we will create everything in a new namespace, eck-demo

  • This namespace is hard-coded in the YAML files that we are going to use

  • We need to create that namespace

  • Create the eck-demo namespace:

    kubectl create namespace eck-demo
  • Switch to that namespace:

    kns eck-demo

k8s/operators.md

969/1692

Can we use a different namespace?

Yes, but then we need to update all the YAML manifests that we are going to apply in the next slides.

The eck-demo namespace is hard-coded in these YAML manifests.

Why?

Because when defining a ClusterRoleBinding that references a ServiceAccount, we have to indicate in which namespace the ServiceAccount is located.

k8s/operators.md

970/1692

Create an ElasticSearch resource

  • We can now create a resource with kind: ElasticSearch

  • The YAML for that resource will specify all the desired parameters:

    • how many nodes we want
    • image to use
    • add-ons (kibana, cerebro, ...)
    • whether to use TLS or not
    • etc.
  • Create our ElasticSearch cluster:
    kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml
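
For reference, the core of such a manifest looks roughly like this (a sketch; the exact API version, Elasticsearch version, and fields depend on the manifest shipped in container.training and on the ECK release used):

    apiVersion: elasticsearch.k8s.io/v1
    kind: Elasticsearch
    metadata:
      name: demo
    spec:
      version: 7.5.0              # illustrative Elasticsearch version
      nodeSets:
        - name: default
          count: 1                # this is the field we will bump later to scale up
          config:
            node.store.allow_mmap: false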

k8s/operators.md

971/1692

Operator in action

  • Over the next minutes, the operator will create our ES cluster

  • It will report our cluster status through the CRD

  • Check the logs of the operator:
    stern --namespace=elastic-system operator
  • Watch the status of the cluster through the CRD:
    kubectl get es -w

k8s/operators.md

972/1692

Connecting to our cluster

  • It's not easy to use the ElasticSearch API from the shell

  • But let's check at least if ElasticSearch is up!

  • Get the ClusterIP of our ES instance:

    kubectl get services
  • Issue a request with curl:

    curl http://CLUSTERIP:9200

We get an authentication error. Our cluster is protected!

k8s/operators.md

973/1692

Obtaining the credentials

  • The operator creates a user named elastic

  • It generates a random password and stores it in a Secret

  • Extract the password:

    kubectl get secret demo-es-elastic-user \
    -o go-template="{{ .data.elastic | base64decode }} "
  • Use it to connect to the API:

    curl -u elastic:PASSWORD http://CLUSTERIP:9200

We should see a JSON payload with the "You Know, for Search" tagline.

k8s/operators.md

974/1692

Sending data to the cluster

  • Let's send some data to our brand new ElasticSearch cluster!

  • We'll deploy a filebeat DaemonSet to collect node logs

  • Deploy filebeat:

    kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml
  • Wait until some pods are up:

    watch kubectl get pods -l k8s-app=filebeat
  • Check that a filebeat index was created:
    curl -u elastic:PASSWORD http://CLUSTERIP:9200/_cat/indices

k8s/operators.md

975/1692

Deploying an instance of Kibana

  • Kibana can visualize the logs injected by filebeat

  • The ECK operator can also manage Kibana

  • Let's give it a try!

  • Deploy a Kibana instance:

    kubectl apply -f ~/container.training/k8s/eck-kibana.yaml
  • Wait for it to be ready:

    kubectl get kibana -w

k8s/operators.md

976/1692

Connecting to Kibana

  • Kibana is automatically set up to connect to ElasticSearch

    (this is arranged by the YAML that we're using)

  • However, it will ask for authentication

  • It's using the same user/password as ElasticSearch

  • Get the NodePort allocated to Kibana:

    kubectl get services
  • Connect to it with a web browser

  • Use the same user/password as before

k8s/operators.md

977/1692

Setting up Kibana

After the Kibana UI loads, we need to click around a bit

  • Pick "explore on my own"

  • Click on "Use Elasticsearch data / Connect to your Elasticsearch index"

  • Enter filebeat-* for the index pattern and click "Next step"

  • Select @timestamp as time filter field name

  • Click on "discover" (the small icon looking like a compass on the left bar)

  • Play around!

k8s/operators.md

978/1692

Scaling up the cluster

  • At this point, we have only one node

  • We are going to scale up

  • But first, we'll deploy Cerebro, a UI for ElasticSearch

  • This will let us see the state of the cluster, how indexes are sharded, etc.

k8s/operators.md

979/1692

Deploying Cerebro

  • Cerebro is stateless, so it's fairly easy to deploy

    (one Deployment + one Service)

  • However, it needs the address and credentials for ElasticSearch

  • We prepared yet another manifest for that!

  • Deploy Cerebro:

    kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml
  • Lookup the NodePort number and connect to it:

    kubectl get services

k8s/operators.md

980/1692

Scaling up the cluster

  • We can see on Cerebro that the cluster is "yellow"

    (because our index is not replicated)

  • Let's change that!

  • Edit the ElasticSearch cluster manifest:

    kubectl edit es demo
  • Find the field count: 1 and change it to 3

  • Save and quit

k8s/operators.md

981/1692

Deploying our apps with operators

  • It is very simple to deploy with kubectl run / kubectl expose

  • We can unlock more features by writing YAML and using kubectl apply

  • Kustomize or Helm let us deploy in multiple environments

    (and adjust/tweak parameters in each environment)

  • We can also use an operator to deploy our application

k8s/operators.md

982/1692

Pros and cons of deploying with operators

  • The app definition and configuration is persisted in the Kubernetes API

  • Multiple instances of the app can be manipulated with kubectl get

  • We can add labels, annotations to the app instances

  • Our controller can execute custom code for any lifecycle event

  • However, we need to write this controller

  • We need to be careful about changes

    (what happens when the resource spec is updated?)

k8s/operators.md

983/1692

Operators are not magic

  • Look at the ElasticSearch resource definition

    (~/container.training/k8s/eck-elasticsearch.yaml)

  • What should happen if we flip the TLS flag? Twice?

  • What should happen if we add another group of nodes?

  • What if we want different images or parameters for the different nodes?

Operators can be very powerful.
But we need to know exactly the scenarios that they can handle.

k8s/operators.md

984/1692

What does it take to write an operator?

  • Writing a quick-and-dirty operator, or a POC/MVP, is easy

  • Writing a robust operator is hard

  • We will describe the general idea

  • We will identify some of the associated challenges

  • We will list a few tools that can help us

k8s/operators-design.md

985/1692

Top-down vs. bottom-up

  • Both approaches are possible

  • Let's see what they entail, and their respective pros and cons

k8s/operators-design.md

986/1692

Top-down approach

  • Start with high-level design (see next slide)

  • Pros:

    • can yield cleaner design that will be more robust
  • Cons:

    • must be able to anticipate all the events that might happen

    • design will be better only to the extent of what we anticipated

    • hard to anticipate if we don't have production experience

k8s/operators-design.md

987/1692

High-level design

  • What are we solving?

    (e.g.: geographic databases backed by PostGIS with Redis caches)

  • What are our use-cases, stories?

    (e.g.: adding/resizing caches and read replicas; load balancing queries)

  • What kind of outage do we want to address?

    (e.g.: loss of individual node, pod, volume)

  • What are our non-features, the things we don't want to address?

    (e.g.: loss of datacenter/zone; differentiating between read and write queries;
    cache invalidation; upgrading to newer major versions of Redis, PostGIS, PostgreSQL)

k8s/operators-design.md

988/1692

Low-level design

  • What Custom Resource Definitions do we need?

    (one, many?)

  • How will we store configuration information?

    (part of the CRD spec fields, annotations, other?)

  • Do we need to store state? If so, where?

    • state that is small and doesn't change much can be stored via the Kubernetes API
      (e.g.: leader information, configuration, credentials)

    • things that are big and/or change a lot should go elsewhere
      (e.g.: metrics, bigger configuration file like GeoIP)

k8s/operators-design.md

989/1692

What can we store via the Kubernetes API?

  • The API server stores most Kubernetes resources in etcd

  • Etcd is designed for reliability, not for performance

  • If our storage needs exceed what etcd can offer, we need to use something else:

    • either directly

    • or by extending the API server
      (for instance by using the aggregation layer, like metrics server does)

k8s/operators-design.md

990/1692

Bottom-up approach

  • Start with existing Kubernetes resources (Deployment, Stateful Set...)

  • Run the system in production

  • Add scripts, automation, to facilitate day-to-day operations

  • Turn the scripts into an operator

  • Pros: simpler to get started; reflects actual use-cases

  • Cons: can result in convoluted designs requiring extensive refactor

k8s/operators-design.md

991/1692

General idea

  • Our operator will watch its CRDs and associated resources

  • Drawing state diagrams and finite state automata helps a lot

  • It's OK if some transitions lead to a big catch-all "human intervention"

  • Over time, we will learn about new failure modes and add to these diagrams

  • It's OK to start with CRD creation / deletion and prevent any modification

    (that's the easy POC/MVP we were talking about)

  • Presentation and validation will help our users

    (more on that later)

k8s/operators-design.md

992/1692

Challenges

  • Reacting to infrastructure disruption can seem hard at first

  • Kubernetes gives us a lot of primitives to help:

    • Pods and Persistent Volumes will eventually recover

    • Stateful Sets give us easy ways to "add N copies" of a thing

  • The real challenges come with configuration changes

    (i.e., what to do when our users update our CRDs)

  • Keep in mind that some of the largest cloud outages haven't been caused by natural catastrophes, or even code bugs, but by configuration changes

k8s/operators-design.md

993/1692

Configuration changes

  • It is helpful to analyze and understand how Kubernetes controllers work:

    • watch resource for modifications

    • compare desired state (CRD) and current state

    • issue actions to converge state

  • Configuration changes will probably require another state diagram or FSA

  • Again, it's OK to have transitions labeled as "unsupported"

    (i.e. reject some modifications because we can't execute them)

k8s/operators-design.md

994/1692

Tools

k8s/operators-design.md

995/1692

Validation

  • By default, a CRD is "free form"

    (we can put pretty much anything we want in it)

  • When creating a CRD, we can provide an OpenAPI v3 schema (Example)

  • The API server will then validate resources created/edited with this schema

  • If we need stronger validation, we can use a Validating Admission Webhook
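
As an illustration, a schema for the Coffee CRD from the earlier chapter could look roughly like this (v1beta1 CRD format; a sketch, and the required field is a hypothetical constraint):

spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required: ["taste"]
          properties:
            taste:
              type: string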

k8s/operators-design.md

996/1692

Presentation

  • By default, kubectl get mycustomresource won't display much information

    (just the name and age of each resource)

  • When creating a CRD, we can specify additional columns to print (Example, Docs)

  • By default, kubectl describe mycustomresource will also be generic

  • kubectl describe can show events related to our custom resources

    (for that, we need to create Event resources, and fill the involvedObject field)

  • For scalable resources, we can define a scale sub-resource

  • This will enable the use of kubectl scale and other scaling-related operations
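
For instance, with the v1beta1 CRD format, extra columns can be declared roughly like this (a sketch; in the apiextensions.k8s.io/v1 format, the field moves under each entry of versions and is spelled jsonPath):

spec:
  additionalPrinterColumns:
    - name: Taste
      type: string
      JSONPath: .spec.taste
    - name: Age
      type: date
      JSONPath: .metadata.creationTimestamp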

k8s/operators-design.md

997/1692

About scaling

  • It is possible to use the HPA (Horizontal Pod Autoscaler) with CRDs

  • But it is not always desirable

  • The HPA works very well for homogeneous, stateless workloads

  • For other workloads, your mileage may vary

  • Some systems can scale across multiple dimensions

    (for instance: increase number of replicas, or number of shards?)

  • If autoscaling is desired, the operator will have to take complex decisions

    (example: Zalando's Elasticsearch Operator (Video))

k8s/operators-design.md

998/1692

Versioning

  • As our operator evolves over time, we may have to change the CRD

    (add, remove, change fields)

  • Like every other resource in Kubernetes, custom resources are versioned

  • When creating a CRD, we need to specify a list of versions

  • Versions can be marked as stored and/or served
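
In the CRD manifest, this looks roughly like the following (a sketch; in the apiextensions.k8s.io/v1 format, each version also carries its own schema):

spec:
  versions:
    - name: v1alpha1
      served: true
      storage: false
    - name: v1beta1
      served: true
      storage: true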

k8s/operators-design.md

999/1692

Stored version

  • Exactly one version has to be marked as the stored version

  • As the name implies, it is the one that will be stored in etcd

  • Resources in storage are never converted automatically

    (we need to read and re-write them ourselves)

  • Yes, this means that we can have different versions in etcd at any time

  • Our code needs to handle all the versions that still exist in storage

k8s/operators-design.md

1000/1692

Served versions

  • By default, the Kubernetes API will serve resources "as-is"

    (using their stored version)

  • It will assume that all versions are compatible storage-wise

    (i.e. that the spec and fields are compatible between versions)

  • We can provide conversion webhooks to "translate" requests

    (the alternative is to upgrade all stored resources and stop serving old versions)

k8s/operators-design.md

1001/1692

Operator reliability

  • Remember that the operator itself must be resilient

    (e.g.: the node running it can fail)

  • Our operator must be able to restart and recover gracefully

  • Do not store state locally

    (unless we can reconstruct that state when we restart)

  • As indicated earlier, we can use the Kubernetes API to store data:

    • in the custom resources themselves

    • in other resources' annotations

k8s/operators-design.md

1002/1692

Beyond CRDs

  • CRDs cannot use custom storage (e.g. for time series data)

  • CRDs cannot support arbitrary subresources (like logs or exec for Pods)

  • CRDs cannot support protobuf (for faster, more efficient communication)

  • If we need these things, we can use the aggregation layer instead

  • The aggregation layer proxies all requests below a specific path to another server

    (this is used e.g. by the metrics server)

  • This documentation page compares the features of CRDs and API aggregation

k8s/operators-design.md

1003/1692

Image separating from the next chapter

1004/1692

Owners and dependents

(automatically generated title slide)

1005/1692


Owners and dependents

  • Some objects are created by other objects

    (example: pods created by replica sets, themselves created by deployments)

  • When an owner object is deleted, its dependents are deleted

    (this is the default behavior; it can be changed)

  • We can delete a dependent directly if we want

    (but generally, the owner will recreate another right away)

  • An object can have multiple owners

k8s/owners-and-dependents.md

1008/1692

Finding out the owners of an object

  • The owners are recorded in the field ownerReferences in the metadata block
  • Let's create a deployment running nginx:

    kubectl create deployment yanginx --image=nginx
  • Scale it to a few replicas:

    kubectl scale deployment yanginx --replicas=3
  • Once it's up, check the corresponding pods:

    kubectl get pods -l app=yanginx -o yaml | head -n 25

These pods are owned by a ReplicaSet named yanginx-xxxxxxxxxx.
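
Here is roughly what the ownerReferences block looks like in those pods (a sketch; the ReplicaSet hash and UID will obviously differ):

metadata:
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: yanginx-xxxxxxxxxx
      uid: ...                     # UID of the ReplicaSet (elided)
      controller: true
      blockOwnerDeletion: true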

k8s/owners-and-dependents.md

1009/1692

Listing objects with their owners

  • This is a good opportunity to try the custom-columns output!
  • Show all pods with their owners:
    kubectl get pod -o custom-columns=\
    NAME:.metadata.name,\
    OWNER-KIND:.metadata.ownerReferences[0].kind,\
    OWNER-NAME:.metadata.ownerReferences[0].name

Note: the custom-columns option should be one long option (without spaces), so the lines should not be indented (otherwise the indentation will insert spaces).

k8s/owners-and-dependents.md

1010/1692

Deletion policy

  • When deleting an object through the API, three policies are available:

    • foreground (API call returns after all dependents are deleted)

    • background (API call returns immediately; dependents are scheduled for deletion)

    • orphan (the dependents are not deleted)

  • When deleting an object with kubectl, this is selected with --cascade:

    • --cascade=true deletes all dependent objects (default)

    • --cascade=false orphans dependent objects

k8s/owners-and-dependents.md

1011/1692

What happens when an object is deleted

  • It is removed from the list of owners of its dependents

  • If, for one of these dependents, the list of owners becomes empty ...

    • if the policy is "orphan", the object stays

    • otherwise, the object is deleted

k8s/owners-and-dependents.md

1012/1692

Orphaning pods

  • We are going to delete the Deployment and Replica Set that we created

  • ... without deleting the corresponding pods!

  • Delete the Deployment:

    kubectl delete deployment -l app=yanginx --cascade=false
  • Delete the Replica Set:

    kubectl delete replicaset -l app=yanginx --cascade=false
  • Check that the pods are still here:

    kubectl get pods

k8s/owners-and-dependents.md

1013/1692

When and why would we have orphans?

  • If we remove an owner and explicitly instruct the API to orphan dependents

    (like on the previous slide)

  • If we change the labels on a dependent, so that it's not selected anymore

    (e.g. change the app: yanginx in the pods of the previous example)

  • If a deployment tool that we're using does these things for us

  • If there is a serious problem within API machinery or other components

    (i.e. "this should not happen")

k8s/owners-and-dependents.md

1014/1692

Finding orphan objects

  • We're going to output all pods in JSON format

  • Then we will use jq to keep only the ones without an owner

  • And we will display their name

  • List all pods that do not have an owner:
    kubectl get pod -o json | jq -r "
    .items[]
    | select(.metadata.ownerReferences|not)
    | .metadata.name"

k8s/owners-and-dependents.md

1015/1692

Deleting orphan pods

  • Now that we can list orphan pods, deleting them is easy
  • Add | xargs kubectl delete pod to the previous command:
    kubectl get pod -o json | jq -r "
    .items[]
    | select(.metadata.ownerReferences|not)
    | .metadata.name" | xargs kubectl delete pod

As always, the documentation has useful extra information and pointers.

k8s/owners-and-dependents.md

1016/1692

Image separating from the next chapter

1017/1692

Centralized logging

(automatically generated title slide)

1018/1692

Centralized logging

  • Using kubectl or stern is simple; but it has drawbacks:

    • when a node goes down, its logs are not available anymore

    • we can only dump or stream logs; we want to search/index/count...

  • We want to send all our logs to a single place

  • We want to parse them (e.g. for HTTP logs) and index them

  • We want a nice web dashboard

1019/1692

Centralized logging

  • Using kubectl or stern is simple; but it has drawbacks:

    • when a node goes down, its logs are not available anymore

    • we can only dump or stream logs; we want to search/index/count...

  • We want to send all our logs to a single place

  • We want to parse them (e.g. for HTTP logs) and index them

  • We want a nice web dashboard

  • We are going to deploy an EFK stack

k8s/logs-centralized.md

1020/1692

What is EFK?

  • EFK is three components:

    • ElasticSearch (to store and index log entries)

    • Fluentd (to get container logs, process them, and put them in ElasticSearch)

    • Kibana (to view/search log entries with a nice UI)

  • The only component that we need to access from outside the cluster will be Kibana

k8s/logs-centralized.md

1021/1692

Deploying EFK on our cluster

  • We are going to use a YAML file describing all the required resources
  • Load the YAML file into our cluster:
    kubectl apply -f ~/container.training/k8s/efk.yaml

If we look at the YAML file, we see that it creates a daemon set, two deployments, two services, and a few roles and role bindings (to give fluentd the required permissions).

k8s/logs-centralized.md

1022/1692

The itinerary of a log line (before Fluentd)

  • A container writes a line on stdout or stderr

  • Both are typically piped to the container engine (Docker or otherwise)

  • The container engine reads the line, and sends it to a logging driver

  • The timestamp and stream (stdout or stderr) are added to the log line

  • With the default configuration for Kubernetes, the line is written to a JSON file

    (/var/log/containers/pod-name_namespace_container-id.log)

  • That file is read when we invoke kubectl logs; we can access it directly too
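
When the container engine is Docker with its default json-file logging driver, each line in that file looks roughly like this (a sketch; the actual log content and timestamps will differ):

{"log":"GET /index.html HTTP/1.1\n","stream":"stdout","time":"2020-02-18T12:34:56.789012345Z"}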

k8s/logs-centralized.md

1023/1692

The itinerary of a log line (with Fluentd)

  • Fluentd runs on each node (thanks to a daemon set)

  • It bind-mounts /var/log/containers from the host (to access these files)

  • It continuously scans this directory for new files; reads them; parses them

  • Each log line becomes a JSON object, fully annotated with extra information:
    container id, pod name, Kubernetes labels...

  • These JSON objects are stored in ElasticSearch

  • ElasticSearch indexes the JSON objects

  • We can access the logs through Kibana (and perform searches, counts, etc.)

k8s/logs-centralized.md

1024/1692

Accessing Kibana

  • Kibana offers a web interface that is relatively straightforward

  • Let's check it out!

  • Check which NodePort was allocated to Kibana:

    kubectl get svc kibana
  • With our web browser, connect to Kibana

k8s/logs-centralized.md

1025/1692

Using Kibana

Note: this is not a Kibana workshop! So this section is deliberately very terse.

  • The first time you connect to Kibana, you must "configure an index pattern"

  • Just use the one that is suggested, @timestamp*

  • Then click "Discover" (in the top-left corner)

  • You should see container logs

  • Advice: in the left column, select a few fields to display, e.g.:

    kubernetes.host, kubernetes.pod_name, stream, log

*If you don't see @timestamp, it's probably because no logs exist yet.
Wait a bit, and double-check the logging pipeline!

k8s/logs-centralized.md

1026/1692

Caveat emptor

We are using EFK because it is relatively straightforward to deploy on Kubernetes, without having to redeploy or reconfigure our cluster. But it doesn't mean that it will always be the best option for your use-case. If you are running Kubernetes in the cloud, you might consider using the cloud provider's logging infrastructure (if it can be integrated with Kubernetes).

The deployment method that we will use here has been simplified: there is only one ElasticSearch node. In a real deployment, you might use a cluster, both for performance and reliability reasons. But this is outside of the scope of this chapter.

The YAML file that we used creates all the resources in the default namespace, for simplicity. In a real scenario, you will create the resources in the kube-system namespace or in a dedicated namespace.

k8s/logs-centralized.md

1027/1692

Image separating from the next chapter

1028/1692

Collecting metrics with Prometheus

(automatically generated title slide)

1029/1692

Collecting metrics with Prometheus

  • Prometheus is an open-source monitoring system including:

    • multiple service discovery backends to figure out which metrics to collect

    • a scraper to collect these metrics

    • an efficient time series database to store these metrics

    • a specific query language (PromQL) to query these time series

    • an alert manager to notify us according to metrics values or trends

  • We are going to use it to collect and query some metrics on our Kubernetes cluster

k8s/prometheus.md

1030/1692

Why Prometheus?

  • We don't endorse Prometheus more or less than any other system

  • It's relatively well integrated within the cloud-native ecosystem

  • It can be self-hosted (this is useful for tutorials like this)

  • It can be used for deployments of varying complexity:

    • one binary and 10 lines of configuration to get started

    • all the way to thousands of nodes and millions of metrics

k8s/prometheus.md

1031/1692

Exposing metrics to Prometheus

  • Prometheus obtains metrics and their values by querying exporters

  • An exporter serves metrics over HTTP, in plain text

  • This is what the node exporter looks like:

    http://demo.robustperception.io:9100/metrics

  • Prometheus itself exposes its own internal metrics, too:

    http://demo.robustperception.io:9090/metrics

  • If you want to expose custom metrics to Prometheus:

    • serve a text page like these, and you're good to go

    • libraries are available in various languages to help with quantiles etc.
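
For reference, here is roughly what a few lines of that plain text format look like (sample values; the HELP and TYPE lines are optional metadata):

# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="post",code="400"} 3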

k8s/prometheus.md

1032/1692

How Prometheus gets these metrics

  • The Prometheus server will scrape URLs like these at regular intervals

    (by default: every minute; can be more/less frequent)

  • The list of URLs to scrape (the scrape targets) is defined in configuration

Worried about the overhead of parsing a text format?
Check this comparison of the text format with the (now deprecated) protobuf format!

k8s/prometheus.md

1033/1692

Defining scrape targets

This is maybe the simplest configuration file for Prometheus:

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  • In this configuration, Prometheus collects its own internal metrics

  • A typical configuration file will have multiple scrape_configs

  • In this configuration, the list of targets is fixed

  • A typical configuration file will use dynamic service discovery

k8s/prometheus.md

1034/1692

Service discovery

This configuration file will leverage existing DNS A records:

scrape_configs:
  - ...
  - job_name: 'node'
    dns_sd_configs:
      - names: ['api-backends.dc-paris-2.enix.io']
        type: 'A'
        port: 9100
  • In this configuration, Prometheus resolves the provided name(s)

    (here, api-backends.dc-paris-2.enix.io)

  • Each resulting IP address is added as a target on port 9100

k8s/prometheus.md

1035/1692

Dynamic service discovery

  • In the DNS example, the names are re-resolved at regular intervals

  • As DNS records are created/updated/removed, scrape targets change as well

  • Existing data (previously collected metrics) is not deleted

  • Other service discovery backends work in a similar fashion

k8s/prometheus.md

1036/1692

Other service discovery mechanisms

  • Prometheus can connect to e.g. a cloud API to list instances

  • Or to the Kubernetes API to list nodes, pods, services ...

  • Or a service like Consul, Zookeeper, etcd, to list applications

  • The resulting configurations files are way more complex

    (but don't worry, we won't need to write them ourselves)
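
To give an idea, a hedged sketch of a Kubernetes-based scrape configuration could start like this (in practice it also needs relabel_configs, credentials, etc., which is what makes real configurations much longer):

scrape_configs:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node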

k8s/prometheus.md

1037/1692

Time series database

  • We could wonder, "why do we need a specialized database?"

  • One metrics data point = metrics ID + timestamp + value

  • With a classic SQL or noSQL data store, that's at least 160 bits of data + indexes

  • Prometheus is way more efficient, without sacrificing performance

    (it will even be gentler on the I/O subsystem since it needs to write less)

  • Would you like to know more? Check this video:

    Storage in Prometheus 2.0 by Goutham V at DC17EU

k8s/prometheus.md

1038/1692

Checking if Prometheus is installed

  • Before trying to install Prometheus, let's check if it's already there
  • Look for services with a label app=prometheus across all namespaces:
    kubectl get services --selector=app=prometheus --all-namespaces

If we see a NodePort service called prometheus-server, we're good!

(We can then skip to "Connecting to the Prometheus web UI".)

k8s/prometheus.md

1039/1692

Running Prometheus on our cluster

We need to:

  • Run the Prometheus server in a pod

    (using e.g. a Deployment to ensure that it keeps running)

  • Expose the Prometheus server web UI (e.g. with a NodePort)

  • Run the node exporter on each node (with a Daemon Set)

  • Set up a Service Account so that Prometheus can query the Kubernetes API

  • Configure the Prometheus server

    (storing the configuration in a Config Map for easy updates)

k8s/prometheus.md

1040/1692

Helm charts to the rescue

  • To make our lives easier, we are going to use a Helm chart

  • The Helm chart will take care of all the steps explained above

    (including some extra features that we don't need, but won't hurt)

k8s/prometheus.md

1041/1692

Step 1: install Helm

  • If we already installed Helm earlier, this command won't break anything
  • Install the Helm CLI:
    curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \
    | bash

k8s/prometheus.md

1042/1692

Step 2: add the stable repo

  • This will add the repository containing the chart for Prometheus

  • This command is idempotent

    (it won't break anything if the repository was already added)

  • Add the repository:
    helm repo add stable https://kubernetes-charts.storage.googleapis.com/

k8s/prometheus.md

1043/1692

Step 3: install Prometheus

  • The following command, just like the previous ones, is idempotent

    (it won't error out if Prometheus is already installed)

  • Install Prometheus on our cluster:
    helm upgrade prometheus stable/prometheus \
    --install \
    --namespace kube-system \
    --set server.service.type=NodePort \
    --set server.service.nodePort=30090 \
    --set server.persistentVolume.enabled=false \
    --set alertmanager.enabled=false

Curious about all these flags? They're explained in the next slide.

k8s/prometheus.md

1044/1692

Explaining all the Helm flags

  • helm upgrade prometheus → upgrade release "prometheus" to the latest version...

    (a "release" is a unique name given to an app deployed with Helm)

  • stable/prometheus → ... of the chart prometheus in repo stable

  • --install → if the app doesn't exist, create it

  • --namespace kube-system → put it in that specific namespace

  • And set the following values when rendering the chart's templates:

    • server.service.type=NodePort → expose the Prometheus server with a NodePort
    • server.service.nodePort=30090 → set the specific NodePort number to use
    • server.persistentVolume.enabled=false → do not use a PersistentVolumeClaim
    • alertmanager.enabled=false → disable the alert manager entirely

k8s/prometheus.md

1045/1692

Connecting to the Prometheus web UI

  • Let's connect to the web UI and see what we can do
  • Figure out the NodePort that was allocated to the Prometheus server:

    kubectl get svc --all-namespaces | grep prometheus-server
  • With your browser, connect to that port

k8s/prometheus.md

1046/1692

Querying some metrics

  • This is easy... if you are familiar with PromQL
  • Click on "Graph", and in "expression", paste the following:
    sum by (instance) (
      irate(
        container_cpu_usage_seconds_total{
          pod_name=~"worker.*"
        }[5m]
      )
    )
  • Click on the blue "Execute" button and on the "Graph" tab just below

  • We see the cumulated CPU usage of worker pods for each node
    (if we just deployed Prometheus, there won't be much data to see, though)

k8s/prometheus.md

1047/1692

Getting started with PromQL

  • We can't learn PromQL in just 5 minutes

  • But we can cover the basics to get an idea of what is possible

    (and have some keywords and pointers)

  • We are going to break down the query above

    (building it one step at a time)

k8s/prometheus.md

1048/1692

Graphing one metric across all tags

This query will show us CPU usage across all containers:

container_cpu_usage_seconds_total
  • The suffix of the metrics name tells us:

    • the unit (seconds of CPU)

    • that it's the total used since the container creation

  • Since it's a "total," it is an increasing quantity

    (we need to compute the derivative if we want e.g. CPU % over time)

  • We see that the metrics retrieved have tags attached to them

k8s/prometheus.md

1049/1692

Selecting metrics with tags

This query will show us only metrics for worker containers:

container_cpu_usage_seconds_total{pod_name=~"worker.*"}
  • The =~ operator allows regex matching

  • We select all the pods with a name starting with worker

    (it would be better to use labels to select pods; more on that later)

  • The result is a smaller set of containers

k8s/prometheus.md

1050/1692

Transforming counters into rates

This query will show us CPU usage % instead of total seconds used:

100*irate(container_cpu_usage_seconds_total{pod_name=~"worker.*"}[5m])
  • The irate operator computes the "per-second instant rate of increase"

    • rate is similar but allows decreasing counters and negative values

    • with irate, if a counter goes back to zero, we don't get a negative spike

  • The [5m] tells how far to look back if there is a gap in the data

  • And we multiply with 100* to get CPU % usage

k8s/prometheus.md

1051/1692

Aggregation operators

This query sums the CPU usage per node:

sum by (instance) (
  irate(container_cpu_usage_seconds_total{pod_name=~"worker.*"}[5m])
)
  • instance corresponds to the node on which the container is running

  • sum by (instance) (...) computes the sum for each instance

  • Note: all the other tags are collapsed

    (in other words, the resulting graph only shows the instance tag)

  • PromQL supports many more aggregation operators

k8s/prometheus.md

1052/1692

What kind of metrics can we collect?

  • Node metrics (related to physical or virtual machines)

  • Container metrics (resource usage per container)

  • Databases, message queues, load balancers, ...

    (check out this list of exporters!)

  • Instrumentation (=deluxe printf for our code)

  • Business metrics (customers served, revenue, ...)

k8s/prometheus.md

1053/1692

Node metrics

  • CPU, RAM, disk usage on the whole node

  • Total number of processes running, and their states

  • Number of open files, sockets, and their states

  • I/O activity (disk, network), per operation or volume

  • Physical/hardware (when applicable): temperature, fan speed...

  • ...and much more!

k8s/prometheus.md

1054/1692

Container metrics

  • Similar to node metrics, but not totally identical

  • RAM breakdown will be different

    • active vs inactive memory
    • some memory is shared between containers, and specially accounted for
  • I/O activity is also harder to track

    • async writes can cause deferred "charges"
    • some page-ins are also shared between containers

For details about container metrics, see:
http://jpetazzo.github.io/2013/10/08/docker-containers-metrics/

k8s/prometheus.md

1055/1692

Application metrics

  • Arbitrary metrics related to your application and business

  • System performance: request latency, error rate...

  • Volume information: number of rows in database, message queue size...

  • Business data: inventory, items sold, revenue...

k8s/prometheus.md

1056/1692

Detecting scrape targets

  • Prometheus can leverage Kubernetes service discovery

    (with proper configuration)

  • Services or pods can be annotated with:

    • prometheus.io/scrape: true to enable scraping
    • prometheus.io/port: 9090 to indicate the port number
    • prometheus.io/path: /metrics to indicate the URI (/metrics by default)
  • Prometheus will detect and scrape these (without needing a restart or reload)
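
For example, here is a hedged sketch of a Service carrying these annotations (the name, selector, and port are hypothetical; note that annotation values must be strings, hence the quotes):

apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  selector:
    app: myapp
  ports:
  - port: 8080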

k8s/prometheus.md

1057/1692

Querying labels

  • What if we want to get metrics for containers belonging to a pod tagged worker?

  • The cAdvisor exporter does not give us Kubernetes labels

  • Kubernetes labels are exposed through another exporter

  • We can see Kubernetes labels through metrics kube_pod_labels

    (each pod appears as a time series with a constant value of 1)

  • Prometheus kind of supports "joins" between time series

  • But only if the names of the tags match exactly

k8s/prometheus.md

1058/1692

Unfortunately ...

  • The cAdvisor exporter uses tag pod_name for the name of a pod

  • The Kubernetes service endpoints exporter uses tag pod instead

  • See this blog post or this other one to see how to perform "joins"

  • Alas, Prometheus cannot "join" time series with different labels

    (see Prometheus issue #2204 for the rationale)

  • There is a workaround involving relabeling, but it's "not cheap"
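
To give an idea, one possible form of that workaround uses the PromQL label_replace function to copy the pod tag into a pod_name tag before joining (a sketch only, assuming pods carry an app label exposed as label_app; beware of pods with the same name in different namespaces):

rate(container_cpu_usage_seconds_total{pod_name=~".+"}[5m])
  * on (pod_name) group_left(label_app)
  label_replace(kube_pod_labels{label_app="worker"}, "pod_name", "$1", "pod", "(.+)")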

k8s/prometheus.md

1059/1692

In practice

  • Grafana is a beautiful (and useful) frontend to display all kinds of graphs

  • Not everyone needs to know Prometheus, PromQL, Grafana, etc.

  • But in a team, it is valuable to have at least one person who knows them

  • That person can set up queries and dashboards for the rest of the team

  • It's a little bit like knowing how to optimize SQL queries, Dockerfiles...

    Don't panic if you don't know these tools!

    ...But make sure at least one person in your team is on it 💯

k8s/prometheus.md

1060/1692

Image separating from the next chapter

1061/1692

Resource Limits

(automatically generated title slide)

1062/1692

Resource Limits

  • We can attach resource indications to our pods

    (or rather: to the containers in our pods)

  • We can specify limits and/or requests

  • We can specify quantities of CPU and/or memory

k8s/resource-limits.md

1063/1692

CPU vs memory

  • CPU is a compressible resource

    (it can be preempted immediately without adverse effect)

  • Memory is an incompressible resource

    (it needs to be swapped out to be reclaimed; and this is costly)

  • As a result, exceeding limits will have different consequences for CPU and memory

k8s/resource-limits.md

1064/1692

Exceeding CPU limits

  • CPU can be reclaimed instantaneously

    (in fact, it is preempted hundreds of times per second, at each context switch)

  • If a container uses too much CPU, it can be throttled

    (it will be scheduled less often)

  • The processes in that container will run slower

    (or rather: they will not run faster)

k8s/resource-limits.md

1065/1692

Exceeding memory limits

  • Memory needs to be swapped out before being reclaimed

  • "Swapping" means writing memory pages to disk, which is very slow

  • On a classic system, a process that swaps can get 1000x slower

    (because disk I/O is 1000x slower than memory I/O)

  • Exceeding the memory limit (even by a small amount) can reduce performance a lot

  • Kubernetes does not support swap (more on that later!)

  • Exceeding the memory limit will cause the container to be killed

k8s/resource-limits.md

1066/1692

Limits vs requests

  • Limits are "hard limits" (they can't be exceeded)

    • a container exceeding its memory limit is killed

    • a container exceeding its CPU limit is throttled

  • Requests are used for scheduling purposes

    • a container using less than what it requested will never be killed or throttled

    • the scheduler uses the requested sizes to determine placement

    • the resources requested by all pods on a node will never exceed the node size

k8s/resource-limits.md

1067/1692

Pod quality of service

Each pod is assigned a QoS class (visible in status.qosClass).

  • If limits = requests:

    • as long as the container uses less than the limit, it won't be affected

    • if all containers in a pod have (limits=requests), QoS is considered "Guaranteed"

  • If requests < limits:

    • as long as the container uses less than the request, it won't be affected

    • otherwise, it might be killed/evicted if the node gets overloaded

    • if at least one container has (requests<limits), QoS is considered "Burstable"

  • If a pod doesn't have any request nor limit, QoS is considered "BestEffort"
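
For instance, a container spec like the following (hypothetical name and image) would give its pod the "Guaranteed" QoS class if it is the pod's only container, since requests and limits are equal:

containers:
- name: app
  image: nginx
  resources:
    requests:
      cpu: "500m"
      memory: "256Mi"
    limits:
      cpu: "500m"
      memory: "256Mi"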

k8s/resource-limits.md

1068/1692

Quality of service impact

  • When a node is overloaded, BestEffort pods are killed first

  • Then, Burstable pods that exceed their limits

  • Burstable and Guaranteed pods below their limits are never killed

    (except if their node fails)

  • If we only use Guaranteed pods, no pod should ever be killed

    (as long as they stay within their limits)

(Pod QoS is also explained in this page of the Kubernetes documentation and in this blog post.)

k8s/resource-limits.md

1069/1692

Where is my swap?

  • The semantics of memory and swap limits on Linux cgroups are complex

  • In particular, it's not possible to disable swap for a cgroup

    (the closest option is to reduce "swappiness")

  • The architects of Kubernetes wanted to ensure that Guaranteed pods never swap

  • The only solution was to disable swap entirely

k8s/resource-limits.md

1070/1692

Alternative point of view

  • Swap enables paging¹ of anonymous² memory

  • Even when swap is disabled, Linux will still page memory for:

    • executables, libraries

    • mapped files

  • Disabling swap will reduce performance and available resources

  • For a good time, read kubernetes/kubernetes#53533

  • Also read this excellent blog post about swap

¹Paging: reading/writing memory pages from/to disk to reclaim physical memory

²Anonymous memory: memory that is not backed by files or blocks

k8s/resource-limits.md

1071/1692

Enabling swap anyway

  • If you don't care that pods are swapping, you can enable swap

  • You will need to add the flag --fail-swap-on=false to kubelet

    (otherwise, it won't start!)

k8s/resource-limits.md

1072/1692

Specifying resources

  • Resource requests are expressed at the container level

  • CPU is expressed in "virtual CPUs"

    (corresponding to the virtual CPUs offered by some cloud providers)

  • CPU can be expressed with a decimal value, or even a "milli" suffix

    (so 100m = 0.1)

  • Memory is expressed in bytes

  • Memory can be expressed with k, M, G, T, Ki, Mi, Gi, Ti suffixes

    (corresponding to 10^3, 10^6, 10^9, 10^12, 2^10, 2^20, 2^30, 2^40)

k8s/resource-limits.md

1073/1692

Specifying resources in practice

This is what the spec of a Pod with resources will look like:

containers:
- name: httpenv
  image: jpetazzo/httpenv
  resources:
    limits:
      memory: "100Mi"
      cpu: "100m"
    requests:
      memory: "100Mi"
      cpu: "10m"

This set of resources makes sure that this service won't be killed (as long as it stays below 100 MB of RAM), but allows its CPU usage to be throttled if necessary.

k8s/resource-limits.md

1074/1692

Default values

  • If we specify a limit without a request:

    the request is set to the limit

  • If we specify a request without a limit:

    there will be no limit

    (which means that the limit will be the size of the node)

  • If we don't specify anything:

    the request is zero and the limit is the size of the node

Unless there are default values defined for our namespace!

k8s/resource-limits.md

1075/1692

We need default resource values

  • If we do not set resource values at all:

    • the limit is "the size of the node"

    • the request is zero

  • This is generally not what we want

    • a container without a limit can use up all the resources of a node

    • if the request is zero, the scheduler can't make a smart placement decision

  • To address this, we can set default values for resources

  • This is done with a LimitRange object

k8s/resource-limits.md

1076/1692

Image separating from the next chapter

1077/1692

Defining min, max, and default resources

(automatically generated title slide)

1078/1692

Defining min, max, and default resources

  • We can create LimitRange objects to indicate any combination of:

    • min and/or max resources allowed per pod

    • default resource limits

    • default resource requests

    • maximal burst ratio (limit/request)

  • LimitRange objects are namespaced

  • They apply to their namespace only

k8s/resource-limits.md

1079/1692

LimitRange example

apiVersion: v1
kind: LimitRange
metadata:
  name: my-very-detailed-limitrange
spec:
  limits:
  - type: Container
    min:
      cpu: "100m"
    max:
      cpu: "2000m"
      memory: "1Gi"
    default:
      cpu: "500m"
      memory: "250Mi"
    defaultRequest:
      cpu: "500m"

k8s/resource-limits.md

1080/1692

Example explanation

The YAML on the previous slide shows an example LimitRange object specifying very detailed limits on CPU usage, and providing defaults on RAM usage.

Note the type: Container line: in the future, it might also be possible to specify limits per Pod, but it's not officially documented yet.

k8s/resource-limits.md

1081/1692

LimitRange details

  • LimitRange restrictions are enforced only when a Pod is created

    (they don't apply retroactively)

  • They don't prevent creation of e.g. an invalid Deployment or DaemonSet

    (but the pods will not be created as long as the LimitRange is in effect)

  • If there are multiple LimitRange restrictions, they all apply together

    (which means that it's possible to specify conflicting LimitRanges,
    preventing any Pod from being created)

  • If a LimitRange specifies a max for a resource but no default,
    that max value becomes the default limit too

k8s/resource-limits.md

1082/1692

Image separating from the next chapter

1083/1692

Namespace quotas

(automatically generated title slide)

1084/1692

Namespace quotas

  • We can also set quotas per namespace

  • Quotas apply to the total usage in a namespace

    (e.g. total CPU limits of all pods in a given namespace)

  • Quotas can apply to resource limits and/or requests

    (like the CPU and memory limits that we saw earlier)

  • Quotas can also apply to other resources:

    • "extended" resources (like GPUs)

    • storage size

    • number of objects (number of pods, services...)

k8s/resource-limits.md

1085/1692

Creating a quota for a namespace

  • Quotas are enforced by creating a ResourceQuota object

  • ResourceQuota objects are namespaced, and apply to their namespace only

  • We can have multiple ResourceQuota objects in the same namespace

  • The most restrictive values are used

k8s/resource-limits.md

1086/1692

Limiting total CPU/memory usage

  • The following YAML specifies an upper bound for limits and requests:
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: a-little-bit-of-compute
    spec:
      hard:
        requests.cpu: "10"
        requests.memory: 10Gi
        limits.cpu: "20"
        limits.memory: 20Gi

These quotas will apply to the namespace where the ResourceQuota is created.

k8s/resource-limits.md

1087/1692

Limiting number of objects

  • The following YAML specifies how many objects of specific types can be created:
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-for-objects
    spec:
      hard:
        pods: 100
        services: 10
        secrets: 10
        configmaps: 10
        persistentvolumeclaims: 20
        services.nodeports: 0
        services.loadbalancers: 0
        count/roles.rbac.authorization.k8s.io: 10

(The count/ syntax allows limiting arbitrary objects, including CRDs.)

k8s/resource-limits.md

1088/1692

YAML vs CLI

  • Quotas can be created with a YAML definition

  • ...Or with the kubectl create quota command

  • Example:

    kubectl create quota my-resource-quota --hard=pods=300,limits.memory=300Gi
  • With both the YAML and the CLI forms, the values always go under the hard section

    (there is no soft quota)

k8s/resource-limits.md

1089/1692

Viewing current usage

When a ResourceQuota is created, we can see how much of it is used:

kubectl describe resourcequota my-resource-quota
Name:                   my-resource-quota
Namespace:              default
Resource                Used  Hard
--------                ----  ----
pods                    12    100
services                1     5
services.loadbalancers  0     0
services.nodeports      0     0

k8s/resource-limits.md

1090/1692

Advanced quotas and PriorityClass

  • Since Kubernetes 1.12, it is possible to create PriorityClass objects

  • Pods can be assigned a PriorityClass

  • Quotas can be linked to a PriorityClass

  • This allows us to reserve resources for pods within a namespace

  • For more details, check this documentation page
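
As a hedged sketch (assuming a PriorityClass named "high" exists), a quota reserved for high-priority pods could look like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-high-priority
spec:
  hard:
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]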

k8s/resource-limits.md

1091/1692

Image separating from the next chapter

1092/1692

Limiting resources in practice

(automatically generated title slide)

1093/1692

Limiting resources in practice

  • We have at least three mechanisms:

    • requests and limits per Pod

    • LimitRange per namespace

    • ResourceQuota per namespace

  • Let's see a simple recommendation to get started with resource limits

k8s/resource-limits.md

1094/1692

Set a LimitRange

  • In each namespace, create a LimitRange object

  • Set a small default CPU request and CPU limit

    (e.g. "100m")

  • Set a default memory request and limit depending on your most common workload

    • for Java, Ruby: start with "1G"

    • for Go, Python, PHP, Node: start with "250M"

  • Set upper bounds slightly below your expected node size

    (80-90% of your node size, with at least a 500M memory buffer)
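
A LimitRange following that recommendation might look like this (a sketch only; the max values assume a hypothetical node with 4 CPUs and 16 GB of RAM, so adjust them to your own nodes):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "100m"
      memory: "250M"
    default:
      cpu: "100m"
      memory: "250M"
    max:
      cpu: "3"
      memory: "13G"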

k8s/resource-limits.md

1095/1692

Set a ResourceQuota

  • In each namespace, create a ResourceQuota object

  • Set generous CPU and memory limits

    (e.g. half the cluster size if the cluster hosts multiple apps)

  • Set generous objects limits

    • these limits should not be here to constrain your users

    • they should catch a runaway process creating many resources

    • example: a custom controller creating many pods
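
A ResourceQuota following that recommendation might look like this (a sketch only; all the numbers are hypothetical and should be sized against your cluster):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: generous-quota
spec:
  hard:
    requests.cpu: "50"
    requests.memory: 100Gi
    limits.cpu: "100"
    limits.memory: 200Gi
    pods: "500"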

k8s/resource-limits.md

1096/1692

Observe, refine, iterate

  • Observe the resource usage of your pods

    (we will see how in the next chapter)

  • Adjust individual pod limits

  • If you see trends: adjust the LimitRange

    (rather than adjusting every individual set of pod limits)

  • Observe the resource usage of your namespaces

    (with kubectl describe resourcequota ...)

  • Rinse and repeat regularly

k8s/resource-limits.md

1097/1692

Additional resources

k8s/resource-limits.md

1098/1692

Image separating from the next chapter

1099/1692

Checking pod and node resource usage

(automatically generated title slide)

1100/1692

Checking pod and node resource usage

  • Since Kubernetes 1.8, metrics are collected by the resource metrics pipeline

  • The resource metrics pipeline is:

    • optional (Kubernetes can function without it)

    • necessary for some features (like the Horizontal Pod Autoscaler)

    • exposed through the Kubernetes API using the aggregation layer

    • usually implemented by the "metrics server"

k8s/metrics-server.md

1101/1692

How to know if the metrics server is running?

  • The easiest way to know is to run kubectl top
  • Check if the core metrics pipeline is available:
    kubectl top nodes

If it shows our nodes and their CPU and memory load, we're good!

k8s/metrics-server.md

1102/1692

Installing metrics server

  • The metrics server doesn't have any particular requirements

    (it doesn't need persistence, as it doesn't store metrics)

  • It has its own repository, kubernetes-incubator/metrics-server

  • The repository comes with YAML files for deployment

  • These files may not work on some clusters

    (e.g. if your node names are not in DNS)

  • The container.training repository has a metrics-server.yaml file to help with that

    (we can kubectl apply -f that file if needed)

k8s/metrics-server.md

1103/1692

Showing container resource usage

  • Once the metrics server is running, we can check container resource usage
  • Show resource usage across all containers:
    kubectl top pods --containers --all-namespaces
  • We can also use selectors (-l app=...)

k8s/metrics-server.md

1104/1692

Other tools

k8s/metrics-server.md

1105/1692

Image separating from the next chapter

1106/1692

Cluster sizing

(automatically generated title slide)

1107/1692

Cluster sizing

  • What happens when the cluster gets full?

  • How can we scale up the cluster?

  • Can we do it automatically?

  • What are other methods to address capacity planning?

k8s/cluster-sizing.md

1108/1692

When are we out of resources?

  • kubelet monitors node resources:

    • memory

    • node disk usage (typically the root filesystem of the node)

    • image disk usage (where container images and RW layers are stored)

  • For each resource, we can provide two thresholds:

    • a hard threshold (if it's met, it provokes immediate action)

    • a soft threshold (provokes action only after a grace period)

  • Resource thresholds and grace periods are configurable

    (by passing kubelet command-line flags)

k8s/cluster-sizing.md

1109/1692

What happens then?

  • If disk usage is too high:

    • kubelet will try to remove terminated pods

    • then, it will try to evict pods

  • If memory usage is too high:

    • it will try to evict pods
  • The node is marked as "under pressure"

  • This temporarily prevents new pods from being scheduled on the node

k8s/cluster-sizing.md

1110/1692

Which pods get evicted?

  • kubelet looks at the pods' QoS and PriorityClass

  • First, pods with BestEffort QoS are considered

  • Then, pods with Burstable QoS exceeding their requests

    (but only if the resource they exceed is the one that is running low on the node)

  • Finally, pods with Guaranteed QoS, and Burstable pods within their requests

  • Within each group, pods are sorted by PriorityClass

  • If there are pods with the same PriorityClass, they are sorted by usage excess

    (i.e. the pods whose usage exceeds their requests the most are evicted first)

k8s/cluster-sizing.md

1111/1692

Eviction of Guaranteed pods

  • Normally, pods with Guaranteed QoS should not be evicted

  • A chunk of resources is reserved for node processes (like kubelet)

  • It is expected that these processes won't use more than this reservation

  • If they do use more resources anyway, all bets are off!

  • If this happens, kubelet must evict Guaranteed pods to preserve node stability

    (or Burstable pods that are still within their requested usage)

k8s/cluster-sizing.md

1112/1692

What happens to evicted pods?

  • The pod is terminated

  • It is marked as Failed at the API level

  • If the pod was created by a controller, the controller will recreate it

  • The pod will be recreated on another node, if there are resources available!

  • For more details about the eviction process, see:

k8s/cluster-sizing.md

1113/1692

What if there are no resources available?

  • Sometimes, a pod cannot be scheduled anywhere:

    • all the nodes are under pressure,

    • or the pod requests more resources than are available

  • The pod then remains in Pending state until the situation improves

k8s/cluster-sizing.md

1114/1692

Cluster scaling

  • One way to improve the situation is to add new nodes

  • This can be done automatically with the Cluster Autoscaler

  • The autoscaler will automatically scale up:

    • if there are pods that failed to be scheduled
  • The autoscaler will automatically scale down:

    • if nodes have a low utilization for an extended period of time

k8s/cluster-sizing.md

1115/1692

Restrictions, gotchas ...

  • The Cluster Autoscaler only supports a few cloud infrastructures

    (see here for a list)

  • The Cluster Autoscaler cannot scale down nodes that have pods using:

    • local storage

    • affinity/anti-affinity rules preventing them from being rescheduled

    • a restrictive PodDisruptionBudget

k8s/cluster-sizing.md

1116/1692

Other ways to do capacity planning

  • "Running Kubernetes without nodes"

  • Systems like Virtual Kubelet or Kiyot can run pods using on-demand resources

    • Virtual Kubelet can leverage e.g. ACI or Fargate to run pods

    • Kiyot runs pods in ad-hoc EC2 instances (1 instance per pod)

  • Economic advantage (no wasted capacity)

  • Security advantage (stronger isolation between pods)

Check this blog post for more details.

k8s/cluster-sizing.md

1117/1692

Image separating from the next chapter

1118/1692

The Horizontal Pod Autoscaler

(automatically generated title slide)

1119/1692

The Horizontal Pod Autoscaler

  • What is the Horizontal Pod Autoscaler, or HPA?

  • It is a controller that can perform horizontal scaling automatically

  • Horizontal scaling = changing the number of replicas

    (adding/removing pods)

  • Vertical scaling = changing the size of individual replicas

    (increasing/reducing CPU and RAM per pod)

  • Cluster scaling = changing the size of the cluster

    (adding/removing nodes)

k8s/horizontal-pod-autoscaler.md

1120/1692

Principle of operation

  • Each HPA resource (or "policy") specifies:

    • which object to monitor and scale (e.g. a Deployment, ReplicaSet...)

    • min/max scaling ranges (the max is a safety limit!)

    • a target resource usage (e.g. the default is CPU=80%)

  • The HPA continuously monitors the CPU usage for the related object

  • It computes how many pods should be running:

    TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)

  • It scales the related object up/down to this target number of pods
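
For instance (a hypothetical worked example): if 3 pods each use 120% of their CPU request and the target is 80%, then TargetNumOfPods = ceil((120+120+120)/80) = ceil(4.5) = 5, so the object gets scaled to 5 replicas.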

k8s/horizontal-pod-autoscaler.md

1121/1692

Pre-requirements

  • The metrics server needs to be running

    (i.e. we need to be able to see pod metrics with kubectl top pods)

  • The pods that we want to autoscale need to have resource requests

    (because the target CPU% is not absolute, but relative to the request)

  • The latter actually makes a lot of sense:

    • if a Pod doesn't have a CPU request, it might be using 10% of CPU...

    • ...but only because there is no CPU time available!

    • this makes sure that we won't add pods to nodes that are already resource-starved

k8s/horizontal-pod-autoscaler.md

1122/1692

Testing the HPA

  • We will start a CPU-intensive web service

  • We will send some traffic to that service

  • We will create an HPA policy

  • The HPA will automatically scale up the service for us

k8s/horizontal-pod-autoscaler.md

1123/1692

A CPU-intensive web service

  • Let's use jpetazzo/busyhttp

    (it is a web server that will use 1s of CPU for each HTTP request)

  • Deploy the web server:

    kubectl create deployment busyhttp --image=jpetazzo/busyhttp
  • Expose it with a ClusterIP service:

    kubectl expose deployment busyhttp --port=80
  • Get the ClusterIP allocated to the service:

    kubectl get svc busyhttp
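
The next slides use $CLUSTERIP; one way to store it in a shell variable (a small sketch, assuming a bash-like shell) is:

CLUSTERIP=$(kubectl get svc busyhttp -o jsonpath={.spec.clusterIP})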

k8s/horizontal-pod-autoscaler.md

1124/1692

Monitor what's going on

  • Let's start a bunch of commands to watch what is happening
  • Monitor pod CPU usage:
    watch kubectl top pods -l app=busyhttp
  • Monitor service latency:
    httping http://$CLUSTERIP/
  • Monitor cluster events:
    kubectl get events -w

k8s/horizontal-pod-autoscaler.md

1125/1692

Send traffic to the service

  • We will use ab (Apache Bench) to send traffic
  • Send a lot of requests to the service, with a concurrency level of 3:
    ab -c 3 -n 100000 http://$CLUSTERIP/

The latency (reported by httping) should increase above 3s.

The CPU utilization should increase to 100%.

(The server is single-threaded and won't go above 100%.)

k8s/horizontal-pod-autoscaler.md

1126/1692

Create an HPA policy

  • There is a helper command to do that for us: kubectl autoscale
  • Create the HPA policy for the busyhttp deployment:
    kubectl autoscale deployment busyhttp --max=10

By default, it will assume a target of 80% CPU usage.

This can also be set with --cpu-percent=.

1127/1692

Create an HPA policy

  • There is a helper command to do that for us: kubectl autoscale
  • Create the HPA policy for the busyhttp deployment:
    kubectl autoscale deployment busyhttp --max=10

By default, it will assume a target of 80% CPU usage.

This can also be set with --cpu-percent=.

The autoscaler doesn't seem to work. Why?

k8s/horizontal-pod-autoscaler.md

1128/1692

What did we miss?

  • The events stream gives us a hint, but to be honest, it's not very clear:

    missing request for cpu

  • We forgot to specify a resource request for our Deployment!

  • The HPA target is not an absolute CPU%

  • It is relative to the CPU requested by the pod

k8s/horizontal-pod-autoscaler.md

1129/1692

Adding a CPU request

  • Let's edit the deployment and add a CPU request

  • Since our server can use up to 1 core, let's request 1 core

  • Edit the Deployment definition:
    kubectl edit deployment busyhttp
  • In the containers list, add the following block:
    resources:
    requests:
    cpu: "1"

k8s/horizontal-pod-autoscaler.md

1130/1692

Results

  • After saving and quitting, a rolling update happens

    (if ab or httping exits, make sure to restart it)

  • It will take a minute or two for the HPA to kick in:

    • the HPA runs every 30 seconds by default

    • it needs to gather metrics from the metrics server first

  • If we scale further up (or down), the HPA will react after a few minutes:

    • it won't scale up if it already scaled in the last 3 minutes

    • it won't scale down if it already scaled in the last 5 minutes

k8s/horizontal-pod-autoscaler.md

1131/1692

What about other metrics?

  • The HPA in API group autoscaling/v1 only supports CPU scaling

  • The HPA in API group autoscaling/v2beta2 supports metrics from various API groups:

    • metrics.k8s.io, aka metrics server (per-Pod CPU and RAM)

    • custom.metrics.k8s.io, custom metrics per Pod

    • external.metrics.k8s.io, external metrics (not associated to Pods)

  • Kubernetes doesn't implement any of these API groups

  • Using these metrics requires registering additional APIs

  • The metrics provided by metrics server are standard; everything else is custom

  • For more details, see this great blog post or this talk
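
For reference, a hedged sketch of an autoscaling/v2beta2 policy equivalent to our busyhttp one (still using the plain CPU metric from the metrics server) could look like this:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: busyhttp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: busyhttp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80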

k8s/horizontal-pod-autoscaler.md

1132/1692

Cleanup

  • Since busyhttp uses CPU cycles, let's stop it before moving on
  • Delete the busyhttp Deployment:
    kubectl delete deployment busyhttp

k8s/horizontal-pod-autoscaler.md

1133/1692

Image separating from the next chapter

1134/1692

Declarative vs imperative

(automatically generated title slide)

1135/1692

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

1136/1692

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

  • Declarative seems simpler at first ...

1137/1692

Declarative vs imperative

  • Our container orchestrator puts a very strong emphasis on being declarative

  • Declarative:

    I would like a cup of tea.

  • Imperative:

    Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.

  • Declarative seems simpler at first ...

  • ... As long as you know how to brew tea

shared/declarative.md

1138/1692

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

1139/1692

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

1140/1692

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

1141/1692

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

    ³Ah, finally, containers! Something we know about. Let's get to work, shall we?

1142/1692

Declarative vs imperative

  • What declarative would really be:

    I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.

    ¹An infusion is obtained by letting the object steep a few minutes in hot² water.

    ²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.

    ³Ah, finally, containers! Something we know about. Let's get to work, shall we?

Did you know there was an ISO standard specifying how to brew tea?

shared/declarative.md

1143/1692

Declarative vs imperative

  • Imperative systems:

    • simpler

    • if a task is interrupted, we have to restart from scratch

  • Declarative systems:

    • if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary

    • we need to be able to observe the system

    • ... and compute a "diff" between what we have and what we want

shared/declarative.md

1144/1692

Declarative vs imperative in Kubernetes

  • With Kubernetes, we cannot say: "run this container"

  • All we can do is write a spec and push it to the API server

    (for example, by creating a resource like a Pod or a Deployment)

  • The API server will validate that spec (and reject it if it's invalid)

  • Then it will store it in etcd

  • A controller will "notice" that spec and act upon it

k8s/declarative.md

1145/1692

Reconciling state

  • Watch for the spec fields in the YAML files later!

  • The spec describes how we want the thing to be

  • Kubernetes will reconcile the current state with the spec
    (technically, this is done by a number of controllers)

  • When we want to change some resource, we update the spec

  • Kubernetes will then converge that resource

k8s/declarative.md

1146/1692

Image separating from the next chapter

1147/1692

Kubernetes Management Approaches

(automatically generated title slide)

1148/1692

Kubernetes Management Approaches

  • Imperative commands: run, expose, scale, edit, create deployment
    • Best for dev/learning/personal projects
    • Easy to learn, hardest to manage over time
1149/1692

Kubernetes Management Approaches

  • Imperative commands: run, expose, scale, edit, create deployment

    • Best for dev/learning/personal projects
    • Easy to learn, hardest to manage over time
  • Imperative objects: create -f file.yml, replace -f file.yml, delete...

    • Good for prod of small environments, single file per command
    • Store your changes in git-based yaml files
    • Hard to automate
1150/1692

Kubernetes Management Approaches

  • Imperative commands: run, expose, scale, edit, create deployment

    • Best for dev/learning/personal projects
    • Easy to learn, hardest to manage over time
  • Imperative objects: create -f file.yml, replace -f file.yml, delete...

    • Good for prod of small environments, single file per command
    • Store your changes in git-based yaml files
    • Hard to automate
  • Declarative objects: apply -f file.yml or -f dir/, diff

    • Best for prod, easier to automate
    • Harder to understand and predict changes

k8smastery/cli-good-better-best.md

1151/1692

Image separating from the next chapter

1152/1692

Recording deployment actions

(automatically generated title slide)

1153/1692

Recording deployment actions

  • Some commands that modify a Deployment accept an optional --record flag

    (Example: kubectl set image deployment worker worker=alpine --record)

  • That flag will store the command line in the Deployment

    (Technically, using the annotation kubernetes.io/change-cause)

  • It gets copied to the corresponding ReplicaSet

    (Allowing us to keep track of which command created or promoted this ReplicaSet)

  • We can view this information with kubectl rollout history

k8s/record.md

1154/1692

Using --record

  • Let's make a couple of changes to a Deployment and record them
  • Roll back worker to image version 0.1:

    kubectl set image deployment worker worker=dockercoins/worker:v0.1 --record
  • Promote it to version 0.2 again:

    kubectl set image deployment worker worker=dockercoins/worker:v0.2 --record
  • View the change history:

    kubectl rollout history deployment worker

k8s/record.md

1155/1692

Pitfall #1: forgetting --record

  • What happens if we don't specify --record?
  • Promote worker to image version 0.3:

    kubectl set image deployment worker worker=dockercoins/worker:v0.3
  • View the change history:

    kubectl rollout history deployment worker
1156/1692

Pitfall #1: forgetting --record

  • What happens if we don't specify --record?
  • Promote worker to image version 0.3:

    kubectl set image deployment worker worker=dockercoins/worker:v0.3
  • View the change history:

    kubectl rollout history deployment worker

It recorded version 0.2 instead of 0.3! Why?

k8s/record.md

1157/1692

How --record really works

  • kubectl adds the annotation kubernetes.io/change-cause to the Deployment

  • The Deployment controller copies that annotation to the ReplicaSet

  • kubectl rollout history shows the ReplicaSets' annotations

  • If we don't specify --record, the annotation is not updated

  • The previous value of that annotation is copied to the new ReplicaSet

  • In that case, the ReplicaSet annotation does not reflect reality!

k8s/record.md

1158/1692

Pitfall #2: recording scale commands

  • What happens if we use kubectl scale --record?
  • Check the current history:

    kubectl rollout history deployment worker
  • Scale the deployment:

    kubectl scale deployment worker --replicas=3 --record
  • Check the change history again:

    kubectl rollout history deployment worker
1159/1692

Pitfall #2: recording scale commands

  • What happens if we use kubectl scale --record?
  • Check the current history:

    kubectl rollout history deployment worker
  • Scale the deployment:

    kubectl scale deployment worker --replicas=3 --record
  • Check the change history again:

    kubectl rollout history deployment worker

The last entry in the history was overwritten by the scale command! Why?

k8s/record.md

1160/1692

Actions that don't create a new ReplicaSet

  • The scale command updates the Deployment definition

  • But it doesn't create a new ReplicaSet

  • Using the --record flag sets the annotation like before

  • The annotation gets copied to the existing ReplicaSet

  • This overwrites the previous annotation that was there

  • In that case, we lose the previous change cause!

k8s/record.md

1161/1692

Updating the annotation directly

  • Let's see what happens if we set the annotation manually
  • Annotate the Deployment:

    kubectl annotate deployment worker kubernetes.io/change-cause="Just for fun"
  • Check that our annotation shows up in the change history:

    kubectl rollout history deployment worker
1162/1692

Updating the annotation directly

  • Let's see what happens if we set the annotation manually
  • Annotate the Deployment:

    kubectl annotate deployment worker kubernetes.io/change-cause="Just for fun"
  • Check that our annotation shows up in the change history:

    kubectl rollout history deployment worker

Our annotation shows up (and overwrote whatever was there before).

k8s/record.md

1163/1692

Using change cause

  • It sounds like a good idea to use --record, but:

    "Incorrect documentation is often worse than no documentation."
    (Bertrand Meyer)

  • If we use --record once, we need to either:

    • use it every single time after that

    • or clear the Deployment annotation after using --record
      (subsequent changes will show up with a <none> change cause)

  • A safer way is to set it through our tooling

k8s/record.md

1164/1692

Image separating from the next chapter

1165/1692

Git-based workflows

(automatically generated title slide)

1166/1692

Git-based workflows

  • Deploying with kubectl has downsides:

    • we don't know who deployed what and when

    • there is no audit trail (except the API server logs)

    • there is no easy way to undo most operations

    • there is no review/approval process (like for code reviews)

  • We have all these things for code, though

  • Can we manage cluster state like we manage our source code?

k8s/gitworkflows.md

1167/1692

Reminder: Kubernetes is declarative

  • All we do is create/change resources

  • These resources have a perfect YAML representation

  • All we do is manipulate these YAML representations

    (kubectl run generates a YAML file that gets applied)

  • We can store these YAML representations in a code repository

  • We can version that code repository and maintain it with best practices

    • define which branch(es) can go to qa/staging/production

    • control who can push to which branches

    • have formal review processes, pull requests ...

k8s/gitworkflows.md

1168/1692

Enabling git-based workflows

  • There are a few tools out there to help us do that

  • We'll see demos of two of them: Flux and Gitkube

  • There are many other tools, some of them with even more features

  • There are also many integrations with popular CI/CD systems

    (e.g.: GitLab, Jenkins, ...)

k8s/gitworkflows.md

1169/1692

Flux overview

  • We put our Kubernetes resources as YAML files in a git repository

  • Flux polls that repository regularly (every 5 minutes by default)

  • The resources described by the YAML files are created/updated automatically

  • Changes are made by updating the code in the repository

k8s/gitworkflows.md

1170/1692

Preparing a repository for Flux

  • We need a repository with Kubernetes YAML files

  • I have one: https://github.com/jpetazzo/kubercoins

  • Fork it to your GitHub account

  • Create a new branch in your fork; e.g. prod

    (e.g. by adding a line in the README through the GitHub web UI)

  • This is the branch that we are going to use for deployment

k8s/gitworkflows.md

1171/1692

Setting up Flux

  • Clone the Flux repository:

    git clone https://github.com/fluxcd/flux
  • Edit deploy/flux-deployment.yaml

  • Change the --git-url and --git-branch parameters:

    - --git-url=git@github.com:your-git-username/kubercoins
    - --git-branch=prod
  • Apply all the YAML:

    kubectl apply -f deploy/

k8s/gitworkflows.md

1172/1692

Allowing Flux to access the repository

  • When it starts, Flux generates an SSH key

  • Display that key:

    kubectl logs deployment/flux | grep identity
  • Then add that key to the repository, giving it write access

    (some Flux features require write access)

  • After a minute or so, DockerCoins will be deployed to the current namespace

k8s/gitworkflows.md

1173/1692

Making changes

  • Make changes (on the prod branch), e.g. change replicas in worker

  • After a few minutes, the changes will be picked up by Flux and applied

k8s/gitworkflows.md

1174/1692

Other features

  • Flux can keep a list of all the tags of all the images we're running

  • The fluxctl tool can show us if we're running the latest images

  • We can also "automate" a resource (i.e. automatically deploy new images)

  • And much more!

k8s/gitworkflows.md

1175/1692

Gitkube overview

  • We put our Kubernetes resources as YAML files in a git repository

  • Gitkube is a git server (or "git remote")

  • After making changes to the repository, we push to Gitkube

  • Gitkube applies the resources to the cluster

k8s/gitworkflows.md

1176/1692

Setting up Gitkube

  • Install the CLI:

    sudo curl -L -o /usr/local/bin/gitkube \
    https://github.com/hasura/gitkube/releases/download/v0.2.1/gitkube_linux_amd64
    sudo chmod +x /usr/local/bin/gitkube
  • Install Gitkube on the cluster:

    gitkube install --expose ClusterIP

k8s/gitworkflows.md

1177/1692

Creating a Remote

  • Gitkube provides a new type of API resource: Remote

    (this is using a mechanism called Custom Resource Definitions or CRD)

  • Create and apply a YAML file containing the following manifest:

    apiVersion: gitkube.sh/v1alpha1
    kind: Remote
    metadata:
      name: example
    spec:
      authorizedKeys:
      - ssh-rsa AAA...
      manifests:
        path: "."

    (replace the ssh-rsa AAA... section with the content of ~/.ssh/id_rsa.pub)

k8s/gitworkflows.md

1178/1692

Pushing to our remote

  • Get the gitkubed IP address:

    kubectl -n kube-system get svc gitkubed
    IP=$(kubectl -n kube-system get svc gitkubed -o json |
    jq -r .spec.clusterIP)
  • Get ourselves a sample repository with resource YAML files:

    git clone git://github.com/jpetazzo/kubercoins
    cd kubercoins
  • Add the remote and push to it:

    git remote add k8s ssh://default-example@$IP/~/git/default-example
    git push k8s master

k8s/gitworkflows.md

1179/1692

Making changes

  • Edit a local file

  • Commit

  • Push!

  • Make sure that you push to the k8s remote

k8s/gitworkflows.md

1180/1692

Other features

  • Gitkube can also build container images for us

    (see the documentation for more details)

  • Gitkube can also deploy Helm charts

    (instead of raw YAML files)

k8s/gitworkflows.md

1181/1692

Image separating from the next chapter

1182/1692

Building images with the Docker Engine

(automatically generated title slide)

1183/1692

Building images with the Docker Engine

  • Until now, we have built our images manually, directly on a node

  • We are going to show how to build images from within the cluster

    (by executing code in a container controlled by Kubernetes)

  • We are going to use the Docker Engine for that purpose

  • To access the Docker Engine, we will mount the Docker socket in our container

  • After building the image, we will push it to our self-hosted registry

k8s/build-with-docker.md

1184/1692

Resource specification for our builder pod

apiVersion: v1
kind: Pod
metadata:
  name: build-image
spec:
  restartPolicy: OnFailure
  containers:
  - name: docker-build
    image: docker
    env:
    - name: REGISTRY_PORT
      value: "3XXXX"
    command: ["sh", "-c"]
    args:
    - |
      apk add --no-cache git &&
      mkdir /workspace &&
      git clone https://github.com/jpetazzo/container.training /workspace &&
      docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
      docker push localhost:$REGISTRY_PORT/worker
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock

k8s/build-with-docker.md

1185/1692

Breaking down the pod specification (1/2)

  • restartPolicy: OnFailure prevents the build from running in an infinite loop

  • We use the docker image (so that the docker CLI is available)

  • We rely on the fact that the docker image is based on alpine

    (which is why we use apk to install git)

  • The port for the registry is passed through an environment variable

    (this avoids repeating it in the specification, which would be error-prone)

The environment variable value has to be a string, so the quotes around it are mandatory!

k8s/build-with-docker.md

1186/1692

Breaking down the pod specification (2/2)

  • The volume docker-socket is declared with a hostPath, indicating a bind-mount

  • It is then mounted in the container onto the default Docker socket path

  • We show an interesting way to specify the commands to run in the container:

    • the command executed will be sh -c <args>

    • args is a list of strings

    • | is used to pass a multi-line string in the YAML file

k8s/build-with-docker.md

1187/1692

Running our pod

  • Let's try this out!
  • Check the port used by our self-hosted registry:

    kubectl get svc registry
  • Edit ~/container.training/k8s/docker-build.yaml to put the port number

  • Schedule the pod by applying the resource file:

    kubectl apply -f ~/container.training/k8s/docker-build.yaml
  • Watch the logs:

    stern build-image

k8s/build-with-docker.md

1188/1692

What's missing?

What do we need to change to make this production-ready?

  • Build from a long-running container (e.g. a Deployment) triggered by web hooks

    (the payload of the web hook could indicate the repository to build)

  • Build a specific branch or tag; tag image accordingly

  • Handle repositories where the Dockerfile is not at the root

    (or containing multiple Dockerfiles)

  • Expose build logs so that troubleshooting is straightforward

1189/1692

What's missing?

What do we need to change to make this production-ready?

  • Build from a long-running container (e.g. a Deployment) triggered by web hooks

    (the payload of the web hook could indicate the repository to build)

  • Build a specific branch or tag; tag image accordingly

  • Handle repositories where the Dockerfile is not at the root

    (or containing multiple Dockerfiles)

  • Expose build logs so that troubleshooting is straightforward

🤔 That seems like a lot of work!

1190/1692

What's missing?

What do we need to change to make this production-ready?

  • Build from a long-running container (e.g. a Deployment) triggered by web hooks

    (the payload of the web hook could indicate the repository to build)

  • Build a specific branch or tag; tag image accordingly

  • Handle repositories where the Dockerfile is not at the root

    (or containing multiple Dockerfiles)

  • Expose build logs so that troubleshooting is straightforward

🤔 That seems like a lot of work!

That's why services like Docker Hub (with automated builds) are helpful.
They handle the whole "code repository → Docker image" workflow.

k8s/build-with-docker.md

1191/1692

Things to be aware of

  • This is talking directly to a node's Docker Engine to build images

  • It bypasses resource allocation mechanisms used by Kubernetes

    (but you can use taints and tolerations to dedicate builder nodes)

  • Be careful not to introduce conflicts when naming images

    (e.g. do not allow the user to specify the image names!)

  • Your builds are going to be fast

    (because they will leverage Docker's caching system)

k8s/build-with-docker.md

1192/1692

Image separating from the next chapter

1193/1692

Building images with Kaniko

(automatically generated title slide)

1194/1692

Building images with Kaniko

  • Kaniko is an open source tool to build container images within Kubernetes

  • It can build an image using any standard Dockerfile

  • The resulting image can be pushed to a registry or exported as a tarball

  • It doesn't require any particular privilege

    (and can therefore run in a regular container in a regular pod)

  • This combination of features is pretty unique

    (most other tools use different formats, or require elevated privileges)

k8s/build-with-kaniko.md

1195/1692

Kaniko in practice

  • Kaniko provides an "executor image", gcr.io/kaniko-project/executor

  • When running that image, we need to specify at least:

    • the path to the build context (=the directory with our Dockerfile)

    • the target image name (including the registry address)

  • Simplified example:

    docker run \
    -v ...:/workspace gcr.io/kaniko-project/executor \
    --context=/workspace \
    --destination=registry:5000/image_name:image_tag

k8s/build-with-kaniko.md

1196/1692

Running Kaniko in a Docker container

  • Let's build the image for the DockerCoins worker service with Kaniko
  • Find the port number for our self-hosted registry:

    kubectl get svc registry
    PORT=$(kubectl get svc registry -o json | jq .spec.ports[0].nodePort)
  • Run Kaniko:

    docker run --net host \
    -v ~/container.training/dockercoins/worker:/workspace \
    gcr.io/kaniko-project/executor \
    --context=/workspace \
    --destination=127.0.0.1:$PORT/worker-kaniko:latest

We use --net host so that we can connect to the registry over 127.0.0.1.

k8s/build-with-kaniko.md

1197/1692

Running Kaniko in a Kubernetes pod

  • We need to mount or copy the build context to the pod

  • We are going to build straight from the git repository

    (to avoid depending on files sitting on a node, outside of containers)

  • We need to git clone the repository before running Kaniko

  • We are going to use two containers sharing a volume:

    • a first container to git clone the repository to the volume

    • a second container to run Kaniko, using the content of the volume

  • However, we need the first container to be done before running the second one

🤔 How could we do that?

k8s/build-with-kaniko.md

1198/1692

Init Containers to the rescue

  • A pod can have a list of initContainers

  • initContainers are executed in the specified order

  • Each Init Container needs to complete (exit) successfully

  • If any Init Container fails (non-zero exit status) the pod fails

    (what happens next depends on the pod's restartPolicy)

  • After all Init Containers have run successfully, normal containers are started

  • We are going to execute the git clone operation in an Init Container

k8s/build-with-kaniko.md

1199/1692

Our Kaniko builder pod

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  initContainers:
  - name: git-clone
    image: alpine
    command: ["sh", "-c"]
    args:
    - |
      apk add --no-cache git &&
      git clone git://github.com/jpetazzo/container.training /workspace
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  containers:
  - name: build-image
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--context=/workspace/dockercoins/rng"
    - "--insecure"
    - "--destination=registry:5000/rng-kaniko:latest"
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace

k8s/build-with-kaniko.md

1200/1692

Explanations

  • We define a volume named workspace (using the default emptyDir provider)

  • That volume is mounted to /workspace in both our containers

  • The git-clone Init Container installs git and runs git clone

  • The build-image container executes Kaniko

  • We use our self-hosted registry DNS name (registry)

  • We add --insecure to use plain HTTP to talk to the registry

k8s/build-with-kaniko.md

1201/1692

Running our Kaniko builder pod

  • The YAML for the pod is in k8s/kaniko-build.yaml
  • Create the pod:

    kubectl apply -f ~/container.training/k8s/kaniko-build.yaml
  • Watch the logs:

    stern kaniko

k8s/build-with-kaniko.md

1202/1692

Discussion

What should we use? The Docker build technique shown earlier? Kaniko? Something else?

  • The Docker build technique is simple, and has the potential to be very fast

  • However, it doesn't play nice with Kubernetes resource limits

  • Kaniko plays nice with resource limits

  • However, it's slower (there is no caching at all)

  • The ultimate building tool will probably be Jessica Frazelle's img builder

    (it depends on upstream changes that are not in Kubernetes 1.11.2 yet)

But ... is it all about speed? (No!)

k8s/build-with-kaniko.md

1203/1692

The big picture

  • For starters: the Docker Hub automated builds are very easy to set up

    • link a GitHub repository with the Docker Hub

    • each time you push to GitHub, an image gets built on the Docker Hub

  • If this doesn't work for you: why?

    • too slow (I'm far from us-east-1!) → consider using your cloud provider's registry

    • I'm not using a cloud provider → ok, perhaps you need to self-host then

    • I need fancy features (e.g. CI) → consider something like GitLab

k8s/build-with-kaniko.md

1204/1692

Image separating from the next chapter

1205/1692

Building our own cluster

(automatically generated title slide)

1206/1692

Building our own cluster

  • Let's build our own cluster!

    Perfection is attained not when there is nothing left to add, but when there is nothing left to take away. (Antoine de Saint-Exupery)

  • Our goal is to build a minimal cluster allowing us to:

    • create a Deployment (with kubectl run or kubectl create deployment)
    • expose it with a Service
    • connect to that service
  • "Minimal" here means:

    • smaller number of components
    • smaller number of command-line flags
    • smaller number of configuration files

k8s/dmuc.md

1207/1692

Non-goals

  • For now, we don't care about security

  • For now, we don't care about scalability

  • For now, we don't care about high availability

  • All we care about is simplicity

k8s/dmuc.md

1208/1692

Our environment

  • We will use the machine indicated as dmuc1

    (this stands for "Dessine Moi Un Cluster", i.e. "Draw Me A Cluster",
    a nod to "Dessine-moi un mouton" / "Draw me a sheep" in Saint-Exupery's "The Little Prince")

  • This machine:

    • runs Ubuntu LTS

    • has Kubernetes, Docker, and etcd binaries installed

    • but nothing is running

k8s/dmuc.md

1209/1692

Checking our environment

  • Let's make sure we have everything we need first
  • Log into the dmuc1 machine

  • Get root:

    sudo -i
  • Check available versions:

    etcd --version
    kube-apiserver --version
    dockerd --version

k8s/dmuc.md

1210/1692

The plan

  1. Start API server

  2. Interact with it (create Deployment and Service)

  3. See what's broken

  4. Fix it and go back to step 2 until it works!

k8s/dmuc.md

1211/1692

Dealing with multiple processes

  • We are going to start many processes

  • Depending on what you're comfortable with, you can:

    • open multiple windows and multiple SSH connections

    • use a terminal multiplexer like screen or tmux

    • put processes in the background with &
      (warning: log output might get confusing to read!)

k8s/dmuc.md

1212/1692

Starting API server

  • Try to start the API server:
    kube-apiserver
    # It will fail with "--etcd-servers must be specified"

Since the API server stores everything in etcd, it cannot start without it.

k8s/dmuc.md

1213/1692

Starting etcd

  • Try to start etcd:
    etcd

Success!

Note the last line of output:

serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

Sure, that's discouraged. But thanks for telling us the address!

k8s/dmuc.md

1214/1692

Starting API server (for real)

  • Try again, passing the --etcd-servers argument

  • That argument should be a comma-separated list of URLs

  • Start API server:
    kube-apiserver --etcd-servers http://127.0.0.1:2379

Success!

k8s/dmuc.md

1215/1692

Interacting with API server

  • Let's try a few "classic" commands
  • List nodes:

    kubectl get nodes
  • List services:

    kubectl get services

We should get No resources found. and the kubernetes service, respectively.

Note: the API server automatically created the kubernetes service entry.

k8s/dmuc.md

1216/1692

What about kubeconfig?

  • We didn't need to create a kubeconfig file

  • By default, the API server is listening on localhost:8080

    (without requiring authentication)

  • By default, kubectl connects to localhost:8080

    (without providing authentication)
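
If we want to be explicit (or talk to another API server), we can override that default with kubectl's --server flag, e.g.:

    kubectl --server http://localhost:8080 get nodes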

k8s/dmuc.md

1217/1692

Creating a Deployment

  • Let's run a web server!
  • Create a Deployment with NGINX:
    kubectl create deployment web --image=nginx

Success?

k8s/dmuc.md

1218/1692

Checking our Deployment status

  • Look at pods, deployments, etc.:
    kubectl get all

Our Deployment is in bad shape:

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   0/1     0            0           2m26s

And, there is no ReplicaSet, and no Pod.

k8s/dmuc.md

1219/1692

What's going on?

  • We stored the definition of our Deployment in etcd

    (through the API server)

  • But there is no controller to do the rest of the work

  • We need to start the controller manager

k8s/dmuc.md

1220/1692

Starting the controller manager

  • Try to start the controller manager:
    kube-controller-manager

The final error message is:

invalid configuration: no configuration has been provided

But the logs include another useful piece of information:

Neither --kubeconfig nor --master was specified.
Using the inClusterConfig. This might not work.

k8s/dmuc.md

1221/1692

Reminder: everyone talks to API server

  • The controller manager needs to connect to the API server

  • It does not have a convenient localhost:8080 default

  • We can pass the connection information in two ways:

    • --master and a host:port combination (easy)

    • --kubeconfig and a kubeconfig file

  • For simplicity, we'll use the first option

k8s/dmuc.md

1222/1692

Starting the controller manager (for real)

  • Start the controller manager:
    kube-controller-manager --master http://localhost:8080

Success!

k8s/dmuc.md

1223/1692

Checking our Deployment status

  • Check all our resources again:
    kubectl get all

We now have a ReplicaSet.

But we still don't have a Pod.

k8s/dmuc.md

1224/1692

What's going on?

In the controller manager logs, we should see something like this:

E0404 15:46:25.753376 22847 replica_set.go:450] Sync "default/web-5bc9bd5b8d"
failed with No API token found for service account "default", retry after the
token is automatically created and added to the service account
  • The service account default was automatically added to our Deployment

    (and to its pods)

  • The service account default exists

  • But it doesn't have an associated token

    (the token is a secret; creating it requires a signature, and therefore a signing key / CA)

k8s/dmuc.md

1225/1692

Solving the missing token issue

There are many ways to solve that issue.

We are going to list a few (to get an idea of what's happening behind the scenes).

Of course, we don't need to perform all the solutions mentioned here.

k8s/dmuc.md

1226/1692

Option 1: disable service accounts

  • Restart the API server with --disable-admission-plugins=ServiceAccount

  • The API server will no longer add a service account automatically

  • Our pods will be created without a service account
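
Putting that flag together with the --etcd-servers flag we used earlier, the restart could look like this (a sketch; keep any other flags you were already passing):

    kube-apiserver \
        --etcd-servers http://127.0.0.1:2379 \
        --disable-admission-plugins=ServiceAccount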

k8s/dmuc.md

1227/1692

Option 2: do not mount the (missing) token

  • Add automountServiceAccountToken: false to the Deployment spec

    or

  • Add automountServiceAccountToken: false to the default ServiceAccount

  • The ReplicaSet controller will no longer create pods referencing the (missing) token

  • Programmatically change the default ServiceAccount:
    kubectl patch sa default -p "automountServiceAccountToken: false"
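
For the Deployment variant mentioned above, the field goes in the pod template; a minimal fragment would look like this:

    spec:
      template:
        spec:
          automountServiceAccountToken: false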

k8s/dmuc.md

1228/1692

Option 3: set up service accounts properly

  • This is the most complex option!

  • Generate a key pair

  • Pass the private key to the controller manager

    (to generate and sign tokens)

  • Pass the public key to the API server

    (to verify these tokens)
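
A hedged sketch of what that could look like (file names are arbitrary; the flag names are the standard ones, but check them against your Kubernetes version):

    # generate a key pair
    openssl genrsa -out /tmp/sa.key 2048
    openssl rsa -in /tmp/sa.key -pubout -out /tmp/sa.pub
    # private key goes to the controller manager (to sign tokens)
    kube-controller-manager --master http://localhost:8080 \
        --service-account-private-key-file=/tmp/sa.key
    # public key goes to the API server (to verify tokens)
    kube-apiserver --etcd-servers http://127.0.0.1:2379 \
        --service-account-key-file=/tmp/sa.pub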

k8s/dmuc.md

1229/1692

Continuing without service account token

  • Once we patch the default service account, the ReplicaSet can create a Pod
  • Check that we now have a pod:
    kubectl get all

Note: we might have to wait a bit for the ReplicaSet controller to retry.

If we're impatient, we can restart the controller manager.

k8s/dmuc.md

1230/1692

What's next?

  • Our pod exists, but it is in Pending state

  • Remember, we don't have a node so far

    (kubectl get nodes shows an empty list)

  • We need to:

    • start a container engine

    • start kubelet

k8s/dmuc.md

1231/1692

Starting a container engine

  • We're going to use Docker (because it's the default option)
  • Start the Docker Engine:
    dockerd

Success!

Feel free to check that it actually works with e.g.:

docker run alpine echo hello world

k8s/dmuc.md

1232/1692

Starting kubelet

  • If we start kubelet without arguments, it will start

  • But it will not join the cluster!

  • It will start in standalone mode

  • Just like with the controller manager, we need to tell kubelet where the API server is

  • Alas, kubelet doesn't have a simple --master option

  • We have to use --kubeconfig

  • We need to write a kubeconfig file for kubelet

k8s/dmuc.md

1233/1692

Writing a kubeconfig file

  • We can copy/paste a bunch of YAML

  • Or we can generate the file with kubectl

  • Create the file ~/.kube/config with kubectl:
    kubectl config \
    set-cluster localhost --server http://localhost:8080
    kubectl config \
    set-context localhost --cluster localhost
    kubectl config \
    use-context localhost

k8s/dmuc.md

1234/1692

Our ~/.kube/config file

The file that we generated looks like the one below.

That one has been slightly simplified (removing extraneous fields), but it is still valid.

apiVersion: v1
kind: Config
current-context: localhost
contexts:
- name: localhost
  context:
    cluster: localhost
clusters:
- name: localhost
  cluster:
    server: http://localhost:8080

k8s/dmuc.md

1235/1692

Starting kubelet

  • Start kubelet with that kubeconfig file:
    kubelet --kubeconfig ~/.kube/config

Success!

k8s/dmuc.md

1236/1692

Looking at our 1-node cluster

  • Let's check that our node registered correctly
  • List the nodes in our cluster:
    kubectl get nodes

Our node should show up.

Its name will be its hostname (it should be dmuc1).

k8s/dmuc.md

1237/1692

Are we there yet?

  • Let's check if our pod is running
  • List all resources:
    kubectl get all
1238/1692

Are we there yet?

  • Let's check if our pod is running
  • List all resources:
    kubectl get all

Our pod is still Pending. 🤔

1239/1692

Are we there yet?

  • Let's check if our pod is running
  • List all resources:
    kubectl get all

Our pod is still Pending. 🤔

Which is normal: it needs to be scheduled.

(i.e., something needs to decide which node it should go on.)

k8s/dmuc.md

1240/1692

Scheduling our pod

  • Why do we need a scheduling decision, since we have only one node?

  • The node might be full, unavailable; the pod might have constraints ...

  • The easiest way to schedule our pod is to start the scheduler

    (we could also schedule it manually)

k8s/dmuc.md

1241/1692

Starting the scheduler

  • The scheduler also needs to know how to connect to the API server

  • Just like for controller manager, we can use --kubeconfig or --master

  • Start the scheduler:
    kube-scheduler --master http://localhost:8080
  • Our pod should now start correctly

k8s/dmuc.md

1242/1692

Checking the status of our pod

  • Our pod will go through a short ContainerCreating phase

  • Then it will be Running

  • Check pod status:
    kubectl get pods

Success!

k8s/dmuc.md

1243/1692

Scheduling a pod manually

  • We can schedule a pod in Pending state by creating a Binding, e.g.:

    kubectl create -f- <<EOF
    apiVersion: v1
    kind: Binding
    metadata:
      name: name-of-the-pod
    target:
      apiVersion: v1
      kind: Node
      name: name-of-the-node
    EOF
  • This is actually how the scheduler works!

  • It watches pods, makes scheduling decisions, and creates Binding objects

k8s/dmuc.md

1244/1692

Connecting to our pod

  • Let's check that our pod correctly runs NGINX
  • Check our pod's IP address:

    kubectl get pods -o wide
  • Send some HTTP request to the pod:

    curl X.X.X.X

We should see the Welcome to nginx! page.

k8s/dmuc.md

1245/1692

Exposing our Deployment

  • We can now create a Service associated with this Deployment
  • Expose the Deployment's port 80:

    kubectl expose deployment web --port=80
  • Check the Service's ClusterIP, and try connecting:

    kubectl get service web
    curl http://X.X.X.X
1246/1692

Exposing our Deployment

  • We can now create a Service associated with this Deployment
  • Expose the Deployment's port 80:

    kubectl expose deployment web --port=80
  • Check the Service's ClusterIP, and try connecting:

    kubectl get service web
    curl http://X.X.X.X

This won't work. We need kube-proxy to enable internal communication.

k8s/dmuc.md

1247/1692

Starting kube-proxy

  • kube-proxy also needs to connect to the API server

  • It can work with the --master flag

    (although that will be deprecated in the future)

  • Start kube-proxy:
    kube-proxy --master http://localhost:8080

k8s/dmuc.md

1248/1692

Connecting to our Service

  • Now that kube-proxy is running, we should be able to connect
  • Check the Service's ClusterIP again, and retry connecting:
    kubectl get service web
    curl http://X.X.X.X

Success!

k8s/dmuc.md

1249/1692

How kube-proxy works

  • kube-proxy watches Service resources

  • When a Service is created or updated, kube-proxy creates iptables rules

  • Check out the OUTPUT chain in the nat table:

    iptables -t nat -L OUTPUT
  • Traffic is sent to KUBE-SERVICES; check that too:

    iptables -t nat -L KUBE-SERVICES

For each Service, there is an entry in that chain.

k8s/dmuc.md

1250/1692

Diving into iptables

  • The last command showed a chain named KUBE-SVC-... corresponding to our service
  • Check that KUBE-SVC-... chain:

    iptables -t nat -L KUBE-SVC-...
  • It should show a jump to a KUBE-SEP-... chain; check that out too:

    iptables -t nat -L KUBE-SEP-...

This is a DNAT rule to rewrite the destination address of the connection to our pod.

This is how kube-proxy works!

k8s/dmuc.md

1251/1692

kube-router, IPVS

  • With recent versions of Kubernetes, it is possible to tell kube-proxy to use IPVS

  • IPVS is a more powerful load balancing framework

    (remember: iptables was primarily designed for firewalling, not load balancing!)

  • It is also possible to replace kube-proxy with kube-router

  • kube-router uses IPVS by default

  • kube-router can also perform other functions

    (e.g., we can use it as a CNI plugin to provide pod connectivity)
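
For our minimal setup, switching kube-proxy to IPVS mode is mostly a matter of one flag (a sketch; it assumes the IPVS kernel modules are available on the node):

    kube-proxy --master http://localhost:8080 --proxy-mode=ipvs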

k8s/dmuc.md

1252/1692

What about the kubernetes service?

  • If we try to connect, it won't work

    (by default, it should be 10.0.0.1)

  • If we look at the Endpoints for this service, we will see one endpoint:

    host-address:6443

  • By default, the API server expects to be running directly on the nodes

    (it could be as a bare process, or in a container/pod using the host network)

  • ... And it expects to be listening on port 6443 with TLS

k8s/dmuc.md

1253/1692

Image separating from the next chapter

1254/1692

Adding nodes to the cluster

(automatically generated title slide)

1255/1692

Adding nodes to the cluster

  • So far, our cluster has only 1 node

  • Let's see what it takes to add more nodes

  • We are going to use another set of machines: kubenet

k8s/multinode.md

1256/1692

The environment

  • We have 3 identical machines: kubenet1, kubenet2, kubenet3

  • The Docker Engine is installed (and running) on these machines

  • The Kubernetes packages are installed, but nothing is running

  • We will use kubenet1 to run the control plane

k8s/multinode.md

1257/1692

The plan

  • Start the control plane on kubenet1

  • Join the 3 nodes to the cluster

  • Deploy and scale a simple web server

  • Log into kubenet1

k8s/multinode.md

1258/1692

Running the control plane

  • We will use a Compose file to start the control plane components
  • Clone the repository containing the workshop materials:

    git clone https://github.com/BretFisher/kubernetes-mastery
  • Go to the compose/simple-k8s-control-plane directory:

    cd kubernetes-mastery/compose/simple-k8s-control-plane
  • Start the control plane:

    docker-compose up

k8s/multinode.md

1259/1692

Checking the control plane status

  • Before moving on, verify that the control plane works
  • Show control plane component statuses:

    kubectl get componentstatuses
    kubectl get cs
  • Show the (empty) list of nodes:

    kubectl get nodes

k8s/multinode.md

1260/1692

Differences from dmuc

  • Our new control plane listens on 0.0.0.0 instead of the default 127.0.0.1

  • The ServiceAccount admission plugin is disabled

k8s/multinode.md

1261/1692

Joining the nodes

  • We need to generate a kubeconfig file for kubelet

  • This time, we need to put the public IP address of kubenet1

    (instead of localhost or 127.0.0.1)

  • Generate the kubeconfig file:
    kubectl config set-cluster kubenet --server http://X.X.X.X:8080
    kubectl config set-context kubenet --cluster kubenet
    kubectl config use-context kubenet
    cp ~/.kube/config ~/kubeconfig

k8s/multinode.md

1262/1692

Distributing the kubeconfig file

  • We need that kubeconfig file on the other nodes, too
  • Copy kubeconfig to the other nodes:
    for N in 2 3; do
    scp ~/kubeconfig kubenet$N:
    done

k8s/multinode.md

1263/1692

Starting kubelet

  • Reminder: kubelet needs to run as root; don't forget sudo!
  • Join the first node:

    sudo kubelet --kubeconfig ~/kubeconfig
  • Open more terminals and join the other nodes to the cluster:

    ssh kubenet2 sudo kubelet --kubeconfig ~/kubeconfig
    ssh kubenet3 sudo kubelet --kubeconfig ~/kubeconfig

k8s/multinode.md

1264/1692

Checking cluster status

  • We should now see all 3 nodes

  • At first, their STATUS will be NotReady

  • They will move to Ready state after approximately 10 seconds

  • Check the list of nodes:
    kubectl get nodes

k8s/multinode.md

1265/1692

Deploy a web server

  • Let's create a Deployment and scale it

    (so that we have multiple pods on multiple nodes)

  • Create a Deployment running NGINX:

    kubectl create deployment web --image=nginx
  • Scale it:

    kubectl scale deployment web --replicas=5

k8s/multinode.md

1266/1692

Check our pods

  • The pods will be scheduled on the nodes

  • The nodes will pull the nginx image, and start the pods

  • What are the IP addresses of our pods?

  • Check the IP addresses of our pods
    kubectl get pods -o wide
1267/1692

Check our pods

  • The pods will be scheduled on the nodes

  • The nodes will pull the nginx image, and start the pods

  • What are the IP addresses of our pods?

  • Check the IP addresses of our pods
    kubectl get pods -o wide

🤔 Something's not right ... Some pods have the same IP address!

k8s/multinode.md

1268/1692

What's going on?

  • Without the --network-plugin flag, kubelet defaults to "no-op" networking

  • It lets the container engine use a default network

    (in that case, we end up with the default Docker bridge)

  • Our pods are running on independent, disconnected, host-local networks

k8s/multinode.md

1269/1692

What do we need to do?

  • On a normal cluster, kubelet is configured to set up pod networking with CNI plugins

  • This requires:

    • installing CNI plugins

    • writing CNI configuration files

    • running kubelet with --network-plugin=cni

k8s/multinode.md

1270/1692

Using network plugins

  • We need to set up a better network

  • Before diving into CNI, we will use the kubenet plugin

  • This plugin creates a cbr0 bridge and connects the containers to that bridge

  • This plugin allocates IP addresses from a range:

    • either specified to kubelet (e.g. with --pod-cidr)

    • or stored in the node's spec.podCIDR field

See the Kubernetes documentation for more details about the kubenet plugin.

k8s/multinode.md

1271/1692

What kubenet does and does not do

  • It allocates IP addresses to pods locally

    (each node has its own local subnet)

  • It connects the pods to a local bridge

    (pods on the same node can communicate with each other, but not with pods on other nodes)

  • It doesn't set up routing or tunneling

    (we get pods on separated networks; we need to connect them somehow)

  • It doesn't allocate subnets to nodes

    (this can be done manually, or by the controller manager)

k8s/multinode.md

1272/1692

Setting up routing or tunneling

  • On each node, we will add routes to the other nodes' pod network

  • Of course, this is not convenient or scalable!

  • We will see better techniques to do this; but for now, hang on!

k8s/multinode.md

1273/1692

Allocating subnets to nodes

  • There are multiple options:

    • passing the subnet to kubelet with the --pod-cidr flag

    • manually setting spec.podCIDR on each node

    • allocating node CIDRs automatically with the controller manager

  • The last option would be implemented by adding these flags to controller manager:

    --allocate-node-cidrs=true --cluster-cidr=<cidr>
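
For example, if we wanted the controller manager to hand out a /24 per node from a bigger range, the invocation could look like this (a sketch; the 10.99.0.0/16 range is an arbitrary example, not a value from these labs):

    kube-controller-manager --master http://localhost:8080 \
        --allocate-node-cidrs=true --cluster-cidr=10.99.0.0/16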

k8s/multinode.md

1274/1692

The pod CIDR field is not mandatory

  • kubenet needs the pod CIDR, but other plugins don't need it

    (e.g. because they allocate addresses in multiple pools, or a single big one)

  • The pod CIDR field may eventually be deprecated and replaced by an annotation

    (see kubernetes/kubernetes#57130)

k8s/multinode.md

1275/1692

Restarting kubelet with pod CIDR

  • We need to stop and restart all our kubelets

  • We will add the --network-plugin and --pod-cidr flags

  • We each have a "cluster number" (let's call it C) printed on our VM info card

  • We will use pod CIDR 10.C.N.0/24 (where N is the node number: 1, 2, 3)

  • Stop all the kubelets (Ctrl-C is fine)

  • Restart them all, adding --network-plugin=kubenet --pod-cidr 10.C.N.0/24

k8s/multinode.md

1276/1692

What happens to our pods?

  • When we stop (or kill) kubelet, the containers keep running

  • When kubelet starts again, it detects the containers

  • Check that our pods are still here:
    kubectl get pods -o wide

🤔 But our pods still use local IP addresses!

k8s/multinode.md

1277/1692

Recreating the pods

  • The IP address of a pod cannot change

  • kubelet doesn't automatically kill/restart containers with "invalid" addresses
    (in fact, from kubelet's point of view, there is no such thing as an "invalid" address)

  • We must delete our pods and recreate them

  • Delete all the pods, and let the ReplicaSet recreate them:

    kubectl delete pods --all
  • Wait for the pods to be up again:

    kubectl get pods -o wide -w

k8s/multinode.md

1278/1692

Adding kube-proxy

  • Let's start kube-proxy to provide internal load balancing

  • Then see if we can create a Service and use it to contact our pods

  • Start kube-proxy:

    sudo kube-proxy --kubeconfig ~/.kube/config
  • Expose our Deployment:

    kubectl expose deployment web --port=80

k8s/multinode.md

1279/1692

Test internal load balancing

  • Retrieve the ClusterIP address:

    kubectl get svc web
  • Send a few requests to the ClusterIP address (with curl)

1280/1692

Test internal load balancing

  • Retrieve the ClusterIP address:

    kubectl get svc web
  • Send a few requests to the ClusterIP address (with curl)

Sometimes it works, sometimes it doesn't. Why?

k8s/multinode.md

1281/1692

Routing traffic

  • Our pods have new, distinct IP addresses

  • But they are on host-local, isolated networks

  • If we try to ping a pod on a different node, it won't work

  • kube-proxy merely rewrites the destination IP address

  • But we need that IP address to be reachable in the first place

  • How do we fix this?

    (hint: check the title of this slide!)

k8s/multinode.md

1282/1692

Important warning

  • The technique that we are about to use doesn't work everywhere

  • It only works if:

    • all the nodes are directly connected to each other (at layer 2)

    • the underlying network allows traffic to/from the IP addresses of our pods

  • If we are on physical machines connected by a switch: OK

  • If we are on virtual machines in a public cloud: NOT OK

    • on AWS, we need to disable "source and destination checks" on our instances

    • on OpenStack, we need to disable "port security" on our network ports

k8s/multinode.md

1283/1692

Routing basics

  • We need to tell each node:

    "The subnet 10.C.N.0/24 is located on node N" (for all values of N)

  • This is how we add a route on Linux:

    ip route add 10.C.N.0/24 via W.X.Y.Z

    (where W.X.Y.Z is the internal IP address of node N)

  • We can see the internal IP addresses of our nodes with:

    kubectl get nodes -o wide
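
As a hedged example, here is what that could look like when run as root on kubenet1, assuming cluster number C=10 and node names matching the hostnames (adapt the subnets, and skip your own node's subnet when repeating this on kubenet2 and kubenet3):

    C=10   # hypothetical cluster number; use the one on your card
    for N in 2 3; do
      IP=$(kubectl get node kubenet$N \
        -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
      ip route add 10.$C.$N.0/24 via $IP
    done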

k8s/multinode.md

1284/1692

Firewalling

  • By default, Docker prevents containers from using arbitrary IP addresses

    (by setting up iptables rules)

  • We need to allow our containers to use our pod CIDR

  • For simplicity, we will insert a blanket iptables rule allowing all traffic:

    iptables -I FORWARD -j ACCEPT

  • This has to be done on every node

k8s/multinode.md

1285/1692

Setting up routing

  • Create all the routes on all the nodes

  • Insert the iptables rule allowing traffic

  • Check that you can ping all the pods from one of the nodes

  • Check that you can curl the ClusterIP of the Service successfully

k8s/multinode.md

1286/1692

What's next?

  • We did a lot of manual operations:

    • allocating subnets to nodes

    • adding command-line flags to kubelet

    • updating the routing tables on our nodes

  • We want to automate all these steps

  • We want something that works on all networks

k8s/multinode.md

1287/1692

Image separating from the next chapter

1288/1692

API server availability

(automatically generated title slide)

1289/1692

API server availability

  • When we set up a node, we need the address of the API server:

    • for kubelet

    • for kube-proxy

    • sometimes for the pod network system (like kube-router)

  • How do we ensure the availability of that endpoint?

    (what if the node running the API server goes down?)

k8s/apilb.md

1290/1692

Option 1: external load balancer

  • Set up an external load balancer

  • Point kubelet (and other components) to that load balancer

  • Put the node(s) running the API server behind that load balancer

  • Update the load balancer if/when an API server node needs to be replaced

  • On cloud infrastructures, some mechanisms provide automation for this

    (e.g. on AWS, an Elastic Load Balancer + Auto Scaling Group)

  • Example in Kubernetes The Hard Way

k8s/apilb.md

1291/1692

Option 2: local load balancer

  • Set up a load balancer (like NGINX, HAProxy...) on each node

  • Configure that load balancer to send traffic to the API server node(s)

  • Point kubelet (and other components) to localhost

  • Update the load balancer configuration when API server nodes are updated

k8s/apilb.md

1292/1692

Updating the local load balancer config

  • Distribute the updated configuration (push)

  • Or regularly check for updates (pull)

  • The latter requires an external, highly available store

    (it could be an object store, an HTTP server, or even DNS...)

  • Updates can be facilitated by a DaemonSet

    (but remember that it can't be used when installing a new node!)

k8s/apilb.md

1293/1692

Option 3: DNS records

  • Put all the API server nodes behind a round-robin DNS

  • Point kubelet (and other components) to that name

  • Update the records when needed

  • Note: this option is not officially supported

    (but since kubelet supports reconnection anyway, it should work)

k8s/apilb.md

1294/1692

Option 4: ....................

  • Many managed clusters expose a high-availability API endpoint

    (and you don't have to worry about it)

  • You can also use HA mechanisms that you're familiar with

    (e.g. virtual IPs)

  • Tunnels are also fine

    (e.g. k3s uses a tunnel to allow each node to contact the API server)

k8s/apilb.md

1295/1692

Image separating from the next chapter

1296/1692

Static pods

(automatically generated title slide)

1297/1692

Static pods

  • Hosting the Kubernetes control plane on Kubernetes has advantages:

    • we can use Kubernetes' replication and scaling features for the control plane

    • we can leverage rolling updates to upgrade the control plane

  • However, there is a catch:

    • deploying on Kubernetes requires the API to be available

    • the API won't be available until the control plane is deployed

  • How can we get out of that chicken-and-egg problem?

k8s/staticpods.md

1298/1692

A possible approach

  • Since each component of the control plane can be replicated...

  • We could set up the control plane outside of the cluster

  • Then, once the cluster is fully operational, create replicas running on the cluster

  • Finally, remove the replicas that are running outside of the cluster

What could possibly go wrong?

k8s/staticpods.md

1299/1692

Sawing off the branch you're sitting on

  • What if anything goes wrong?

    (During the setup or at a later point)

  • Worst case scenario, we might need to:

    • set up a new control plane (outside of the cluster)

    • restore a backup from the old control plane

    • move the new control plane to the cluster (again)

  • This doesn't sound like a great experience

k8s/staticpods.md

1300/1692

Static pods to the rescue

  • Pods are started by kubelet (an agent running on every node)

  • To know which pods it should run, the kubelet queries the API server

  • The kubelet can also get a list of static pods from:

    • a directory containing one (or multiple) manifests, and/or

    • a URL (serving a manifest)

  • These "manifests" are basically YAML definitions

    (As produced by kubectl get pod my-little-pod -o yaml)
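
On the command line, the manifest directory is typically given to kubelet with the --pod-manifest-path flag (or with the staticPodPath field of the kubelet configuration file), e.g.:

    kubelet --kubeconfig ~/.kube/config --pod-manifest-path /etc/kubernetes/manifests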

k8s/staticpods.md

1301/1692

Static pods are dynamic

  • Kubelet will periodically reload the manifests

  • It will start/stop pods accordingly

    (i.e. it is not necessary to restart the kubelet after updating the manifests)

  • When connected to the Kubernetes API, the kubelet will create mirror pods

  • Mirror pods are copies of the static pods

    (so they can be seen with e.g. kubectl get pods)

k8s/staticpods.md

1302/1692

Bootstrapping a cluster with static pods

  • We can run control plane components with these static pods

  • They can start without requiring access to the API server

  • Once they are up and running, the API becomes available

  • These pods are then visible through the API

    (We cannot upgrade them from the API, though)

This is how kubeadm has initialized our clusters.

k8s/staticpods.md

1303/1692

Static pods vs normal pods

  • The API only gives us read-only access to static pods

  • We can kubectl delete a static pod...

    ...But the kubelet will re-mirror it immediately

  • Static pods can be selected just like other pods

    (So they can receive service traffic)

  • A service can select a mixture of static and other pods

k8s/staticpods.md

1304/1692

From static pods to normal pods

  • Once the control plane is up and running, it can be used to create normal pods

  • We can then set up a copy of the control plane in normal pods

  • Then the static pods can be removed

  • The scheduler and the controller manager use leader election

    (Only one is active at a time; removing an instance is seamless)

  • Each instance of the API server adds itself to the kubernetes service

  • Etcd will typically require more work!

k8s/staticpods.md

1305/1692

From normal pods back to static pods

  • Alright, but what if the control plane is down and we need to fix it?

  • We restart it using static pods!

  • This can be done automatically with the Pod Checkpointer

  • The Pod Checkpointer automatically generates manifests of running pods

  • The manifests are used to restart these pods if API contact is lost

    (More details in the Pod Checkpointer documentation page)

  • This technique is used by bootkube

k8s/staticpods.md

1306/1692

Where should the control plane run?

Is it better to run the control plane in static pods, or normal pods?

  • If I'm a user of the cluster: I don't care, it makes no difference to me

  • What if I'm an admin, i.e. the person who installs, upgrades, repairs... the cluster?

  • If I'm using a managed Kubernetes cluster (AKS, EKS, GKE...) it's not my problem

    (I'm not the one setting up and managing the control plane)

  • If I already picked a tool (kubeadm, kops...) to set up my cluster, the tool decides for me

  • What if I haven't picked a tool yet, or if I'm installing from scratch?

    • static pods = easier to set up, easier to troubleshoot, less risk of outage

    • normal pods = easier to upgrade, easier to move (if nodes need to be shut down)

k8s/staticpods.md

1307/1692

Static pods in action

  • On our clusters, the staticPodPath is /etc/kubernetes/manifests
  • Have a look at this directory:
    ls -l /etc/kubernetes/manifests

We should see YAML files corresponding to the pods of the control plane.

k8s/staticpods.md

1308/1692

Running a static pod

  • We are going to add a pod manifest to the directory, and kubelet will run it
  • Copy a manifest to the directory:

    sudo cp ~/container.training/k8s/just-a-pod.yaml /etc/kubernetes/manifests
  • Check that it's running:

    kubectl get pods

The output should include a pod named hello-node1.

k8s/staticpods.md

1309/1692

Remarks

In the manifest, the pod was named hello.

apiVersion: v1
kind: Pod
metadata:
  name: hello
  namespace: default
spec:
  containers:
  - name: hello
    image: nginx

The -node1 suffix was added automatically by kubelet.

If we delete the pod (with kubectl delete), it will be recreated immediately.

To delete the pod, we need to delete (or move) the manifest file.

k8s/staticpods.md

1310/1692

Owners and dependents

  • Some objects are created by other objects

    (example: pods created by replica sets, themselves created by deployments)

  • When an owner object is deleted, its dependents are deleted

    (this is the default behavior; it can be changed)

  • We can delete a dependent directly if we want

    (but generally, the owner will recreate another right away)

  • An object can have multiple owners

k8s/owners-and-dependents.md

1311/1692

Finding out the owners of an object

  • The owners are recorded in the field ownerReferences in the metadata block
  • Let's create a deployment running nginx:

    kubectl create deployment yanginx --image=nginx
  • Scale it to a few replicas:

    kubectl scale deployment yanginx --replicas=3
  • Once it's up, check the corresponding pods:

    kubectl get pods -l app=yanginx -o yaml | head -n 25

These pods are owned by a ReplicaSet named yanginx-xxxxxxxxxx.

k8s/owners-and-dependents.md

1312/1692

Listing objects with their owners

  • This is a good opportunity to try the custom-columns output!
  • Show all pods with their owners:
    kubectl get pod -o custom-columns=\
    NAME:.metadata.name,\
    OWNER-KIND:.metadata.ownerReferences[0].kind,\
    OWNER-NAME:.metadata.ownerReferences[0].name

Note: the custom-columns option should be one long option (without spaces), so the lines should not be indented (otherwise the indentation will insert spaces).

k8s/owners-and-dependents.md

1313/1692

Deletion policy

  • When deleting an object through the API, three policies are available:

    • foreground (API call returns after all dependents are deleted)

    • background (API call returns immediately; dependents are scheduled for deletion)

    • orphan (the dependents are not deleted)

  • When deleting an object with kubectl, this is selected with --cascade:

    • --cascade=true deletes all dependent objects (default)

    • --cascade=false orphans dependent objects

k8s/owners-and-dependents.md

1314/1692

What happens when an object is deleted

  • It is removed from the list of owners of its dependents

  • If, for one of these dependents, the list of owners becomes empty ...

    • if the policy is "orphan", the object stays

    • otherwise, the object is deleted

k8s/owners-and-dependents.md

1315/1692

Orphaning pods

  • We are going to delete the Deployment and Replica Set that we created

  • ... without deleting the corresponding pods!

  • Delete the Deployment:

    kubectl delete deployment -l app=yanginx --cascade=false
  • Delete the Replica Set:

    kubectl delete replicaset -l app=yanginx --cascade=false
  • Check that the pods are still here:

    kubectl get pods

k8s/owners-and-dependents.md

1316/1692

When and why would we have orphans?

  • If we remove an owner and explicitly instruct the API to orphan dependents

    (like on the previous slide)

  • If we change the labels on a dependent, so that it's not selected anymore

    (e.g. change the app: yanginx in the pods of the previous example)

  • If a deployment tool that we're using does these things for us

  • If there is a serious problem within API machinery or other components

    (i.e. "this should not happen")

k8s/owners-and-dependents.md

1317/1692

Finding orphan objects

  • We're going to output all pods in JSON format

  • Then we will use jq to keep only the ones without an owner

  • And we will display their name

  • List all pods that do not have an owner:
    kubectl get pod -o json | jq -r "
    .items[]
    | select(.metadata.ownerReferences|not)
    | .metadata.name"

k8s/owners-and-dependents.md

1318/1692

Deleting orphan pods

  • Now that we can list orphan pods, deleting them is easy
  • Add | xargs kubectl delete pod to the previous command:
    kubectl get pod -o json | jq -r "
    .items[]
    | select(.metadata.ownerReferences|not)
    | .metadata.name" | xargs kubectl delete pod

As always, the documentation has useful extra information and pointers.

k8s/owners-and-dependents.md

1319/1692

Exposing HTTP services with Ingress resources

  • Services give us a way to access a pod or a set of pods

  • Services can be exposed to the outside world:

    • with type NodePort (on a port in the 30000-32767 range, by default)

    • with type LoadBalancer (allocating an external load balancer)

  • What about HTTP services?

    • how can we expose webui, rng, hasher?

    • the Kubernetes dashboard?

    • a new version of webui?

k8smastery/taints.md

1320/1692

Exposing HTTP services

  • If we use NodePort services, clients have to specify port numbers

    (i.e. http://xxxxx:31234 instead of just http://xxxxx)

  • LoadBalancer services are nice, but:

    • they are not available in all environments

    • they often carry an additional cost (e.g. they provision an ELB)

    • They often work at OSI Layer 4 (IP+Port) and not Layer 7 (HTTP/S)

    • they require one extra step for DNS integration
      (waiting for the LoadBalancer to be provisioned; then adding it to DNS)

  • We could build our own reverse proxy

k8smastery/taints.md

1321/1692

Building a custom reverse proxy

  • There are many options available:

    Apache, HAProxy, Envoy, NGINX, Traefik, ...

  • Most of these options require us to update/edit configuration files after each change

  • Some of them can pick up virtual hosts and backends from a configuration store

  • Wouldn't it be nice if this configuration could be managed with the Kubernetes API?

1322/1692

Building a custom reverse proxy

  • There are many options available:

    Apache, HAProxy, Envoy, NGINX, Traefik, ...

  • Most of these options require us to update/edit configuration files after each change

  • Some of them can pick up virtual hosts and backends from a configuration store

  • Wouldn't it be nice if this configuration could be managed with the Kubernetes API?

  • Enter¹ Ingress resources!

¹ Pun maybe intended.

k8smastery/taints.md

1323/1692

Ingress resources

  • Kubernetes API resource (kubectl get ingress/ingresses/ing)

  • Designed to expose HTTP services

  • Basic features:

    • load balancing
    • SSL termination
    • name-based virtual hosting
  • Can also route to different services depending on:

    • URI path (e.g. /api → api-service, /static → assets-service)
    • Client headers, including cookies (for A/B testing, canary deployment...)
    • and more!
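
As a hedged sketch, path-based routing uses the same resource kind; the host and service names below are hypothetical, and the schema matches the v1beta1 example shown later in this chapter:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: path-routing-demo
    spec:
      rules:
      - host: myapp.A.B.C.D.nip.io
        http:
          paths:
          - path: /api
            backend:
              serviceName: api-service
              servicePort: 80
          - path: /static
            backend:
              serviceName: assets-service
              servicePort: 80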

k8smastery/taints.md

1324/1692

Principle of operation

  • Step 1: deploy an ingress controller

    • ingress controller = load balancer + control loop

    • the control loop watches over ingress resources, and configures the LB accordingly

  • Step 2: set up DNS

    • associate DNS entries with the load balancer address
  • Step 3: create ingress resources

    • the ingress controller picks up these resources and configures the LB
  • Step 4: profit!

k8smastery/taints.md

1325/1692

Ingress in action

  • We will deploy the Traefik ingress controller

    • this is an arbitrary choice, the docs list over a dozen options

    • maybe motivated by the fact that Traefik releases are named after cheeses

  • For DNS, we will use nip.io

    • *.1.2.3.4.nip.io resolves to 1.2.3.4
  • We will create ingress resources for various HTTP services

k8smastery/taints.md

1326/1692

Deploying pods listening on port 80

k8smastery/taints.md

1327/1692

Without hostNetwork

  • Normally, each pod gets its own network namespace

    (sometimes called sandbox or network sandbox)

  • An IP address is assigned to the pod

  • This IP address is routed/connected to the cluster network

  • All containers of that pod are sharing that network namespace

    (and therefore using the same IP address)

k8smastery/taints.md

1328/1692

With hostNetwork: true

  • No network namespace gets created

  • The pod is using the network namespace of the host

  • It "sees" (and can use) the interfaces (and IP addresses) of the host

  • The pod can receive outside traffic directly, on any port

  • Downside: with most network plugins, network policies won't work for that pod

    • most network policies work at the IP address level

    • filtering that pod = filtering traffic from the node
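
In a pod spec, enabling this is a single field; a minimal fragment (the container shown is just a placeholder):

    spec:
      hostNetwork: true
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80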

k8smastery/taints.md

1329/1692

Running Traefik

  • The Traefik documentation tells us to pick between Deployment and Daemon Set

  • We are going to use a Daemon Set so that each node can accept connections

  • We will do two minor changes to the YAML provided by Traefik:

    • enable hostNetwork

    • add a toleration so that Traefik also runs on node1
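
One way to express that toleration in the DaemonSet's pod template, matching the taint we inspect on the next slides (the actual traefik.yaml may phrase it slightly differently):

    tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule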

k8smastery/taints.md

1330/1692

Taints and tolerations

  • A taint is an attribute added to a node

  • It prevents pods from running on the node

  • ... Unless they have a matching toleration

  • When deploying with kubeadm:

    • a taint is placed on the node dedicated to the control plane

    • the pods running the control plane have a matching toleration

k8smastery/taints.md

1331/1692

Checking taints on our nodes

  • Check our nodes specs:
    kubectl get node node1 -o json | jq .spec
    kubectl get node node2 -o json | jq .spec

We should see a result only for node1 (the one with the control plane):

"taints": [
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master"
}
]

k8smastery/taints.md

1332/1692

Understanding a taint

  • The key can be interpreted as:

    • a reservation for a special set of pods
      (here, this means "this node is reserved for the control plane")

    • an error condition on the node
      (for instance: "disk full," do not start new pods here!)

  • The effect can be:

    • NoSchedule (don't run new pods here)

    • PreferNoSchedule (try not to run new pods here)

    • NoExecute (don't run new pods and evict running pods)

k8smastery/taints.md

1333/1692

Checking tolerations on the control plane

  • Check tolerations for CoreDNS:
    kubectl -n kube-system get deployments coredns -o json |
    jq .spec.template.spec.tolerations

The result should include:

{
  "effect": "NoSchedule",
  "key": "node-role.kubernetes.io/master"
}

It means: "bypass the exact taint that we saw earlier on node1."

k8smastery/taints.md

1334/1692

Special tolerations

  • Check tolerations on kube-proxy:
    kubectl -n kube-system get ds kube-proxy -o json |
    jq .spec.template.spec.tolerations

The result should include:

{
"operator": "Exists"
}

This one is a special case that means "ignore all taints and run anyway."

k8smastery/taints.md

1335/1692

Running Traefik on our cluster

  • Apply the YAML:
    kubectl apply -f ~/container.training/k8s/traefik.yaml

k8smastery/taints.md

1336/1692

Checking that Traefik runs correctly

  • If Traefik started correctly, we now have a web server listening on each node
  • Check that Traefik is serving 80/tcp:
    curl localhost

We should get a 404 page not found error.

This is normal: we haven't provided any ingress rule yet.

k8smastery/taints.md

1337/1692

Setting up DNS

  • To make our lives easier, we will use nip.io

  • Check out http://cheddar.A.B.C.D.nip.io

    (replacing A.B.C.D with the IP address of node1)

  • We should get the same 404 page not found error

    (meaning that our DNS is "set up properly", so to speak!)

k8smastery/taints.md

1338/1692

Traefik web UI

  • Traefik provides a web dashboard

  • With the current install method, it's listening on port 8080

  • Go to http://node1:8080 (replacing node1 with its IP address)

k8smastery/taints.md

1339/1692

Setting up host-based routing ingress rules

  • We are going to use bretfisher/cheese images

    (there are 3 tags available: wensleydale, cheddar, stilton)

  • These images contain a simple static HTTP server sending a picture of cheese

  • We will run 3 deployments (one for each cheese)

  • We will create 3 services (one for each deployment)

  • Then we will create 3 ingress rules (one for each service)

  • We will route <name-of-cheese>.A.B.C.D.nip.io to the corresponding deployment

k8smastery/taints.md

1340/1692

Running cheesy web servers

  • Run all three deployments:

    kubectl create deployment cheddar --image=bretfisher/cheese:cheddar
    kubectl create deployment stilton --image=bretfisher/cheese:stilton
    kubectl create deployment wensleydale --image=bretfisher/cheese:wensleydale
  • Create a service for each of them:

    kubectl expose deployment cheddar --port=80
    kubectl expose deployment stilton --port=80
    kubectl expose deployment wensleydale --port=80

k8smastery/taints.md

1341/1692

What does an ingress resource look like?

Here is a minimal host-based ingress resource:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
  - host: cheddar.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: cheddar
          servicePort: 80

(It is in k8s/ingress.yaml.)

k8smastery/taints.md

1342/1692

Creating our first ingress resources

  • Edit the file ~/container.training/k8s/ingress.yaml

  • Replace A.B.C.D with the IP address of node1

  • Apply the file

  • Open http://cheddar.A.B.C.D.nip.io

(An image of a piece of cheese should show up.)

k8smastery/taints.md

1343/1692

Creating the other ingress resources

  • Edit the file ~/container.training/k8s/ingress.yaml

  • Replace cheddar with stilton (in name, host, serviceName)

  • Apply the file

  • Check that stilton.A.B.C.D.nip.io works correctly

  • Repeat for wensleydale

k8smastery/taints.md

1344/1692

Using multiple ingress controllers

  • You can have multiple ingress controllers active simultaneously

    (e.g. Traefik and NGINX)

  • You can even have multiple instances of the same controller

    (e.g. one for internal, another for external traffic)

  • The kubernetes.io/ingress.class annotation can be used to tell which one to use

  • It's OK if multiple ingress controllers configure the same resource

    (it just means that the service will be accessible through multiple paths)
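
The annotation goes in the Ingress metadata; for example, to target the Traefik controller we deployed earlier (the class value shown is the conventional one for Traefik, but check your controller's documentation):

    metadata:
      name: cheddar
      annotations:
        kubernetes.io/ingress.class: traefik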

k8smastery/taints.md

1345/1692

Ingress: the good

  • The traffic flows directly from the ingress load balancer to the backends

    • it doesn't need to go through the ClusterIP

    • in fact, we don't even need a ClusterIP (we can use a headless service)

  • The load balancer can be outside of Kubernetes

    (as long as it has access to the cluster subnet)

  • This allows the use of external (hardware, physical machines...) load balancers

  • Annotations can encode special features

    (rate-limiting, A/B testing, session stickiness, etc.)

k8smastery/taints.md

1346/1692

Ingress: the bad

k8smastery/taints.md

1347/1692

Image separating from the next chapter

1348/1692

Upgrading clusters

(automatically generated title slide)

1349/1692

Upgrading clusters

  • It's recommended to run consistent versions across a cluster

    (mostly to have feature parity and latest security updates)

  • It's not mandatory

    (otherwise, cluster upgrades would be a nightmare!)

  • Components can be upgraded one at a time without problems

k8s/cluster-upgrade.md

1350/1692

Checking what we're running

  • It's easy to check the version for the API server
  • Log into node test1

  • Check the version of kubectl and of the API server:

    kubectl version
  • In a HA setup with multiple API servers, they can have different versions

  • Running the command above multiple times can return different values

k8s/cluster-upgrade.md

1351/1692

Node versions

  • It's also easy to check the version of kubelet
  • Check node versions (includes kubelet, kernel, container engine):
    kubectl get nodes -o wide
  • Different nodes can run different kubelet versions

  • Different nodes can run different kernel versions

  • Different nodes can run different container engines

k8s/cluster-upgrade.md

1352/1692

Control plane versions

  • If the control plane is self-hosted (running in pods), we can check it
  • Show image versions for all pods in kube-system namespace:
    kubectl --namespace=kube-system get pods -o json \
    | jq -r '
    .items[]
    | [.spec.nodeName, .metadata.name]
    +
    (.spec.containers[].image | split(":"))
    | @tsv
    ' \
    | column -t

k8s/cluster-upgrade.md

1353/1692

What version are we running anyway?

  • When I say, "I'm running Kubernetes 1.15", is that the version of:

    • kubectl

    • API server

    • kubelet

    • controller manager

    • something else?

k8s/cluster-upgrade.md

1354/1692

Other versions that are important

  • etcd

  • kube-dns or CoreDNS

  • CNI plugin(s)

  • Network controller, network policy controller

  • Container engine

  • Linux kernel

k8s/cluster-upgrade.md

1355/1692

General guidelines

  • To update a component, use whatever was used to install it

  • If it's a distro package, update that distro package

  • If it's a container or pod, update that container or pod

  • If you used configuration management, update with that

k8s/cluster-upgrade.md

1356/1692

Know where your binaries come from

  • Sometimes, we need to upgrade quickly

    (when a vulnerability is announced and patched)

  • If we are using an installer, we should:

    • make sure it's using upstream packages

    • or make sure that whatever packages it uses are current

    • make sure we can tell it to pin specific component versions

k8s/cluster-upgrade.md

1357/1692

Important questions

  • Should we upgrade the control plane before or after the kubelets?

  • Within the control plane, should we upgrade the API server first or last?

  • How often should we upgrade?

  • How long are versions maintained?

  • All the answers are in the documentation about version skew policy!

  • Let's review the key elements together ...

k8s/cluster-upgrade.md

1358/1692

Kubernetes uses semantic versioning

  • Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.17.2:

    • MAJOR = 1
    • MINOR = 17
    • PATCH = 2
  • It's always possible to mix and match different PATCH releases

    (e.g. 1.16.1 and 1.16.6 are compatible)

  • It is recommended to run the latest PATCH release

    (but it's mandatory only when there is a security advisory)

k8s/cluster-upgrade.md

1359/1692

Version skew

  • API server must be more recent than its clients (kubelet and control plane)

  • ... Which means it must always be upgraded first

  • All components support a difference of one¹ MINOR version

  • This allows live upgrades (since we can mix e.g. 1.15 and 1.16)

  • It also means that going from 1.14 to 1.16 requires going through 1.15

¹Except kubelet, which can be up to two MINOR behind API server, and kubectl, which can be one MINOR ahead or behind API server.

k8s/cluster-upgrade.md

1360/1692

Release cycle

  • There is a new PATCH release whenever necessary

    (every few weeks, or "ASAP" when there is a security vulnerability)

  • There is a new MINOR release every 3 months (approximately)

  • At any given time, three MINOR releases are maintained

  • ... Which means that MINOR releases are maintained approximately 9 months

  • We should expect to upgrade at least every 3 months (on average)

k8s/cluster-upgrade.md

1361/1692

In practice

  • We are going to update a few cluster components

  • We will change the kubelet version on one node

  • We will change the version of the API server

  • We will work with cluster test (nodes test1, test2, test3)

k8s/cluster-upgrade.md

1362/1692

Updating the API server

  • This cluster has been deployed with kubeadm

  • The control plane runs in static pods

  • These pods are started automatically by kubelet

    (even when kubelet can't contact the API server)

  • They are defined in YAML files in /etc/kubernetes/manifests

    (this path is set by a kubelet command-line flag)

  • kubelet automatically updates the pods when the files are changed

k8s/cluster-upgrade.md

1363/1692

Changing the API server version

  • We will edit the YAML file to use a different image version
  • Log into node test1

  • Check API server version:

    kubectl version
  • Edit the API server pod manifest:

    sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
  • Look for the image: line, and update it to e.g. v1.16.0
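For reference, on a kubeadm cluster the line in question looks something like this (the registry and the exact tag will vary):

    image: k8s.gcr.io/kube-apiserver:v1.16.0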

k8s/cluster-upgrade.md

1364/1692

Checking what we've done

  • The API server will be briefly unavailable while kubelet restarts it
  • Check the API server version:
    kubectl version

k8s/cluster-upgrade.md

1365/1692

Was that a good idea?

1366/1692

Was that a good idea?

No!

1367/1692

Was that a good idea?

No!

  • Remember the guideline we gave earlier:

    To update a component, use whatever was used to install it.

  • This control plane was deployed with kubeadm

  • We should use kubeadm to upgrade it!

k8s/cluster-upgrade.md

1368/1692

Updating the whole control plane

  • Let's make it right, and use kubeadm to upgrade the entire control plane

    (note: this is possible only because the cluster was installed with kubeadm)

  • Check what will be upgraded:
    sudo kubeadm upgrade plan

Note 1: kubeadm thinks that our cluster is running 1.16.0.
It is confused by our manual upgrade of the API server!

Note 2: kubeadm itself is still version 1.15.9.
It doesn't know how to upgrade to 1.16.X.

k8s/cluster-upgrade.md

1369/1692

Upgrading kubeadm

  • First things first: we need to upgrade kubeadm
  • Upgrade kubeadm:

    sudo apt install kubeadm
  • Check what kubeadm tells us:

    sudo kubeadm upgrade plan

Problem: kubeadm doesn't know how to handle upgrades from version 1.15.

This is because we installed version 1.17 (or even later).

We need to install kubeadm version 1.16.X.

k8s/cluster-upgrade.md

1370/1692

Downgrading kubeadm

  • We need to go back to version 1.16.X (e.g. 1.16.6)
  • View available versions for package kubeadm:

    apt show kubeadm -a | grep ^Version | grep 1.16
  • Downgrade kubeadm:

    sudo apt install kubeadm=1.16.6-00
  • Check what kubeadm tells us:

    sudo kubeadm upgrade plan

kubeadm should now agree to upgrade to 1.16.6.

k8s/cluster-upgrade.md

1371/1692

Upgrading the cluster with kubeadm

  • Ideally, we should revert our image: change

    (so that kubeadm executes the right migration steps)

  • Or we can try the upgrade anyway

  • Perform the upgrade:
    sudo kubeadm upgrade apply v1.16.6

k8s/cluster-upgrade.md

1372/1692

Updating kubelet

  • These nodes have been installed using the official Kubernetes packages

  • We can therefore use apt or apt-get

  • Log into node test3

  • View available versions for package kubelet:

    apt show kubelet -a | grep ^Version
  • Upgrade kubelet:

    sudo apt install kubelet=1.16.6-00

k8s/cluster-upgrade.md

1373/1692

Checking what we've done

  • Log into node test1

  • Check node versions:

    kubectl get nodes -o wide
  • Create a deployment and scale it to make sure that the node still works
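For example (a throwaway deployment; the name and image are arbitrary, and we clean up afterwards):

kubectl create deployment testnode --image=nginx
kubectl scale deployment testnode --replicas=3
kubectl get pods -o wide -l app=testnode    # check that pods get scheduled and run fine
kubectl delete deployment testnode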

k8s/cluster-upgrade.md

1374/1692

Was that a good idea?

1375/1692

Was that a good idea?

Almost!

1376/1692

Was that a good idea?

Almost!

  • Yes, kubelet was installed with distribution packages

  • However, kubeadm took care of configuring kubelet

    (when doing kubeadm join ...)

  • We were supposed to run a special command before upgrading kubelet!

  • That command should be executed on each node

  • It will download the kubelet configuration generated by kubeadm

k8s/cluster-upgrade.md

1377/1692

Upgrading kubelet the right way

  • We need to upgrade kubeadm, upgrade kubelet config, then upgrade kubelet

    (after upgrading the control plane)

  • Download the configuration on each node, and upgrade kubelet:
    for N in 1 2 3; do
    ssh test$N "
    sudo apt install kubeadm=1.16.6-00 &&
    sudo kubeadm upgrade node &&
    sudo apt install kubelet=1.16.6-00"
    done

k8s/cluster-upgrade.md

1378/1692

Checking what we've done

  • All our nodes should now be updated to version 1.16.6
  • Check node versions:
    kubectl get nodes -o wide

k8s/cluster-upgrade.md

1379/1692

Skipping versions

  • This example worked because we went from 1.15 to 1.16

  • If you are upgrading from e.g. 1.14, you will have to go through 1.15 first

  • This means upgrading kubeadm to 1.15.X, then using it to upgrade the cluster

  • Then upgrading kubeadm to 1.16.X, etc.

  • Make sure to read the release notes before upgrading!

k8s/cluster-upgrade.md

1380/1692

Image separating from the next chapter

1381/1692

Backing up clusters

(automatically generated title slide)

1382/1692

Backing up clusters

  • Backups can have multiple purposes:

    • disaster recovery (servers or storage are destroyed or unreachable)

    • error recovery (human or process has altered or corrupted data)

    • cloning environments (for testing, validation...)

  • Let's see the strategies and tools available with Kubernetes!

k8s/cluster-backup.md

1383/1692

Important

  • Kubernetes helps us with disaster recovery

    (it gives us replication primitives)

  • Kubernetes helps us clone / replicate environments

    (all resources can be described with manifests)

  • Kubernetes does not help us with error recovery

  • We still need to back up/snapshot our data:

    • with database backups (mysqldump, pgdump, etc.)

    • and/or snapshots at the storage layer

    • and/or traditional full disk backups
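To illustrate the database-level option (a minimal sketch: the pod name mysql-0 and the MYSQL_ROOT_PASSWORD variable are hypothetical and depend on how the database was deployed):

kubectl exec mysql-0 -- \
        sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
        > backup-$(date +%F).sql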

k8s/cluster-backup.md

1384/1692

In a perfect world ...

  • The deployment of our Kubernetes clusters is automated

    (recreating a cluster takes less than a minute of human time)

  • All the resources (Deployments, Services...) on our clusters are under version control

    (never use kubectl run; always apply YAML files coming from a repository)

  • Stateful components are either:

    • stored on systems with regular snapshots

    • backed up regularly to an external, durable storage

    • outside of Kubernetes

k8s/cluster-backup.md

1385/1692

Kubernetes cluster deployment

  • If our deployment system isn't fully automated, it should at least be documented

  • Litmus test: how long does it take to deploy a cluster...

    • for a senior engineer?

    • for a new hire?

  • Does it require external intervention?

    (e.g. provisioning servers, signing TLS certs...)

k8s/cluster-backup.md

1386/1692

Plan B

  • Full machine backups of the control plane can help

  • If the control plane is in pods (or containers), pay attention to storage drivers

    (if the backup mechanism is not container-aware, the backups can take way more resources than they should, or even be unusable!)

  • If the previous sentence worries you:

    automate the deployment of your clusters!

k8s/cluster-backup.md

1387/1692

Managing our Kubernetes resources

  • Ideal scenario:

    • never create a resource directly on a cluster

    • push to a code repository

    • a special branch (production or even master) gets automatically deployed

  • Some folks call this "GitOps"

    (it's the logical evolution of configuration management and infrastructure as code)

k8s/cluster-backup.md

1388/1692

GitOps in theory

  • What do we keep in version control?

  • For very simple scenarios: source code, Dockerfiles, scripts

  • For real applications: add resources (as YAML files)

  • For applications deployed multiple times: Helm, Kustomize...

    (staging and production count as "multiple times")

k8s/cluster-backup.md

1389/1692

GitOps tooling

  • Various tools exist (Weave Flux, GitKube...)

  • These tools are still very young

  • You still need to write YAML for all your resources

  • There is no tool to:

    • list all resources in a namespace

    • get resource YAML in a canonical form

    • diff YAML descriptions with current state
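In the meantime, "list all resources in a namespace" can be approximated with a shell one-liner like this (a rough sketch; it is slow, ignores some corner cases, and does not produce a canonical form):

kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl get -n default --ignore-not-found -o name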

k8s/cluster-backup.md

1390/1692

GitOps in practice

  • Start describing your resources with YAML

  • Leverage a tool like Kustomize or Helm

  • Make sure that you can easily deploy to a new namespace

    (or even better: to a new cluster)

  • When tooling matures, you will be ready

k8s/cluster-backup.md

1391/1692

Plan B

  • What if we can't describe everything with YAML?

  • What if we manually create resources and forget to commit them to source control?

  • What about global resources, that don't live in a namespace?

  • How can we be sure that we saved everything?

k8s/cluster-backup.md

1392/1692

Backing up etcd

  • All objects are saved in etcd

  • etcd data should be relatively small

    (and therefore, quick and easy to back up)

  • Two options to back up etcd:

    • snapshot the data directory

    • use etcdctl snapshot

k8s/cluster-backup.md

1393/1692

Making an etcd snapshot

  • The basic command is simple:

    etcdctl snapshot save <filename>
  • But we also need to specify:

    • an environment variable to specify that we want etcdctl v3

    • the address of the server to back up

    • the path to the key, certificate, and CA certificate
      (if our etcd uses TLS certificates)

k8s/cluster-backup.md

1394/1692

Snapshotting etcd on kubeadm

  • The following command will work on clusters deployed with kubeadm

    (and maybe others)

  • It should be executed on a master node

docker run --rm --net host -v $PWD:/vol \
-v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd:ro \
-e ETCDCTL_API=3 k8s.gcr.io/etcd:3.3.10 \
etcdctl --endpoints=https://[127.0.0.1]:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
snapshot save /vol/snapshot
  • It will create a file named snapshot in the current directory

k8s/cluster-backup.md

1395/1692

How can we remember all these flags?

  • Look at the static pod manifest for etcd

    (in /etc/kubernetes/manifests)

  • The healthcheck probe is calling etcdctl with all the right flags 😉👍✌️

  • Exercise: write the YAML for a batch job to perform the backup
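A possible solution could look like the following (a hedged sketch, not an official answer: the image tag, node label, tolerations, and host paths assume the kubeadm layout used above, and /var/tmp/etcd-backup is an arbitrary destination on the master node):

kubectl apply -f- <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  template:
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      restartPolicy: Never
      containers:
      - name: etcdctl
        image: k8s.gcr.io/etcd:3.3.10
        env:
        - name: ETCDCTL_API
          value: "3"
        command:
        - etcdctl
        - --endpoints=https://127.0.0.1:2379
        - --cacert=/etc/kubernetes/pki/etcd/ca.crt
        - --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
        - --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
        - snapshot
        - save
        - /backup/snapshot
        volumeMounts:
        - name: etcd-pki
          mountPath: /etc/kubernetes/pki/etcd
          readOnly: true
        - name: backup
          mountPath: /backup
      volumes:
      - name: etcd-pki
        hostPath:
          path: /etc/kubernetes/pki/etcd
      - name: backup
        hostPath:
          path: /var/tmp/etcd-backup
EOF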

k8s/cluster-backup.md

1396/1692

Restoring an etcd snapshot

  • Execute exactly the same command, but replacing save with restore

    (Believe it or not, doing that will not do anything useful!)

  • The restore command does not load a snapshot into a running etcd server

  • The restore command creates a new data directory from the snapshot

    (it's an offline operation; it doesn't interact with an etcd server)

  • It will create a new data directory in a temporary container

    (leaving the running etcd node untouched)

k8s/cluster-backup.md

1397/1692

When using kubeadm

  1. Create a new data directory from the snapshot:

    sudo rm -rf /var/lib/etcd
    docker run --rm -v /var/lib:/var/lib -v $PWD:/vol \
    -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.3.10 \
    etcdctl snapshot restore /vol/snapshot --data-dir=/var/lib/etcd
  2. Provision the control plane, using that data directory:

    sudo kubeadm init \
    --ignore-preflight-errors=DirAvailable--var-lib-etcd
  3. Rejoin the other nodes

k8s/cluster-backup.md

1398/1692

The fine print

  • This only saves etcd state

  • It does not save persistent volumes and local node data

  • Some critical components (like the pod network) might need to be reset

  • As a result, our pods might have to be recreated, too

  • If we have proper liveness checks, this should happen automatically

k8s/cluster-backup.md

1399/1692

More information about etcd backups

k8s/cluster-backup.md

1400/1692

Don't forget ...

  • Also back up the TLS information

    (at the very least: CA key and cert; API server key and cert)

  • With clusters provisioned by kubeadm, this is in /etc/kubernetes/pki

  • If you don't:

    • you will still be able to restore etcd state and bring everything back up

    • you will need to redistribute user certificates

TLS information is highly sensitive!
Anyone who has it has full access to your cluster!
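With that warning in mind, a simple archive stored next to the etcd snapshot can be enough (a minimal sketch; keep the archive on encrypted, durable storage):

sudo tar czf pki-backup-$(date +%F).tar.gz -C /etc/kubernetes pki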

k8s/cluster-backup.md

1401/1692

Stateful services

  • It's totally fine to keep your production databases outside of Kubernetes

    Especially if you have only one database server!

  • Feel free to put development and staging databases on Kubernetes

    (as long as they don't hold important data)

  • Using Kubernetes for stateful services makes sense if you have many

    (because then you can leverage Kubernetes automation)

k8s/cluster-backup.md

1402/1692

Snapshotting persistent volumes

k8s/cluster-backup.md

1403/1692

More backup tools

  • Stash

    back up Kubernetes persistent volumes

  • ReShifter

    cluster state management

  • Velero (formerly Heptio Ark)

    full cluster backup

  • kube-backup

    simple scripts to save resource YAML to a git repository

  • bivac

    Backup Interface for Volumes Attached to Containers

k8s/cluster-backup.md

1404/1692

Image separating from the next chapter

1405/1692

The Cloud Controller Manager

(automatically generated title slide)

1406/1692

The Cloud Controller Manager

  • Kubernetes has many features that are cloud-specific

    (e.g. providing cloud load balancers when a Service of type LoadBalancer is created)

  • These features were initially implemented in API server and controller manager

  • Since Kubernetes 1.6, these features are available through a separate process:

    the Cloud Controller Manager

  • The CCM is optional, but if we run in a cloud, we probably want it!

k8s/cloud-controller-manager.md

1407/1692

Cloud Controller Manager duties

  • Creating and updating cloud load balancers

  • Configuring routing tables in the cloud network (specific to GCE)

  • Updating node labels to indicate region, zone, instance type...

  • Obtaining node names and internal/external addresses from the cloud metadata service

  • Deleting nodes from Kubernetes when they're deleted in the cloud

  • Managing some volumes (e.g. EBS volumes, Azure Disks...)

    (Eventually, volumes will be managed by the Container Storage Interface)

k8s/cloud-controller-manager.md

1408/1692

In-tree vs. out-of-tree

  • A number of cloud providers are supported "in-tree"

    (in the main kubernetes/kubernetes repository on GitHub)

  • More cloud providers are supported "out-of-tree"

    (with code in different repositories)

  • There is an ongoing effort to move everything to out-of-tree providers

k8s/cloud-controller-manager.md

1409/1692

In-tree providers

The following providers are actively maintained:

  • Amazon Web Services
  • Azure
  • Google Compute Engine
  • IBM Cloud
  • OpenStack
  • VMware vSphere

These ones are less actively maintained:

  • Apache CloudStack
  • oVirt
  • VMware Photon

k8s/cloud-controller-manager.md

1410/1692

Out-of-tree providers

The list includes the following providers:

  • DigitalOcean

  • keepalived (not exactly a cloud; provides VIPs for load balancers)

  • Linode

  • Oracle Cloud Infrastructure

(And possibly others; there is no central registry for these.)

k8s/cloud-controller-manager.md

1411/1692

Audience questions

  • What kind of clouds are you using/planning to use?

  • What kind of details would you like to see in this section?

  • Would you appreciate details on clouds that you don't / won't use?

k8s/cloud-controller-manager.md

1412/1692

Cloud Controller Manager in practice

  • Write a configuration file

    (typically /etc/kubernetes/cloud.conf)

  • Run the CCM process

    (on self-hosted clusters, this can be a DaemonSet selecting the control plane nodes)

  • Start kubelet with --cloud-provider=external

  • When using managed clusters, this is done automatically

  • There is very little documentation on writing the configuration file

    (except for OpenStack)

k8s/cloud-controller-manager.md

1413/1692

Bootstrapping challenges

  • When a node joins the cluster, it needs to obtain a signed TLS certificate

  • That certificate must contain the node's addresses

  • These addresses are provided by the Cloud Controller Manager

    (at least the external address)

  • To get these addresses, the node needs to communicate with the control plane

  • ...Which means joining the cluster

(The problem didn't occur when cloud-specific code was running in kubelet: kubelet could obtain the required information directly from the cloud provider's metadata service.)

k8s/cloud-controller-manager.md

1414/1692

More information about CCM

  • CCM configuration and operation is highly specific to each cloud provider

    (which is why this section remains very generic)

  • The Kubernetes documentation has some information:

k8s/cloud-controller-manager.md

1415/1692

Image separating from the next chapter

1416/1692

Namespaces

(automatically generated title slide)

1417/1692

Namespaces

  • We would like to deploy another copy of DockerCoins on our cluster

  • We could rename all our deployments and services:

    hasher → hasher2, redis → redis2, rng → rng2, etc.

  • That would require updating the code

  • There has to be a better way!

1418/1692

Namespaces

  • We would like to deploy another copy of DockerCoins on our cluster

  • We could rename all our deployments and services:

    hasher → hasher2, redis → redis2, rng → rng2, etc.

  • That would require updating the code

  • There has to be a better way!

  • As hinted by the title of this section, we will use namespaces

k8s/namespaces.md

1419/1692

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

1420/1692

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

1421/1692

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

  • We cannot have two resources of the same kind with the same name in the same namespace

    (but it's OK to have e.g. two rng services in different namespaces)

1422/1692

Identifying a resource

  • We cannot have two resources with the same name

    (or can we...?)

  • We cannot have two resources of the same kind with the same name

    (but it's OK to have an rng service, an rng deployment, and an rng daemon set)

  • We cannot have two resources of the same kind with the same name in the same namespace

    (but it's OK to have e.g. two rng services in different namespaces)

  • Except for resources that exist at the cluster scope

    (these do not belong to a namespace)

k8s/namespaces.md

1423/1692

Uniquely identifying a resource

  • For namespaced resources:

    the tuple (kind, name, namespace) needs to be unique

  • For resources at the cluster scope:

    the tuple (kind, name) needs to be unique

  • List resource types again, and check the NAMESPACED column:
    kubectl api-resources

k8s/namespaces.md

1424/1692

Pre-existing namespaces

  • If we deploy a cluster with kubeadm, we have three or four namespaces:

    • default (for our applications)

    • kube-system (for the control plane)

    • kube-public (contains one ConfigMap for cluster discovery)

    • kube-node-lease (in Kubernetes 1.14 and later; contains Lease objects)

  • If we deploy differently, we may have different namespaces

k8s/namespaces.md

1425/1692

Creating namespaces

  • Let's see two identical methods to create a namespace
  • We can use kubectl create namespace:

    kubectl create namespace blue
  • Or we can construct a very minimal YAML snippet:

    kubectl apply -f- <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: blue
    EOF

k8s/namespaces.md

1426/1692

Using namespaces

  • We can pass a -n or --namespace flag to most kubectl commands:

    kubectl -n blue get svc
  • We can also change our current context

  • A context is a (user, cluster, namespace) tuple

  • We can manipulate contexts with the kubectl config command

k8s/namespaces.md

1427/1692

Viewing existing contexts

  • On our training environments, at this point, there should be only one context
  • View existing contexts to see the cluster name and the current user:
    kubectl config get-contexts
  • The current context (the only one!) is tagged with a *

  • What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?

k8s/namespaces.md

1428/1692

What's in a context

  • NAME is an arbitrary string to identify the context

  • CLUSTER is a reference to a cluster

    (i.e. API endpoint URL, and optional certificate)

  • AUTHINFO is a reference to the authentication information to use

    (i.e. a TLS client certificate, token, or otherwise)

  • NAMESPACE is the namespace

    (empty string = default)

k8s/namespaces.md

1429/1692

Switching contexts

  • We want to use a different namespace

  • Solution 1: update the current context

    This is appropriate if we need to change just one thing (e.g. namespace or authentication).

  • Solution 2: create a new context and switch to it

    This is appropriate if we need to change multiple things and switch back and forth.

  • Let's go with solution 1!

k8s/namespaces.md

1430/1692

Updating a context

  • This is done through kubectl config set-context

  • We can update a context by passing its name, or the current context with --current

  • Update the current context to use the blue namespace:

    kubectl config set-context --current --namespace=blue
  • Check the result:

    kubectl config get-contexts

k8s/namespaces.md

1431/1692

Using our new namespace

  • Let's check that we are in our new namespace, then deploy a new copy of Dockercoins
  • Verify that the new context is empty:
    kubectl get all

k8s/namespaces.md

1432/1692

Deploying DockerCoins with YAML files

  • The GitHub repository jpetazzo/kubercoins contains everything we need!
  • Clone the kubercoins repository:

    cd ~
    git clone https://github.com/jpetazzo/kubercoins
  • Create all the DockerCoins resources:

    kubectl create -f kubercoins

If the argument behind -f is a directory, all the files in that directory are processed.

The subdirectories are not processed, unless we also add the -R flag.

k8s/namespaces.md

1433/1692

Viewing the deployed app

  • Let's see if this worked correctly!
  • Retrieve the port number allocated to the webui service:

    kubectl get svc webui
  • Point our browser to http://X.X.X.X:3xxxx

If the graph shows up but stays at zero, give it a minute or two!

k8s/namespaces.md

1434/1692

Namespaces and isolation

  • Namespaces do not provide isolation

  • A pod in the green namespace can communicate with a pod in the blue namespace

  • A pod in the default namespace can communicate with a pod in the kube-system namespace

  • CoreDNS uses a different subdomain for each namespace

  • Example: from any pod in the cluster, you can connect to the Kubernetes API with:

    https://kubernetes.default.svc.cluster.local:443/

k8s/namespaces.md

1435/1692

Isolating pods

  • Actual isolation is implemented with network policies

  • Network policies are resources (like deployments, services, namespaces...)

  • Network policies specify which flows are allowed:

    • between pods

    • from pods to the outside world

    • and vice-versa

k8s/namespaces.md

1436/1692

Switch back to the default namespace

  • Let's make sure that we don't run future exercises in the blue namespace
  • Switch back to the original context:
    kubectl config set-context --current --namespace=

Note: we could have used --namespace=default for the same result.

k8s/namespaces.md

1437/1692

Switching namespaces more easily

  • We can also use a little helper tool called kubens:

    # Switch to namespace foo
    kubens foo
    # Switch back to the previous namespace
    kubens -
  • On our clusters, kubens is called kns instead

    (so that it's even fewer keystrokes to switch namespaces)

k8s/namespaces.md

1438/1692

kubens and kubectx

  • With kubens, we can switch quickly between namespaces

  • With kubectx, we can switch quickly between contexts

  • Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx

  • On our clusters, they are installed as kns and kctx

    (for brevity and to avoid completion clashes between kubectx and kubectl)

k8s/namespaces.md

1439/1692

kube-ps1

  • It's easy to lose track of our current cluster / context / namespace

  • kube-ps1 makes it easy to track these, by showing them in our shell prompt

  • It is installed on our training clusters, and when using shpod

  • It gives us a prompt looking like this one:

    [123.45.67.89] (kubernetes-admin@kubernetes:default) docker@node1 ~

    (The highlighted part is context:namespace, managed by kube-ps1)

  • Highly recommended if you work across multiple contexts or namespaces!

k8s/namespaces.md

1440/1692

Installing kube-ps1

  • It's a simple shell script available from https://github.com/jonmosco/kube-ps1

  • It needs to be installed in our profile/rc files

    (instructions differ depending on platform, shell, etc.)

  • Once installed, it defines aliases called kube_ps1, kubeon, kubeoff

    (to selectively enable/disable it when needed)

  • Pro-tip: install it on your machine during the next break!
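For example, with bash, the setup can be as simple as this (a sketch; the clone location is arbitrary, and other shells need slightly different instructions):

git clone https://github.com/jonmosco/kube-ps1 ~/.kube-ps1
# then add these two lines to ~/.bashrc:
source ~/.kube-ps1/kube-ps1.sh
PS1='$(kube_ps1) '$PS1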

k8s/namespaces.md

1441/1692

Image separating from the next chapter

1442/1692

Controlling a Kubernetes cluster remotely

(automatically generated title slide)

1443/1692

Controlling a Kubernetes cluster remotely

  • kubectl can be used either on cluster instances or outside the cluster

  • Here, we are going to use kubectl from our local machine

k8s/localkubeconfig.md

1444/1692

Requirements

The exercises in this chapter should be done on your local machine.

  • kubectl is officially available on Linux, macOS, Windows

    (and unofficially anywhere we can build and run Go binaries)

  • You may skip these exercises if you are following along from:

    • a tablet or phone

    • a web-based terminal

    • an environment where you can't install and run new binaries

k8s/localkubeconfig.md

1445/1692

Installing kubectl

  • If you already have kubectl on your local machine, you can skip this
  • Download the kubectl binary from one of these links:

    Linux | macOS | Windows

  • On Linux and macOS, make the binary executable with chmod +x kubectl

    (And remember to run it with ./kubectl or move it to your $PATH)

Note: if you are following along with a different platform (e.g. Linux on an architecture different from amd64, or with a phone or tablet), installing kubectl might be more complicated (or even impossible) so feel free to skip this section.

k8s/localkubeconfig.md

1446/1692

Testing kubectl

  • Check that kubectl works correctly

    (before even trying to connect to a remote cluster!)

  • Ask kubectl to show its version number:
    kubectl version --client

The output should look like this:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0",
GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean",
BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc",
Platform:"darwin/amd64"}

k8s/localkubeconfig.md

1447/1692

Preserving the existing ~/.kube/config

  • If you already have a ~/.kube/config file, rename it

    (we are going to overwrite it in the following slides!)

  • If you never used kubectl on your machine before: nothing to do!

  • Make a copy of ~/.kube/config; if you are using macOS or Linux, you can do:

    cp ~/.kube/config ~/.kube/config.before.training
  • If you are using Windows, you will need to adapt this command

k8s/localkubeconfig.md

1448/1692

Copying the configuration file from node1

  • The ~/.kube/config file that is on node1 contains all the credentials we need

  • Let's copy it over!

  • Copy the file from node1; if you are using macOS or Linux, you can do:

    scp USER@X.X.X.X:.kube/config ~/.kube/config
    # Make sure to replace X.X.X.X with the IP address of node1,
    # and USER with the user name used to log into node1!
  • If you are using Windows, adapt these instructions to your SSH client

k8s/localkubeconfig.md

1449/1692

Updating the server address

  • There is a good chance that we need to update the server address

  • To know if it is necessary, run kubectl config view

  • Look for the server: address:

    • if it matches the public IP address of node1, you're good!

    • if it is anything else (especially a private IP address), update it!

  • To update the server address, run:

    kubectl config set-cluster kubernetes --server=https://X.X.X.X:6443
    # Make sure to replace X.X.X.X with the IP address of node1!

k8s/localkubeconfig.md

1450/1692

What if we get a certificate error?

  • Generally, the Kubernetes API uses a certificate that is valid for:

    • kubernetes
    • kubernetes.default
    • kubernetes.default.svc
    • kubernetes.default.svc.cluster.local
    • the ClusterIP address of the kubernetes service
    • the hostname of the node hosting the control plane (e.g. node1)
    • the IP address of the node hosting the control plane
  • On most clouds, the IP address of the node is an internal IP address

  • ... And we are going to connect over the external IP address

  • ... And that external IP address was not used when creating the certificate!

k8s/localkubeconfig.md

1451/1692

Working around the certificate error

  • We need to tell kubectl to skip TLS verification

    (only do this with testing clusters, never in production!)

  • The following command will do the trick:

    kubectl config set-cluster kubernetes --insecure-skip-tls-verify

k8s/localkubeconfig.md

1452/1692

Checking that we can connect to the cluster

  • We can now run a couple of trivial commands to check that all is well
  • Check the versions of the local client and remote server:

    kubectl version
  • View the nodes of the cluster:

    kubectl get nodes

We can now use the cluster exactly as if we were logged into a node, except that we are accessing it remotely.

k8s/localkubeconfig.md

1453/1692

Image separating from the next chapter

1454/1692

Accessing internal services

(automatically generated title slide)

1455/1692

Accessing internal services

  • When we are logged in on a cluster node, we can access internal services

    (by virtue of the Kubernetes network model: all nodes can reach all pods and services)

  • When we are accessing a remote cluster, things are different

    (generally, our local machine won't have access to the cluster's internal subnet)

  • How can we temporarily access a service without exposing it to everyone?

1456/1692

Accessing internal services

  • When we are logged in on a cluster node, we can access internal services

    (by virtue of the Kubernetes network model: all nodes can reach all pods and services)

  • When we are accessing a remote cluster, things are different

    (generally, our local machine won't have access to the cluster's internal subnet)

  • How can we temporarily access a service without exposing it to everyone?

  • kubectl proxy: gives us access to the API, which includes a proxy for HTTP resources

  • kubectl port-forward: allows forwarding of TCP ports to arbitrary pods, services, ...

k8s/accessinternal.md

1457/1692

Suspension of disbelief

The exercises in this section assume that we have set up kubectl on our local machine in order to access a remote cluster.

We will therefore show how to access services and pods of the remote cluster, from our local machine.

You can also run these exercises directly on the cluster (if you haven't installed and set up kubectl locally).

Running commands locally will be less useful (since you could access services and pods directly), but keep in mind that these commands will work anywhere as long as you have installed and set up kubectl to communicate with your cluster.

k8s/accessinternal.md

1458/1692

kubectl proxy in theory

  • Running kubectl proxy gives us access to the entire Kubernetes API

  • The API includes routes to proxy HTTP traffic

  • These routes look like the following:

    /api/v1/namespaces/<namespace>/services/<service>/proxy

  • We just add the URI to the end of the request, for instance:

    /api/v1/namespaces/<namespace>/services/<service>/proxy/index.html

  • We can access services and pods this way

k8s/accessinternal.md

1459/1692

kubectl proxy in practice

  • Let's access the webui service through kubectl proxy
  • Run an API proxy in the background:

    kubectl proxy &
  • Access the webui service:

    curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html
  • Terminate the proxy:

    kill %1

k8s/accessinternal.md

1460/1692

kubectl port-forward in theory

  • What if we want to access a TCP service?

  • We can use kubectl port-forward instead

  • It will create a TCP relay to forward connections to a specific port

    (of a pod, service, deployment...)

  • The syntax is:

    kubectl port-forward service/name_of_service local_port:remote_port

  • If only one port number is specified, it is used for both local and remote ports

k8s/accessinternal.md

1461/1692

kubectl port-forward in practice

  • Let's access our remote Redis server
  • Forward connections from local port 10000 to remote port 6379:

    kubectl port-forward svc/redis 10000:6379 &
  • Connect to the Redis server:

    telnet localhost 10000
  • Issue a few commands, e.g. INFO server then QUIT

  • Terminate the port forwarder:
    kill %1

k8s/accessinternal.md

1462/1692

Image separating from the next chapter

1463/1692

Accessing the API with kubectl proxy

(automatically generated title slide)

1464/1692

Accessing the API with kubectl proxy

  • The API requires us to authenticate¹

  • There are many authentication methods available, including:

    • TLS client certificates
      (that's what we've used so far)

    • HTTP basic password authentication
      (from a static file; not recommended)

    • various token mechanisms
      (detailed in the documentation)

¹OK, we lied. If you don't authenticate, you are considered to be user system:anonymous, which doesn't have any access rights by default.

k8s/kubectlproxy.md

1465/1692

Accessing the API directly

  • Let's see what happens if we try to access the API directly with curl
  • Retrieve the ClusterIP allocated to the kubernetes service:

    kubectl get svc kubernetes
  • Replace the IP below and try to connect with curl:

    curl -k https://10.96.0.1/

The API will tell us that user system:anonymous cannot access this path.

k8s/kubectlproxy.md

1466/1692

Authenticating to the API

If we wanted to talk to the API, we would need to:

  • extract our TLS key and certificate information from ~/.kube/config

    (the information is in PEM format, encoded in base64)

  • use that information to present our certificate when connecting

    (for instance, with openssl s_client -key ... -cert ... -connect ...)

  • figure out exactly which credentials to use

    (once we start juggling multiple clusters)

  • change that whole process if we're using another authentication method
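To see how tedious this gets, here is a rough sketch (assuming the kubeconfig embeds client-certificate-data and client-key-data for its first user, a Linux-style base64, and that X.X.X.X is the API server address; -k skips server certificate verification for brevity):

kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
curl --cert client.crt --key client.key -k https://X.X.X.X:6443/api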

🤔 There has to be a better way!

k8s/kubectlproxy.md

1467/1692

Using kubectl proxy for authentication

  • kubectl proxy runs a proxy in the foreground

  • This proxy lets us access the Kubernetes API without authentication

    (kubectl proxy adds our credentials on the fly to the requests)

  • This proxy lets us access the Kubernetes API over plain HTTP

  • This is a great tool to learn and experiment with the Kubernetes API

  • ... And for serious uses as well (suitable for one-shot scripts)

  • For unattended use, it's better to create a service account

k8s/kubectlproxy.md

1468/1692

Trying kubectl proxy

  • Let's start kubectl proxy and then do a simple request with curl!
  • Start kubectl proxy in the background:

    kubectl proxy &
  • Access the API's default route:

    curl localhost:8001
  • Terminate the proxy:
    kill %1

The output is a list of available API routes.

k8s/kubectlproxy.md

1469/1692

OpenAPI (fka Swagger)

  • The Kubernetes API serves an OpenAPI Specification

    (OpenAPI was formerly known as Swagger)

  • OpenAPI has many advantages

    (generate client library code, generate test code ...)

  • For us, this means we can explore the API with Swagger UI

    (for instance with the Swagger UI add-on for Firefox)
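For instance, while kubectl proxy is running, we can fetch the specification itself (recent clusters serve it at /openapi/v2; older ones used /swagger.json):

kubectl proxy &
curl -s localhost:8001/openapi/v2 | jq '.info'
kill %1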

k8s/kubectlproxy.md

1470/1692

kubectl proxy is intended for local use

  • By default, the proxy listens on port 8001

    (But this can be changed, or we can tell kubectl proxy to pick a port)

  • By default, the proxy binds to 127.0.0.1

    (Making it unreachable from other machines, for security reasons)

  • By default, the proxy only accepts connections from:

    ^localhost$,^127\.0\.0\.1$,^\[::1\]$

  • This is great when running kubectl proxy locally

  • Not-so-great when you want to connect to the proxy from a remote machine

k8s/kubectlproxy.md

1471/1692

Running kubectl proxy on a remote machine

  • If we wanted to connect to the proxy from another machine, we would need to:

    • bind to INADDR_ANY instead of 127.0.0.1

    • accept connections from any address

  • This is achieved with:

    kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*

Do not do this on a real cluster: it opens full unauthenticated access!

k8s/kubectlproxy.md

1472/1692

Security considerations

  • Running kubectl proxy openly is a huge security risk

  • It is slightly better to run the proxy where you need it

    (and copy credentials, e.g. ~/.kube/config, to that place)

  • It is even better to use a limited account with reduced permissions

k8s/kubectlproxy.md

1473/1692

Good to know ...

  • kubectl proxy also gives access to all internal services

  • Specifically, services are exposed as such:

    /api/v1/namespaces/<namespace>/services/<service>/proxy
  • We can use kubectl proxy to access an internal service in a pinch

    (or, for non HTTP services, kubectl port-forward)

  • This is not very useful when running kubectl directly on the cluster

    (since we could connect to the services directly anyway)

  • But it is very powerful as soon as you run kubectl from a remote machine

k8s/kubectlproxy.md

1474/1692

Image separating from the next chapter

1475/1692

The Container Network Interface

(automatically generated title slide)

1476/1692

The Container Network Interface

  • Allows us to decouple network configuration from Kubernetes

  • Implemented by plugins

  • Plugins are executables that will be invoked by kubelet

  • Plugins are responsible for:

    • allocating IP addresses for containers

    • configuring the network for containers

  • Plugins can be combined and chained when it makes sense

k8s/cni.md

1477/1692

Combining plugins

  • Interface could be created by e.g. vlan or bridge plugin

  • IP address could be allocated by e.g. dhcp or host-local plugin

  • Interface parameters (MTU, sysctls) could be tweaked by the tuning plugin

The reference plugins are available in the containernetworking/plugins repository on GitHub (https://github.com/containernetworking/plugins).

Look in each plugin's directory for its documentation.

k8s/cni.md

1478/1692

How does kubelet know which plugins to use?

  • The plugin (or list of plugins) is set in the CNI configuration

  • The CNI configuration is a single file in /etc/cni/net.d

  • If there are multiple files in that directory, the first one is used

    (in lexicographic order)

  • That path can be changed with the --cni-conf-dir flag of kubelet

k8s/cni.md

1479/1692

CNI configuration in practice

  • When we set up the "pod network" (like Calico, Weave...) it ships a CNI configuration

    (and sometimes, custom CNI plugins)

  • Very often, that configuration (and plugins) is installed automatically

    (by a DaemonSet featuring an initContainer with hostPath volumes)

  • Examples:

k8s/cni.md

1480/1692

Conf vs conflist

  • There are two slightly different configuration formats

  • Basic configuration format:

    • holds configuration for a single plugin
    • typically has a .conf name suffix
    • has a type string field in the top-most structure
    • examples
  • Configuration list format:

    • can hold configuration for multiple (chained) plugins
    • typically has a .conflist name suffix
    • has a plugins list field in the top-most structure
    • examples
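To make the difference concrete, here is what a minimal configuration list could look like (a hedged example loosely based on the reference bridge, host-local, and tuning plugins; the name, subnet, and sysctl value are made up):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.0.0/24"
      }
    },
    {
      "type": "tuning",
      "sysctl": {
        "net.core.somaxconn": "512"
      }
    }
  ]
}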

k8s/cni.md

1481/1692

How plugins are invoked

  • Parameters are given through environment variables, including:

    • CNI_COMMAND: desired operation (ADD, DEL, CHECK, or VERSION)

    • CNI_CONTAINERID: container ID

    • CNI_NETNS: path to network namespace file

    • CNI_IFNAME: what the network interface should be named

  • The network configuration must be provided to the plugin on stdin

    (this avoids race conditions that could happen by passing a file path)

k8s/cni.md

1482/1692

In practice: kube-router

  • We are going to set up a new cluster

  • For this new cluster, we will use kube-router

  • kube-router will provide the "pod network"

    (connectivity with pods)

  • kube-router will also provide internal service connectivity

    (replacing kube-proxy)

k8s/cni.md

1483/1692

How kube-router works

  • Very simple architecture

  • Does not introduce new CNI plugins

    (uses the bridge plugin, with host-local for IPAM)

  • Pod traffic is routed between nodes

    (no tunnel, no new protocol)

  • Internal service connectivity is implemented with IPVS

  • Can provide pod network and/or internal service connectivity

  • kube-router daemon runs on every node

k8s/cni.md

1484/1692

What kube-router does

  • Connect to the API server

  • Obtain the local node's podCIDR

  • Inject it into the CNI configuration file

    (we'll use /etc/cni/net.d/10-kuberouter.conflist)

  • Obtain the addresses of all nodes

  • Establish a full mesh BGP peering with the other nodes

  • Exchange routes over BGP

k8s/cni.md

1485/1692

What's BGP?

  • BGP (Border Gateway Protocol) is the protocol used between internet routers

  • It scales pretty well (it is used to announce the 700k CIDR prefixes of the internet)

  • It is spoken by many hardware routers from many vendors

  • It also has many software implementations (Quagga, Bird, FRR...)

  • Experienced network folks generally know it (and appreciate it)

  • It is also used by Calico (another popular network system for Kubernetes)

  • Using BGP allows us to interconnect our "pod network" with other systems

k8s/cni.md

1486/1692

The plan

  • We'll work in a new cluster (named kuberouter)

  • We will run a simple control plane (like before)

  • ... But this time, the controller manager will allocate podCIDR subnets

    (so that we don't have to manually assign subnets to individual nodes)

  • We will create a DaemonSet for kube-router

  • We will join nodes to the cluster

  • The DaemonSet will automatically start a kube-router pod on each node

k8s/cni.md

1487/1692

Logging into the new cluster

  • Log into node kuberouter1

  • Clone the workshop repository:

    git clone https://github.com/BretFisher/kubernetes-mastery
  • Move to this directory:

    cd kubernetes-mastery/compose/kube-router-k8s-control-plane

k8s/cni.md

1488/1692

Checking the CNI configuration

  • By default, kubelet gets the CNI configuration from /etc/cni/net.d
  • Check the content of /etc/cni/net.d

(On most machines, at this point, /etc/cni/net.d doesn't even exist.)

k8s/cni.md

1489/1692

Our control plane

  • We will use a Compose file to start the control plane

  • It is similar to the one we used with the kubenet cluster

  • The API server is started with --allow-privileged

    (because we will start kube-router in privileged pods)

  • The controller manager is started with extra flags too:

    --allocate-node-cidrs and --cluster-cidr

  • We need to edit the Compose file to set the Cluster CIDR

k8s/cni.md

1490/1692

Starting the control plane

  • Our cluster CIDR will be 10.C.0.0/16

    (where C is our cluster number)

  • Edit the Compose file to set the Cluster CIDR:

    vim docker-compose.yaml
  • Start the control plane:

    docker-compose up

k8s/cni.md

1491/1692

The kube-router DaemonSet

  • In the same directory, there is a kuberouter.yaml file

  • It contains the definition for a DaemonSet and a ConfigMap

  • Before we load it, we also need to edit it

  • We need to indicate the address of the API server

    (because kube-router needs to connect to it to retrieve node information)

k8s/cni.md

1492/1692

Creating the DaemonSet

  • The address of the API server will be http://A.B.C.D:8080

    (where A.B.C.D is the public address of kuberouter1, running the control plane)

  • Edit the YAML file to set the API server address:

    vim kuberouter.yaml
  • Create the DaemonSet:

    kubectl create -f kuberouter.yaml

Note: the DaemonSet won't create any pods (yet) since there are no nodes (yet).

k8s/cni.md

1493/1692

Generating the kubeconfig for kubelet

  • This is similar to what we did for the kubenet cluster
  • Generate the kubeconfig file (replacing X.X.X.X with the address of kuberouter1):
    kubectl config set-cluster cni --server http://X.X.X.X:8080
    kubectl config set-context cni --cluster cni
    kubectl config use-context cni
    cp ~/.kube/config ~/kubeconfig

k8s/cni.md

1494/1692

Distributing kubeconfig

  • We need to copy that kubeconfig file to the other nodes
  • Copy kubeconfig to the other nodes:
    for N in 2 3; do
    scp ~/kubeconfig kuberouter$N:
    done

k8s/cni.md

1495/1692

Starting kubelet

  • We don't need the --pod-cidr option anymore

    (the controller manager will allocate these automatically)

  • We need to pass --network-plugin=cni

  • Join the first node:

    sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
  • Open more terminals and join the other nodes:

    ssh kuberouter2 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
    ssh kuberouter3 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni

k8s/cni.md

1496/1692

Checking the CNI configuration

  • At this point, kube-router should have installed its CNI configuration

    (in /etc/cni/net.d)

  • Check the content of /etc/cni/net.d
  • There should be a file created by kube-router

  • The file should contain the node's podCIDR
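For example:

ls /etc/cni/net.d
cat /etc/cni/net.d/10-kuberouter.conflist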

k8s/cni.md

1497/1692

Setting up a test

  • Let's create a Deployment and expose it with a Service
  • Create a Deployment running a web server:

    kubectl create deployment web --image=jpetazzo/httpenv
  • Scale it so that it spans multiple nodes:

    kubectl scale deployment web --replicas=5
  • Expose it with a Service:

    kubectl expose deployment web --port=8888

k8s/cni.md

1498/1692

Checking that everything works

  • Get the ClusterIP address for the service:

    kubectl get svc web
  • Send a few requests there:

    curl X.X.X.X:8888

Note that if you send multiple requests, they are load-balanced in a round robin manner.

This shows that we are using IPVS (whereas the iptables mode of kube-proxy picks endpoints randomly).

k8s/cni.md

1499/1692

Troubleshooting

  • What if we need to check that everything is working properly?
  • Check the IP addresses of our pods:

    kubectl get pods -o wide
  • Check our routing table:

    route -n
    ip route

We should see the local pod CIDR connected to kube-bridge, and the other nodes' pod CIDRs having individual routes, with each node being the gateway.

k8s/cni.md

1500/1692

More troubleshooting

  • We can also look at the output of the kube-router pods

    (with kubectl logs)

  • kube-router also comes with a special shell that gives lots of useful info

    (we can access it with kubectl exec)

  • But with the current setup of the cluster, these options may not work!

  • Why?

k8s/cni.md

1501/1692

Trying kubectl logs / kubectl exec

  • Try to show the logs of a kube-router pod:

    kubectl -n kube-system logs ds/kube-router
  • Or try to exec into one of the kube-router pods:

    kubectl -n kube-system exec kube-router-xxxxx bash

These commands will give an error message that includes:

dial tcp: lookup kuberouterX on 127.0.0.11:53: no such host

What does that mean?

k8s/cni.md

1502/1692

Internal name resolution

  • To execute these commands, the API server needs to connect to kubelet

  • By default, it creates a connection using the kubelet's name

    (e.g. http://kuberouter1:...)

  • This requires our nodes names to be in DNS

  • We can change that by setting a flag on the API server:

    --kubelet-preferred-address-types=InternalIP

k8s/cni.md

1503/1692

Another way to check the logs

  • We can also ask the logs directly to the container engine

  • First, get the container ID, with docker ps or like this:

    CID=$(docker ps -q \
    --filter label=io.kubernetes.pod.namespace=kube-system \
    --filter label=io.kubernetes.container.name=kube-router)
  • Then view the logs:

    docker logs $CID

k8s/cni.md

1504/1692

Other ways to distribute routing tables

  • We don't need kube-router and BGP to distribute routes

  • The list of nodes (and associated podCIDR subnets) is available through the API

  • This shell snippet generates the commands to add all required routes on a node:

NODES=$(kubectl get nodes -o name | cut -d/ -f2)
for DESTNODE in $NODES; do
if [ "$DESTNODE" != "$HOSTNAME" ]; then
echo $(kubectl get node $DESTNODE -o go-template="
route add -net {{.spec.podCIDR}} gw {{(index .status.addresses 0).address}}")
fi
done
  • This could be useful for embedded platforms with very limited resources

    (or lab environments for learning purposes)

k8s/cni.md

1505/1692

Image separating from the next chapter

1506/1692

Interconnecting clusters

(automatically generated title slide)

1507/1692

Interconnecting clusters

  • We assigned different Cluster CIDRs to each cluster

  • This allows us to connect our clusters together

  • We will leverage kube-router BGP abilities for that

  • We will peer each kube-router instance with a route reflector

  • As a result, we will be able to ping each other's pods

k8s/interco.md

1508/1692

Disclaimers

  • There are many methods to interconnect clusters

  • Depending on your network implementation, you will use different methods

  • The method shown here only works for nodes with direct layer 2 connection

  • We will often need to use tunnels or other network techniques

k8s/interco.md

1509/1692

The plan

  • Someone will start the route reflector

    (typically, that will be the person presenting these slides!)

  • We will update our kube-router configuration

  • We will add a peering with the route reflector

    (instructing kube-router to connect to it and exchange route information)

  • We should see the routes to other clusters on our nodes

    (in the output of e.g. route -n or ip route show)

  • We should be able to ping pods of other nodes

k8s/interco.md

1510/1692

Starting the route reflector

  • Only do this slide if you are doing this on your own

  • There is a Compose file in the compose/frr-route-reflector directory

  • Before continuing, make sure that you have the IP address of the route reflector

k8s/interco.md

1511/1692

Configuring kube-router

  • This can be done in two ways:

    • with command-line flags to the kube-router process

    • with annotations to Node objects

  • We will use the command-line flags

    (because it will automatically propagate to all nodes)

Note: with Calico, this is achieved by creating a BGPPeer CRD.

k8s/interco.md

1512/1692

Updating kube-router configuration

  • We need to pass two command-line flags to the kube-router process
  • Edit the kuberouter.yaml file

  • Add the following flags to the kube-router arguments:

    - "--peer-router-ips=X.X.X.X"
    - "--peer-router-asns=64512"

    (Replace X.X.X.X with the route reflector address)

  • Update the DaemonSet definition:

    kubectl apply -f kuberouter.yaml

k8s/interco.md

1513/1692

Restarting kube-router

  • The DaemonSet will not update the pods automatically

    (it is using the default updateStrategy, which is OnDelete)

  • We will therefore delete the pods

    (they will be recreated with the updated definition)

  • Delete all the kube-router pods:
    kubectl delete pods -n kube-system -l k8s-app=kube-router

Note: the other updateStrategy for a DaemonSet is RollingUpdate.
For critical services, we might want to precisely control the update process.

k8s/interco.md

1514/1692

Checking peering status

  • We can see informative messages in the output of kube-router:

    time="2019-04-07T15:53:56Z" level=info msg="Peer Up"
    Key=X.X.X.X State=BGP_FSM_OPENCONFIRM Topic=Peer
  • We should see the routes of the other clusters show up

  • For debugging purposes, the reflector also exports a route to 1.0.0.2/32

  • That route will show up like this:

    1.0.0.2 172.31.X.Y 255.255.255.255 UGH 0 0 0 eth0
  • We should be able to ping the pods of other clusters!

k8s/interco.md

1515/1692

If we wanted to do more ...

  • kube-router can also export ClusterIP addresses

    (by adding the flag --advertise-cluster-ip)

  • They are exported individually (as /32)

  • This would allow us to easily access other clusters' services

    (without having to resolve the individual addresses of pods)

  • Even better if it's combined with DNS integration

    (to facilitate name → ClusterIP resolution)

k8s/interco.md

1516/1692

Image separating from the next chapter

1517/1692

Network policies

(automatically generated title slide)

1518/1692

Network policies

  • Namespaces help us to organize resources

  • Namespaces do not provide isolation

  • By default, every pod can contact every other pod

  • By default, every service accepts traffic from anyone

  • If we want this to be different, we need network policies

k8s/netpol.md

1519/1692

What's a network policy?

A network policy is defined by the following things.

  • A pod selector indicating which pods it applies to

    e.g.: "all pods in namespace blue with the label zone=internal"

  • A list of ingress rules indicating which inbound traffic is allowed

    e.g.: "TCP connections to ports 8000 and 8080 coming from pods with label zone=dmz, and from the external subnet 4.42.6.0/24, except 4.42.6.5"

  • A list of egress rules indicating which outbound traffic is allowed

A network policy can provide ingress rules, egress rules, or both.

k8s/netpol.md

1520/1692

How do network policies apply?

  • A pod can be "selected" by any number of network policies

  • If a pod isn't selected by any network policy, then its traffic is unrestricted

    (In other words: in the absence of network policies, all traffic is allowed)

  • If a pod is selected by at least one network policy, then all traffic is blocked ...

    ... unless it is explicitly allowed by one of these network policies

k8s/netpol.md

1521/1692

Traffic filtering is flow-oriented

  • Network policies deal with connections, not individual packets

  • Example: to allow HTTP (80/tcp) connections to pod A, you only need an ingress rule

    (You do not need a matching egress rule to allow response traffic to go through)

  • This also applies for UDP traffic

    (Allowing DNS traffic can be done with a single rule)

  • Network policy implementations use stateful connection tracking
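As an illustration, allowing DNS lookups for a group of pods takes a single egress rule like this one (a hedged sketch, not one of the files used later in this section; the zone=restricted label is hypothetical, and remember that selecting pods with an egress policy blocks their other outbound traffic unless it is allowed elsewhere):

kubectl apply -f- <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector:
    matchLabels:
      zone: restricted
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
EOF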

k8s/netpol.md

1522/1692

Pod-to-pod traffic

  • Connections from pod A to pod B have to be allowed by both pods:

    • pod A has to be unrestricted, or allow the connection as an egress rule

    • pod B has to be unrestricted, or allow the connection as an ingress rule

  • As a consequence: if a network policy restricts traffic going from/to a pod,
    the restriction cannot be overridden by a network policy selecting another pod

  • This prevents an entity managing network policies in namespace A (but without permission to do so in namespace B) from adding network policies giving them access to namespace B

k8s/netpol.md

1523/1692

The rationale for network policies

  • In network security, it is generally considered better to "deny all, then allow selectively"

    (The other approach, "allow all, then block selectively" makes it too easy to leave holes)

  • As soon as one network policy selects a pod, the pod enters this "deny all" logic

  • Further network policies can open additional access

  • Good network policies should be scoped as precisely as possible

  • In particular: make sure that the selector is not too broad

    (Otherwise, you end up affecting pods that were otherwise well secured)

k8s/netpol.md

1524/1692

Our first network policy

This is our game plan:

  • run a web server in a pod

  • create a network policy to block all access to the web server

  • create another network policy to allow access only from specific pods

k8s/netpol.md

1525/1692

Running our test web server

  • Let's use the nginx image:
    kubectl create deployment testweb --image=nginx
  • Find out the IP address of the pod with one of these two commands:

    kubectl get pods -o wide -l app=testweb
    IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
  • Check that we can connect to the server:

    curl $IP

The curl command should show us the "Welcome to nginx!" page.

k8s/netpol.md

1526/1692

Adding a very restrictive network policy

  • The policy will select pods with the label app=testweb

  • It will specify an empty list of ingress rules (matching nothing)

  • Apply the policy in this YAML file:

    kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml
  • Check if we can still access the server:

    curl $IP

The curl command should now time out.

k8s/netpol.md

1527/1692

Looking at the network policy

This is the file that we applied:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress: []

k8s/netpol.md

1528/1692

Allowing connections only from specific pods

  • We want to allow traffic from pods with the label run=testcurl

  • Reminder: this label is automatically applied when we do kubectl run testcurl ...

  • Apply another policy:
    kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml

k8s/netpol.md

1529/1692

Looking at the network policy

This is the second file that we applied:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl

k8s/netpol.md

1530/1692

Testing the network policy

  • Let's create pods with, and without, the required label
  • Try to connect to testweb from a pod with the run=testcurl label:

    kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP
  • Try to connect to testweb with a different label:

    kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP

The first command will work (and show the "Welcome to nginx!" page).

The second command will fail and time out after 3 seconds.

(The timeout is obtained with the -m3 option.)

k8s/netpol.md

1531/1692

An important warning

  • Some network plugins only have partial support for network policies

  • For instance, Weave added support for egress rules in version 2.4 (released in July 2018)

  • But only recently added support for ipBlock in version 2.5 (released in Nov 2018)

  • Unsupported features might be silently ignored

    (Making you believe that you are secure, when you're not)

k8s/netpol.md

1532/1692

Network policies, pods, and services

  • Network policies apply to pods

  • A service can select multiple pods

    (And load balance traffic across them)

  • It is possible that we can connect to some pods, but not to others

    (Because of how network policies have been defined for these pods)

  • In that case, connections to the service will randomly pass or fail

    (Depending on whether the connection was sent to a pod that we have access to or not)

k8s/netpol.md

1533/1692

Network policies and namespaces

  • A good strategy is to isolate a namespace, so that:

    • all the pods in the namespace can communicate together

    • other namespaces cannot access the pods

    • external access has to be enabled explicitly

  • Let's see what this would look like for the DockerCoins app!

k8s/netpol.md

1534/1692

Network policies for DockerCoins

  • We are going to apply two policies

  • The first policy will prevent traffic from other namespaces

  • The second policy will allow traffic to the webui pods

  • That's all we need for that app!

k8s/netpol.md

1535/1692

Blocking traffic from other namespaces

This policy selects all pods in the current namespace.

It allows traffic only from pods in the current namespace.

(An empty podSelector means "all pods.")

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}

k8s/netpol.md

1536/1692

Allowing traffic to webui pods

This policy selects all pods with label app=webui.

It allows traffic from any source.

(An empty from field means "all sources.")

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      app: webui
  ingress:
  - from: []

k8s/netpol.md

1537/1692

Applying both network policies

  • Both network policies are declared in the file k8s/netpol-dockercoins.yaml
  • Apply the network policies:

    kubectl apply -f ~/container.training/k8s/netpol-dockercoins.yaml
  • Check that we can still access the web UI from outside
    (and that the app is still working correctly!)

  • Check that we can't connect anymore to rng or hasher through their ClusterIP

Note: using kubectl proxy or kubectl port-forward allows us to connect regardless of existing network policies. This allows us to debug and troubleshoot easily, without having to poke holes in our firewall.

k8s/netpol.md

1538/1692

Cleaning up our network policies

  • The network policies that we have installed block all traffic to the default namespace

  • We should remove them, otherwise further exercises will fail!

  • Remove all network policies:
    kubectl delete networkpolicies --all

k8s/netpol.md

1539/1692

Protecting the control plane

  • Should we add network policies to block unauthorized access to the control plane?

    (etcd, API server, etc.)

1540/1692

Protecting the control plane

  • Should we add network policies to block unauthorized access to the control plane?

    (etcd, API server, etc.)

  • At first, it seems like a good idea ...

1541/1692

Protecting the control plane

  • Should we add network policies to block unauthorized access to the control plane?

    (etcd, API server, etc.)

  • At first, it seems like a good idea ...

  • But it shouldn't be necessary:

    • not all network plugins support network policies

    • the control plane is secured by other methods (mutual TLS, mostly)

    • the code running in our pods can reasonably expect to contact the API
      (and it can do so safely thanks to the API permission model)

  • If we block access to the control plane, we might disrupt legitimate code

  • ...Without necessarily improving security

k8s/netpol.md

1542/1692

Further resources

k8s/netpol.md

1543/1692

Image separating from the next chapter

1544/1692

Authentication and authorization

(automatically generated title slide)

1545/1692

Authentication and authorization

And first, a little refresher!

  • Authentication = verifying the identity of a person

    On a UNIX system, we can authenticate with login+password, SSH keys ...

  • Authorization = listing what they are allowed to do

    On a UNIX system, this can include file permissions, sudoer entries ...

  • Sometimes abbreviated as "authn" and "authz"

  • In good modular systems, these things are decoupled

    (so we can e.g. change a password or SSH key without having to reset access rights)

k8s/authn-authz.md

1546/1692

Authentication in Kubernetes

  • When the API server receives a request, it tries to authenticate it

    (it examines headers, certificates... anything available)

  • Many authentication methods are available and can be used simultaneously

    (we will see them on the next slide)

  • It's the job of the authentication method to produce:

    • the user name
    • the user ID
    • a list of groups
  • The API server doesn't interpret these; that'll be the job of authorizers

k8s/authn-authz.md

1547/1692

Authentication methods

  • TLS client certificates

    (that's what we've been doing with kubectl so far)

  • Bearer tokens

    (a secret token in the HTTP headers of the request)

  • HTTP basic auth

    (carrying user and password in an HTTP header)

  • Authentication proxy

    (sitting in front of the API and setting trusted headers)

k8s/authn-authz.md

1548/1692

Anonymous requests

  • If any authentication method rejects a request, it's denied

    (401 Unauthorized HTTP code)

  • If a request is neither rejected nor accepted by anyone, it's anonymous

    • the user name is system:anonymous

    • the list of groups is [system:unauthenticated]

  • By default, the anonymous user can't do anything

    (that's what you get if you just curl the Kubernetes API)

k8s/authn-authz.md

1549/1692

Authentication with TLS certificates

  • This is enabled in most Kubernetes deployments

  • The user name is derived from the CN in the client certificates

  • The groups are derived from the O fields in the client certificate

  • From the point of view of the Kubernetes API, users do not exist

    (i.e. they are not stored in etcd or anywhere else)

  • Users can be created (and added to groups) independently of the API

  • The Kubernetes API can be set up to use your custom CA to validate client certs

k8s/authn-authz.md

1550/1692

Viewing our admin certificate

  • Let's inspect the certificate we've been using all this time!
  • This command will show the CN and O fields for our certificate:
    kubectl config view \
    --raw \
    -o json \
    | jq -r .users[0].user[\"client-certificate-data\"] \
    | openssl base64 -d -A \
    | openssl x509 -text \
    | grep Subject:

Let's break down that command together! 😅

k8s/authn-authz.md

1551/1692

Breaking down the command

  • kubectl config view shows the Kubernetes user configuration
  • --raw includes certificate information (which shows as REDACTED otherwise)
  • -o json outputs the information in JSON format
  • | jq ... extracts the field with the user certificate (in base64)
  • | openssl base64 -d -A decodes the base64 format (now we have a PEM file)
  • | openssl x509 -text parses the certificate and outputs it as plain text
  • | grep Subject: shows us the line that interests us

→ We are user kubernetes-admin, in group system:masters.

(We will see later how and why this gives us the permissions that we have.)

k8s/authn-authz.md

1552/1692

User certificates in practice

  • The Kubernetes API server does not support certificate revocation

    (see issue #18982)

  • As a result, we don't have an easy way to terminate someone's access

    (if their key is compromised, or they leave the organization)

  • Option 1: re-create a new CA and re-issue everyone's certificates
    → Maybe OK if we only have a few users; no way otherwise

  • Option 2: don't use groups; grant permissions to individual users
    → Inconvenient if we have many users and teams; error-prone

  • Option 3: issue short-lived certificates (e.g. 24 hours) and renew them often
    → This can be facilitated by e.g. Vault or by the Kubernetes CSR API

k8s/authn-authz.md

1553/1692

Authentication with tokens

  • Tokens are passed as HTTP headers:

    Authorization: Bearer and-then-here-comes-the-token

  • Tokens can be validated through a number of different methods:

    • static tokens hard-coded in a file on the API server

    • bootstrap tokens (special case to create a cluster or join nodes)

    • OpenID Connect tokens (to delegate authentication to compatible OAuth2 providers)

    • service accounts (these deserve more details, coming right up!)

k8s/authn-authz.md

1554/1692

Service accounts

  • A service account is a user that exists in the Kubernetes API

    (it is visible with e.g. kubectl get serviceaccounts)

  • Service accounts can therefore be created / updated dynamically

    (they don't require hand-editing a file and restarting the API server)

  • A service account is associated with a set of secrets

    (the kind that you can view with kubectl get secrets)

  • Service accounts are generally used to grant permissions to applications, services...

    (as opposed to humans)

k8s/authn-authz.md

1555/1692

Token authentication in practice

  • We are going to list existing service accounts

  • Then we will extract the token for a given service account

  • And we will use that token to authenticate with the API

k8s/authn-authz.md

1556/1692

Listing service accounts

  • The resource name is serviceaccount or sa for short:
    kubectl get sa

There should be just one service account in the default namespace: default.

k8s/authn-authz.md

1557/1692

Finding the secret

  • List the secrets for the default service account:
    kubectl get sa default -o yaml
    SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)

It should be named default-token-XXXXX.

k8s/authn-authz.md

1558/1692

Extracting the token

  • The token is stored in the secret, wrapped with base64 encoding
  • View the secret:

    kubectl get secret $SECRET -o yaml
  • Extract the token and decode it:

    TOKEN=$(kubectl get secret $SECRET -o json \
    | jq -r .data.token | openssl base64 -d -A)

k8s/authn-authz.md

1559/1692

Using the token

  • Let's send a request to the API, without and with the token
  • Find the ClusterIP for the kubernetes service:

    kubectl get svc kubernetes
    API=$(kubectl get svc kubernetes -o json | jq -r .spec.clusterIP)
  • Connect without the token:

    curl -k https://$API
  • Connect with the token:

    curl -k -H "Authorization: Bearer $TOKEN" https://$API

k8s/authn-authz.md

1560/1692

Results

  • In both cases, we will get a "Forbidden" error

  • Without authentication, the user is system:anonymous

  • With authentication, it is shown as system:serviceaccount:default:default

  • The API "sees" us as a different user

  • But neither user has any rights, so we can't do nothin'

  • Let's change that!

k8s/authn-authz.md

1561/1692

Authorization in Kubernetes

k8s/authn-authz.md

1562/1692

Role-based access control

  • RBAC allows us to specify fine-grained permissions

  • Permissions are expressed as rules

  • A rule is a combination of:

    • verbs like create, get, list, update, delete...

    • resources (as in "API resource," like pods, nodes, services...)

    • resource names (to specify e.g. one specific pod instead of all pods)

    • in some cases, subresources (e.g. logs are subresources of pods)
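
For illustration, a single rule combining these elements could look like this (the pod name my-pod is made up):

rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  resourceNames: ["my-pod"]
  verbs: ["get"]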

k8s/authn-authz.md

1563/1692

From rules to roles to rolebindings

  • A role is an API object containing a list of rules

    Example: role "external-load-balancer-configurator" can:

    • [list, get] resources [endpoints, services, pods]
    • [update] resources [services]
  • A rolebinding associates a role with a user

    Example: rolebinding "external-load-balancer-configurator":

    • associates user "external-load-balancer-configurator"
    • with role "external-load-balancer-configurator"
  • Yes, there can be users, roles, and rolebindings with the same name

  • It's a good idea for 1-1-1 bindings; not so much for 1-N ones
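
A sketch of what that role and rolebinding could look like in YAML (binding to a User here is an assumption; it could just as well be a ServiceAccount):

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-load-balancer-configurator
rules:
- apiGroups: [""]
  resources: ["endpoints", "services", "pods"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-load-balancer-configurator
subjects:
- kind: User
  name: external-load-balancer-configurator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: external-load-balancer-configurator
  apiGroup: rbac.authorization.k8s.io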

k8s/authn-authz.md

1564/1692

Cluster-scope permissions

  • API resources Role and RoleBinding are for objects within a namespace

  • We can also define API resources ClusterRole and ClusterRoleBinding

  • These are a superset, allowing us to:

    • specify actions on cluster-wide objects (like nodes)

    • operate across all namespaces

  • We can create Role and RoleBinding resources within a namespace

  • ClusterRole and ClusterRoleBinding resources are global
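
For instance, a minimal ClusterRole for a cluster-wide resource could look like this (the name node-viewer is made up):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]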

k8s/authn-authz.md

1565/1692

Pods and service accounts

  • A pod can be associated with a service account

    • by default, it is associated with the default service account

    • as we saw earlier, this service account has no permissions anyway

  • The associated token is exposed to the pod's filesystem

    (in /var/run/secrets/kubernetes.io/serviceaccount/token)

  • Standard Kubernetes tooling (like kubectl) will look for it there

  • So Kubernetes tools running in a pod will automatically use the service account
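
A minimal sketch of a Pod using a specific ServiceAccount (both names are made up for this example):

apiVersion: v1
kind: Pod
metadata:
  name: tooling
spec:
  serviceAccountName: viewer
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "3600"]

The token of that ServiceAccount then gets mounted at the path mentioned above.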

k8s/authn-authz.md

1566/1692

In practice

  • We are going to create a service account

  • We will use a default cluster role (view)

  • We will bind together this role and this service account

  • Then we will run a pod using that service account

  • In this pod, we will install kubectl and check our permissions

k8s/authn-authz.md

1567/1692

Creating a service account

  • We will call the new service account viewer

    (note that nothing prevents us from calling it view, like the role)

  • Create the new service account:

    kubectl create serviceaccount viewer
  • List service accounts now:

    kubectl get serviceaccounts

k8s/authn-authz.md

1568/1692

Binding a role to the service account

  • Binding a role = creating a rolebinding object

  • We will call that object viewercanview

    (but again, we could call it view)

  • Create the new role binding:
    kubectl create rolebinding viewercanview \
    --clusterrole=view \
    --serviceaccount=default:viewer

It's important to note a couple of details in these flags...

k8s/authn-authz.md

1569/1692

Roles vs Cluster Roles

  • We used --clusterrole=view

  • What would have happened if we had used --role=view?

    • we would have bound the role view from the local namespace
      (instead of the cluster role view)

    • the command would have worked fine (no error)

    • but later, our API requests would have been denied

  • This is a deliberate design decision

    (we can reference roles that don't exist, and create/update them later)

k8s/authn-authz.md

1570/1692

Users vs Service Accounts

  • We used --serviceaccount=default:viewer

  • What would have happened if we had used --user=default:viewer?

    • we would have bound the role to a user instead of a service account

    • again, the command would have worked fine (no error)

    • ...but our API requests would have been denied later

  • What about the default: prefix?

    • that's the namespace of the service account

    • yes, it could be inferred from context, but... kubectl requires it

k8s/authn-authz.md

1571/1692

Testing

  • We will run an alpine pod and install kubectl there
  • Run a one-time pod:

    kubectl run eyepod --rm -ti --restart=Never \
    --serviceaccount=viewer \
    --image alpine
  • Install curl, then use it to install kubectl:

    apk add --no-cache curl
    URLBASE=https://storage.googleapis.com/kubernetes-release/release
    KUBEVER=$(curl -s $URLBASE/stable.txt)
    curl -LO $URLBASE/$KUBEVER/bin/linux/amd64/kubectl
    chmod +x kubectl

k8s/authn-authz.md

1572/1692

Running kubectl in the pod

  • We'll try to use our view permissions, then to create an object
  • Check that we can, indeed, view things:

    ./kubectl get all
  • But that we can't create things:

    ./kubectl create deployment testrbac --image=nginx
  • Exit the container with exit or ^D

k8s/authn-authz.md

1573/1692

Testing directly with kubectl

  • We can also check for permission with kubectl auth can-i:

    kubectl auth can-i list nodes
    kubectl auth can-i create pods
    kubectl auth can-i get pod/name-of-pod
    kubectl auth can-i get /url-fragment-of-api-request/
    kubectl auth can-i '*' services
  • And we can check permissions on behalf of other users:

    kubectl auth can-i list nodes \
    --as some-user
    kubectl auth can-i list nodes \
    --as system:serviceaccount:<namespace>:<name-of-service-account>

k8s/authn-authz.md

1574/1692

Where does this view role come from?

  • Kubernetes defines a number of ClusterRoles intended to be bound to users

  • cluster-admin can do everything (think root on UNIX)

  • admin can do almost everything (except e.g. changing resource quotas and limits)

  • edit is similar to admin, but cannot view or edit permissions

  • view has read-only access to most resources, except permissions and secrets

In many situations, these roles will be all you need.

You can also customize them!

k8s/authn-authz.md

1575/1692

Customizing the default roles

  • If you need to add permissions to these default roles (or others),
    you can do it through the ClusterRole Aggregation mechanism

  • This happens by creating a ClusterRole with the following labels:

    metadata:
      labels:
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
  • This ClusterRole's permissions will be added to admin/edit/view respectively

  • This is particularly useful when using CustomResourceDefinitions

    (since Kubernetes cannot guess which resources are sensitive and which ones aren't)
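
For example, a complete aggregated ClusterRole for a hypothetical custom resource could look like this (the example.com API group and the widgets resource are made up):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: view-widgets
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]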

k8s/authn-authz.md

1576/1692

Where do our permissions come from?

  • When interacting with the Kubernetes API, we are using a client certificate

  • We saw previously that this client certificate contained:

    CN=kubernetes-admin and O=system:masters

  • Let's look for these in existing ClusterRoleBindings:

    kubectl get clusterrolebindings -o yaml |
    grep -e kubernetes-admin -e system:masters

    (system:masters should show up, but not kubernetes-admin.)

  • Where does this match come from?

k8s/authn-authz.md

1577/1692

The system:masters group

  • If we eyeball the output of kubectl get clusterrolebindings -o yaml, we'll find out!

  • It is in the cluster-admin binding:

    kubectl describe clusterrolebinding cluster-admin
  • This binding associates system:masters with the cluster role cluster-admin

  • And the cluster-admin is, basically, root:

    kubectl describe clusterrole cluster-admin

k8s/authn-authz.md

1578/1692

Figuring out who can do what

  • For auditing purposes, sometimes we want to know who can perform an action

  • There are a few tools to help us with that

  • These tools are available as standalone programs, or as plugins for kubectl

    (kubectl plugins can be installed and managed with krew)

k8s/authn-authz.md

1579/1692

Image separating from the next chapter

1580/1692

Pod Security Policies

(automatically generated title slide)

1581/1692

Pod Security Policies

  • By default, our pods and containers can do everything

    (including taking over the entire cluster)

  • We are going to show an example of a malicious pod

  • Then we will explain how to avoid this with PodSecurityPolicies

  • We will enable PodSecurityPolicies on our cluster

  • We will create a couple of policies (restricted and permissive)

  • Finally we will see how to use them to improve security on our cluster

k8s/podsecuritypolicy.md

1582/1692

Setting up a namespace

  • For simplicity, let's work in a separate namespace

  • Let's create a new namespace called "green"

  • Create the "green" namespace:

    kubectl create namespace green
  • Change to that namespace:

    kns green

k8s/podsecuritypolicy.md

1583/1692

Creating a basic Deployment

  • Just to check that everything works correctly, deploy NGINX
  • Create a Deployment using the official NGINX image:

    kubectl create deployment web --image=nginx
  • Confirm that the Deployment, ReplicaSet, and Pod exist, and that the Pod is running:

    kubectl get all

k8s/podsecuritypolicy.md

1584/1692

One example of malicious pods

  • We will now show an escalation technique in action

  • We will deploy a DaemonSet that adds our SSH key to the root account

    (on each node of the cluster)

  • The Pods of the DaemonSet will do so by mounting /root from the host

  • Check the file k8s/hacktheplanet.yaml with a text editor:

    vim ~/container.training/k8s/hacktheplanet.yaml
  • If you would like, change the SSH key (by changing the GitHub user name)

k8s/podsecuritypolicy.md

1585/1692

Deploying the malicious pods

  • Let's deploy our "exploit"!
  • Create the DaemonSet:

    kubectl create -f ~/container.training/k8s/hacktheplanet.yaml
  • Check that the pods are running:

    kubectl get pods
  • Confirm that the SSH key was added to the node's root account:

    sudo cat /root/.ssh/authorized_keys

k8s/podsecuritypolicy.md

1586/1692

Cleaning up

  • Before setting up our PodSecurityPolicies, clean up that namespace
  • Remove the DaemonSet:

    kubectl delete daemonset hacktheplanet
  • Remove the Deployment:

    kubectl delete deployment web

k8s/podsecuritypolicy.md

1587/1692

Pod Security Policies in theory

  • To use PSPs, we need to activate their specific admission controller

  • That admission controller will intercept each pod creation attempt

  • It will look at:

    • who/what is creating the pod

    • which PodSecurityPolicies they can use

    • which PodSecurityPolicies can be used by the Pod's ServiceAccount

  • Then it will compare the Pod with each PodSecurityPolicy one by one

  • If a PodSecurityPolicy accepts all the parameters of the Pod, it is created

  • Otherwise, the Pod creation is denied and it won't even show up in kubectl get pods

k8s/podsecuritypolicy.md

1588/1692

Pod Security Policies fine print

  • With RBAC, using a PSP corresponds to the verb use on the PSP

    (that makes sense, right?)

  • If no PSP is defined, no Pod can be created

    (even by cluster admins)

  • Pods that are already running are not affected

  • If we create a Pod directly, it can use a PSP to which we have access

  • If the Pod is created by e.g. a ReplicaSet or DaemonSet, it's different:

    • the ReplicaSet / DaemonSet controllers don't have access to our policies

    • therefore, we need to give access to the PSP to the Pod's ServiceAccount

k8s/podsecuritypolicy.md

1589/1692

Pod Security Policies in practice

  • We are going to enable the PodSecurityPolicy admission controller

  • At that point, we won't be able to create any more pods (!)

  • Then we will create a couple of PodSecurityPolicies

  • ...And associated ClusterRoles (giving use access to the policies)

  • Then we will create RoleBindings to grant these roles to ServiceAccounts

  • We will verify that we can't run our "exploit" anymore

k8s/podsecuritypolicy.md

1590/1692

Enabling Pod Security Policies

  • To enable Pod Security Policies, we need to enable their admission plugin

  • This is done by adding a flag to the API server

  • On clusters deployed with kubeadm, the control plane runs in static pods

  • These pods are defined in YAML files located in /etc/kubernetes/manifests

  • Kubelet watches this directory

  • Each time a file is added/removed there, kubelet creates/deletes the corresponding pod

  • Updating a file causes the pod to be deleted and recreated

k8s/podsecuritypolicy.md

1591/1692

Updating the API server flags

  • Let's edit the manifest for the API server pod
  • Have a look at the static pods:

    ls -l /etc/kubernetes/manifests
  • Edit the one corresponding to the API server:

    sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

k8s/podsecuritypolicy.md

1592/1692

Adding the PSP admission plugin

  • There should already be a line with --enable-admission-plugins=...

  • Let's add PodSecurityPolicy on that line

  • Locate the line with --enable-admission-plugins=

  • Add PodSecurityPolicy

    It should read: --enable-admission-plugins=NodeRestriction,PodSecurityPolicy

  • Save, quit

k8s/podsecuritypolicy.md

1593/1692

Waiting for the API server to restart

  • The kubelet detects that the file was modified

  • It kills the API server pod, and starts a new one

  • During that time, the API server is unavailable

  • Wait until the API server is available again

k8s/podsecuritypolicy.md

1594/1692

Check that the admission plugin is active

  • Normally, we can't create any Pod at this point
  • Try to create a Pod directly:
    kubectl run testpsp1 --image=nginx --restart=Never
  • Try to create a Deployment:

    kubectl run testpsp2 --image=nginx
  • Look at existing resources:

    kubectl get all

We can get hints at what's happening by looking at the ReplicaSet and Events.

k8s/podsecuritypolicy.md

1595/1692

Introducing our Pod Security Policies

  • We will create two policies:

    • privileged (allows everything)

    • restricted (blocks some unsafe mechanisms)

  • For each policy, we also need an associated ClusterRole granting use
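
A sketch of what such a ClusterRole could look like (the actual files used in the next exercise may differ slightly):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]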

k8s/podsecuritypolicy.md

1596/1692

Creating our Pod Security Policies

  • We have a couple of files, each defining a PSP and associated ClusterRole:

    • k8s/psp-privileged.yaml: policy privileged, role psp:privileged
    • k8s/psp-restricted.yaml: policy restricted, role psp:restricted
  • Create both policies and their associated ClusterRoles:
    kubectl create -f ~/container.training/k8s/psp-restricted.yaml
    kubectl create -f ~/container.training/k8s/psp-privileged.yaml

k8s/podsecuritypolicy.md

1597/1692

Check that we can create Pods again

  • We haven't bound the policy to any user yet

  • But cluster-admin can implicitly use all policies

  • Check that we can now create a Pod directly:

    kubectl run testpsp3 --image=nginx --restart=Never
  • Create a Deployment as well:

    kubectl run testpsp4 --image=nginx
  • Confirm that the Deployment is not creating any Pods:

    kubectl get all

k8s/podsecuritypolicy.md

1598/1692

What's going on?

  • We can create Pods directly (thanks to our root-like permissions)

  • The Pods corresponding to a Deployment are created by the ReplicaSet controller

  • The ReplicaSet controller does not have root-like permissions

  • We need to either:

    • grant permissions to the ReplicaSet controller

    or

    • grant permissions to our Pods' ServiceAccount
  • The first option would allow anyone to create pods

  • The second option will allow us to scope the permissions better

k8s/podsecuritypolicy.md

1599/1692

Binding the restricted policy

  • Let's bind the role psp:restricted to ServiceAccount green:default

    (aka the default ServiceAccount in the green Namespace)

  • This will allow Pod creation in the green Namespace

    (because these Pods will be using that ServiceAccount automatically)

  • Create the following RoleBinding:
    kubectl create rolebinding psp:restricted \
    --clusterrole=psp:restricted \
    --serviceaccount=green:default

k8s/podsecuritypolicy.md

1600/1692

Trying it out

  • The Deployments that we created earlier will eventually recover

    (the ReplicaSet controller periodically retries creating the Pods)

  • If we create a new Deployment now, it should work immediately

  • Create a simple Deployment:

    kubectl create deployment testpsp5 --image=nginx
  • Look at the Pods that have been created:

    kubectl get all

k8s/podsecuritypolicy.md

1601/1692

Trying to hack the cluster

  • Let's create the same DaemonSet we used earlier
  • Create a hostile DaemonSet:

    kubectl create -f ~/container.training/k8s/hacktheplanet.yaml
  • Look at the state of the namespace:

    kubectl get all

k8s/podsecuritypolicy.md

1602/1692

What's in our restricted policy?

  • The restricted PSP is similar to the one provided in the docs, but:

    • it allows containers to run as root

    • it doesn't drop capabilities

  • Many containers run as root by default, and would require additional tweaks

  • Many containers use e.g. chown, which requires a specific capability

    (that's the case for the NGINX official image, for instance)

  • We still block: hostPath, privileged containers, and much more!
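
To give an idea, a policy along those lines could look like this sketch (the actual k8s/psp-restricted.yaml may differ): it allows running as root and keeps capabilities, but blocks privileged containers and only whitelists "safe" volume types (so no hostPath).

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret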

k8s/podsecuritypolicy.md

1603/1692

The case of static pods

  • If we list the pods in the kube-system namespace, kube-apiserver is missing

  • However, the API server is obviously running

    (otherwise, kubectl get pods --namespace=kube-system wouldn't work)

  • The API server Pod is created directly by kubelet

    (without going through the PSP admission plugin)

  • Then, kubelet creates a "mirror pod" representing that Pod in etcd

  • That "mirror pod" creation goes through the PSP admission plugin

  • And it gets blocked!

  • This can be fixed by binding psp:privileged to group system:nodes
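
For example, with a binding along these lines (the binding name is made up):

    kubectl create clusterrolebinding psp:privileged:nodes \
            --clusterrole=psp:privileged \
            --group=system:nodes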

k8s/podsecuritypolicy.md

1604/1692

Before moving on...

  • Our cluster is currently broken

    (we can't create pods in namespaces kube-system, default, ...)

  • We need to either:

    • disable the PSP admission plugin

    • allow use of PSP to relevant users and groups

  • For instance, we could:

    • bind psp:restricted to the group system:authenticated

    • bind psp:privileged to the ServiceAccount kube-system:default

k8s/podsecuritypolicy.md

1605/1692

Fixing the cluster

  • Let's disable the PSP admission plugin
  • Edit the Kubernetes API server static pod manifest

  • Remove the PSP admission plugin

  • This can be done with this one-liner:

    sudo sed -i s/,PodSecurityPolicy// /etc/kubernetes/manifests/kube-apiserver.yaml

k8s/podsecuritypolicy.md

1606/1692

Image separating from the next chapter

1607/1692

The CSR API

(automatically generated title slide)

1608/1692

The CSR API

  • The Kubernetes API exposes CSR resources

  • We can use these resources to issue TLS certificates

  • First, we will go through a quick reminder about TLS certificates

  • Then, we will see how to obtain a certificate for a user

  • We will use that certificate to authenticate with the cluster

  • Finally, we will grant some privileges to that user

k8s/csr-api.md

1609/1692

Reminder about TLS

  • TLS (Transport Layer Security) is a protocol providing:

    • encryption (to prevent eavesdropping)

    • authentication (using public key cryptography)

  • When we access an https:// URL, the server authenticates itself

    (it proves its identity to us; as if it were "showing its ID")

  • But we can also have mutual TLS authentication (mTLS)

    (client proves its identity to server; server proves its identity to client)

k8s/csr-api.md

1610/1692

Authentication with certificates

  • To authenticate, someone (client or server) needs:

    • a private key (that remains known only to them)

    • a public key (that they can distribute)

    • a certificate (associating the public key with an identity)

  • A message encrypted with the private key can only be decrypted with the public key

    (and vice versa)

  • If I use someone's public key to encrypt/decrypt their messages,
    I can be certain that I am talking to them / they are talking to me

  • The certificate proves that I have the correct public key for them

k8s/csr-api.md

1611/1692

Certificate generation workflow

This is what I do if I want to obtain a certificate.

  1. Create public and private keys.

  2. Create a Certificate Signing Request (CSR).

    (The CSR contains the identity that I claim and a public key.)

  3. Send that CSR to the Certificate Authority (CA).

  4. The CA verifies that I can claim the identity in the CSR.

  5. The CA generates my certificate and gives it to me.

The CA (or anyone else) never needs to know my private key.

k8s/csr-api.md

1612/1692

The CSR API

  • The Kubernetes API has a CertificateSigningRequest resource type

    (we can list them with e.g. kubectl get csr)

  • We can create a CSR object

    (= upload a CSR to the Kubernetes API)

  • Then, using the Kubernetes API, we can approve/deny the request

  • If we approve the request, the Kubernetes API generates a certificate

  • The certificate gets attached to the CSR object and can be retrieved

k8s/csr-api.md

1613/1692

Using the CSR API

  • We will show how to use the CSR API to obtain user certificates

  • This will be a rather complex demo

  • ... And yet, we will take a few shortcuts to simplify it

    (but it will illustrate the general idea)

  • The demo also won't be automated

    (we would have to write extra code to make it fully functional)

k8s/csr-api.md

1614/1692

General idea

  • We will create a Namespace named "users"

  • Each user will get a ServiceAccount in that Namespace

  • That ServiceAccount will give read/write access to one CSR object

  • Users will use that ServiceAccount's token to submit a CSR

  • We will approve the CSR (or not)

  • Users can then retrieve their certificate from their CSR object

  • ...And use that certificate for subsequent interactions

k8s/csr-api.md

1615/1692

Resource naming

For a user named jean.doe, we will have:

  • ServiceAccount jean.doe in Namespace users

  • CertificateSigningRequest users:jean.doe

  • ClusterRole users:jean.doe giving read/write access to that CSR

  • ClusterRoleBinding users:jean.doe binding ClusterRole and ServiceAccount

k8s/csr-api.md

1616/1692

Creating the user's resources

If you want to use another name than jean.doe, update the YAML file!

  • Create the global namespace for all users:

    kubectl create namespace users
  • Create the ServiceAccount, ClusterRole, ClusterRoleBinding for jean.doe:

    kubectl apply -f ~/container.training/k8s/users:jean.doe.yaml

k8s/csr-api.md

1617/1692

Extracting the user's token

  • Let's obtain the user's token and give it to them

    (the token will be their password)

  • List the user's secrets:

    kubectl --namespace=users describe serviceaccount jean.doe
  • Show the user's token:

    kubectl --namespace=users describe secret jean.doe-token-xxxxx

k8s/csr-api.md

1618/1692

Configure kubectl to use the token

  • Let's create a new context that will use that token to access the API
  • Add a new identity to our kubeconfig file:

    kubectl config set-credentials token:jean.doe --token=...
  • Add a new context using that identity:

    kubectl config set-context jean.doe --user=token:jean.doe --cluster=kubernetes

k8s/csr-api.md

1619/1692

Access the API with the token

  • Let's check that our access rights are set properly
  • Try to access any resource:

    kubectl get pods

    (This should tell us "Forbidden")

  • Try to access "our" CertificateSigningRequest:

    kubectl get csr users:jean.doe

    (This should tell us "NotFound")

k8s/csr-api.md

1620/1692

Create a key and a CSR

  • There are many tools to generate TLS keys and CSRs

  • Let's use OpenSSL; it's not the best one, but it's installed everywhere

    (many people prefer cfssl, easyrsa, or other tools; that's fine too!)

  • Generate the key and certificate signing request:
    openssl req -newkey rsa:2048 -nodes -keyout key.pem \
    -new -subj /CN=jean.doe/O=devs/ -out csr.pem

The command above generates:

  • a 2048-bit RSA key, without encryption, stored in key.pem
  • a CSR for the name jean.doe in group devs

k8s/csr-api.md

1621/1692

Inside the Kubernetes CSR object

  • The Kubernetes CSR object is a thin wrapper around the CSR PEM file

  • The PEM file needs to be encoded to base64 on a single line

    (we will use base64 -w0 for that purpose)

  • The Kubernetes CSR object also needs to list the right "usages"

    (these are flags indicating how the certificate can be used)

k8s/csr-api.md

1622/1692

Sending the CSR to Kubernetes

  • Generate and create the CSR resource:
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: users:jean.doe
spec:
  request: $(base64 -w0 < csr.pem)
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF

k8s/csr-api.md

1623/1692

Adjusting certificate expiration

  • Edit the static pod definition for the controller manager:

    sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
  • In the list of flags, add the following line:

    - --experimental-cluster-signing-duration=1h

k8s/csr-api.md

1624/1692

Verifying and approving the CSR

  • Let's inspect the CSR, and if it is valid, approve it
  • Switch back to cluster-admin:

    kctx -
  • Inspect the CSR:

    kubectl describe csr users:jean.doe
  • Approve it:

    kubectl certificate approve users:jean.doe

k8s/csr-api.md

1625/1692

Obtaining the certificate

  • Switch back to the user's identity:

    kctx -
  • Retrieve the updated CSR object and extract the certificate:

    kubectl get csr users:jean.doe \
    -o jsonpath={.status.certificate} \
    | base64 -d > cert.pem
  • Inspect the certificate:

    openssl x509 -in cert.pem -text -noout

k8s/csr-api.md

1626/1692

Using the certificate

  • Add the key and certificate to kubeconfig:

    kubectl config set-credentials cert:jean.doe --embed-certs \
    --client-certificate=cert.pem --client-key=key.pem
  • Update the user's context to use the key and cert to authenticate:

    kubectl config set-context jean.doe --user cert:jean.doe
  • Confirm that we are seen as jean.doe (but don't have permissions):

    kubectl get pods

k8s/csr-api.md

1627/1692

What's missing?

We have just shown, step by step, a method to issue short-lived certificates for users.

To be usable in real environments, we would need to add:

  • a kubectl helper to automatically generate the CSR and obtain the cert

    (and transparently renew the cert when needed)

  • a Kubernetes controller to automatically validate and approve CSRs

    (checking that the subject and groups are valid)

  • a way for the users to know the groups to add to their CSR

    (e.g.: annotations on their ServiceAccount + read access to the ServiceAccount)

k8s/csr-api.md

1628/1692

Is this realistic?

  • Larger organizations typically integrate with their own directory

  • The general principle, however, is the same:

    • users have long-term credentials (password, token, ...)

    • they use these credentials to obtain other, short-lived credentials

  • This provides enhanced security:

    • the long-term credentials can use long passphrases, 2FA, HSM...

    • the short-term credentials are more convenient to use

    • we get strong security and convenience

  • Systems like Vault also have certificate issuance mechanisms

k8s/csr-api.md

1629/1692

Image separating from the next chapter

1630/1692

OpenID Connect

(automatically generated title slide)

1631/1692

OpenID Connect

  • The Kubernetes API server can perform authentication with OpenID Connect

  • This requires an OpenID provider

    (external authorization server using the OAuth 2.0 protocol)

  • We can use a third-party provider (e.g. Google) or run our own (e.g. Dex)

  • We are going to give an overview of the protocol

  • We will show it in action (in a simplified scenario)

k8s/openid-connect.md

1632/1692

Workflow overview

  • We want to access our resources (a Kubernetes cluster)

  • We authenticate with the OpenID provider

    • we can do this directly (e.g. by going to https://accounts.google.com)

    • or maybe a kubectl plugin can open a browser page on our behalf

  • After authenticating us, the OpenID provider gives us:

    • an id token (a short-lived signed JSON Web Token, see next slide)

    • a refresh token (to renew the id token when needed)

  • We can now issue requests to the Kubernetes API with the id token

  • The API server will verify that token's content to authenticate us

k8s/openid-connect.md

1633/1692

JSON Web Tokens

  • A JSON Web Token (JWT) has three parts:

    • a header specifying algorithms and token type

    • a payload (indicating who issued the token, for whom, which purposes...)

    • a signature generated by the issuer (the issuer = the OpenID provider)

  • Anyone can verify a JWT without contacting the issuer

    (except to obtain the issuer's public key)

  • Pro tip: we can inspect a JWT with https://jwt.io/

k8s/openid-connect.md

1634/1692

How the Kubernetes API uses JWT

  • Server side

    • enable OIDC authentication

    • indicate which issuer (provider) should be allowed

    • indicate which audience (or "client id") should be allowed

    • optionally, map or prefix user and group names

  • Client side

    • obtain JWT as described earlier

    • pass JWT as authentication token

    • renew JWT when needed (using the refresh token)

k8s/openid-connect.md

1635/1692

Demo time!

  • We will use Google Accounts as our OpenID provider

  • We will use the Google OAuth Playground as the "audience" or "client id"

  • We will obtain a JWT through Google Accounts and the OAuth Playground

  • We will enable OIDC in the Kubernetes API server

  • We will use the JWT to authenticate

If you can't or won't use a Google account, you can try to adapt this to another provider.

k8s/openid-connect.md

1636/1692

Checking the API server logs

  • The API server logs will be particularly useful in this section

    (they will indicate e.g. why a specific token is rejected)

  • Let's keep an eye on the API server output!

  • Tail the logs of the API server:
    kubectl logs kube-apiserver-node1 --follow --namespace=kube-system

k8s/openid-connect.md

1637/1692

Authenticate with the OpenID provider

  • We will use the Google OAuth Playground for convenience

  • In a real scenario, we would need our own OAuth client instead of the playground

    (even if we were still using Google as the OpenID provider)

  • Open the Google OAuth Playground:

    https://developers.google.com/oauthplayground/
  • Enter our own custom scope in the text field:

    https://www.googleapis.com/auth/userinfo.email
  • Click on "Authorize APIs" and allow the playground to access our email address

k8s/openid-connect.md

1638/1692

Obtain our JSON Web Token

  • The previous step gave us an "authorization code"

  • We will use it to obtain tokens

  • Click on "Exchange authorization code for tokens"
  • The JWT is the very long id_token that shows up on the right hand side

    (it is a base64-encoded JSON object, and should therefore start with eyJ)

k8s/openid-connect.md

1639/1692

Using our JSON Web Token

  • We need to create a context (in kubeconfig) for our token

    (if we just add the token or use kubectl --token, our certificate will still be used)

  • Create a new authentication section in kubeconfig:

    kubectl config set-credentials myjwt --token=eyJ...
  • Try to use it:

    kubectl --user=myjwt get nodes

We should get an Unauthorized response, since we haven't enabled OpenID Connect in the API server yet. We should also see invalid bearer token in the API server log output.

k8s/openid-connect.md

1640/1692

Enabling OpenID Connect

  • We need to add a few flags to the API server configuration

  • These two are mandatory:

    --oidc-issuer-url → URL of the OpenID provider

    --oidc-client-id → app requesting the authentication
    (in our case, that's the ID for the Google OAuth Playground)

  • This one is optional:

    --oidc-username-claim → which field should be used as user name
    (we will use the user's email address instead of an opaque ID)

  • See the API server documentation for more details about all available flags

k8s/openid-connect.md

1641/1692

Updating the API server configuration

  • The instructions below will work for clusters deployed with kubeadm

    (or where the control plane is deployed in static pods)

  • If your cluster is deployed differently, you will need to adapt them

  • Edit /etc/kubernetes/manifests/kube-apiserver.yaml

  • Add the following lines to the list of command-line flags:

    - --oidc-issuer-url=https://accounts.google.com
    - --oidc-client-id=407408718192.apps.googleusercontent.com
    - --oidc-username-claim=email

k8s/openid-connect.md

1642/1692

Restarting the API server

  • The kubelet monitors the files in /etc/kubernetes/manifests

  • When we save the pod manifest, kubelet will restart the corresponding pod

    (using the updated command line flags)

  • After making the changes described on the previous slide, save the file

  • Issue a simple command (like kubectl version) until the API server is back up

    (it might take between a few seconds and one minute for the API server to restart)

  • Restart the kubectl logs command to view the logs of the API server

k8s/openid-connect.md

1643/1692

Using our JSON Web Token

  • Now that the API server is set up to recognize our token, try again!
  • Try an API command with our token:
    kubectl --user=myjwt get nodes
    kubectl --user=myjwt get pods

We should see a message like:

Error from server (Forbidden): nodes is forbidden: User "[email protected]"
cannot list resource "nodes" in API group "" at the cluster scope

→ We were successfully authenticated, but not authorized.

k8s/openid-connect.md

1644/1692

Authorizing our user

  • As an extra step, let's grant read access to our user

  • We will use the pre-defined ClusterRole view

  • Create a ClusterRoleBinding allowing us to view resources:

    kubectl create clusterrolebinding i-can-view \
    --user=[email protected] --clusterrole=view

    (make sure to put your Google email address there)

  • Confirm that we can now list pods with our token:

    kubectl --user=myjwt get pods

k8s/openid-connect.md

1645/1692

From demo to production

This was a very simplified demo! In a real deployment...

  • We wouldn't use the Google OAuth Playground

  • We probably wouldn't even use Google at all

    (it doesn't seem to provide a way to include groups!)

  • Some popular alternatives:

  • We would use a helper (like the kubelogin plugin) to automatically obtain tokens

k8s/openid-connect.md

1646/1692

Service Account tokens

  • The tokens used by Service Accounts are JWT tokens as well

  • They are signed and verified using a special service account key pair

  • Extract the token of a service account in the current namespace:

    kubectl get secrets -o jsonpath={..token} | base64 -d
  • Copy-paste the token to a verification service like https://jwt.io

  • Notice that it says "Invalid Signature"

k8s/openid-connect.md

1647/1692

Verifying Service Account tokens

  • JSON Web Tokens embed the URL of the "issuer" (=OpenID provider)

  • The issuer provides its public key through a well-known discovery endpoint

    (similar to https://accounts.google.com/.well-known/openid-configuration)

  • There is no such endpoint for the Service Account key pair

  • But we can provide the public key ourselves for verification

k8s/openid-connect.md

1648/1692

Verifying a Service Account token

  • On clusters provisioned with kubeadm, the Service Account key pair is:

    /etc/kubernetes/pki/sa.key (used by the controller manager to generate tokens)

    /etc/kubernetes/pki/sa.pub (used by the API server to validate the same tokens)

  • Display the public key used to sign Service Account tokens:

    sudo cat /etc/kubernetes/pki/sa.pub
  • Copy-paste the key in the "verify signature" area on https://jwt.io

  • It should now say "Signature Verified"

k8s/openid-connect.md

1649/1692

Image separating from the next chapter

1650/1692

Securing the control plane

(automatically generated title slide)

1651/1692

Securing the control plane

  • Many components accept connections (and requests) from others:

    • API server

    • etcd

    • kubelet

  • We must secure these connections:

    • to deny unauthorized requests

    • to prevent eavesdropping on secrets, tokens, and other sensitive information

  • Disabling authentication and/or authorization is strongly discouraged

    (but it's possible to do it, e.g. for learning / troubleshooting purposes)

k8s/control-plane-auth.md

1652/1692

Authentication and authorization

  • Authentication (checking "who you are") is done with mutual TLS

    (both the client and the server need to hold a valid certificate)

  • Authorization (checking "what you can do") is done in different ways

    • the API server implements a sophisticated permission logic (with RBAC)

    • some services will defer authorization to the API server (through webhooks)

    • some services require a certificate signed by a particular CA / sub-CA

k8s/control-plane-auth.md

1653/1692

In practice

  • We will review the various communication channels in the control plane

  • We will describe how they are secured

  • When TLS certificates are used, we will indicate:

    • which CA signs them

    • what their subject (CN) should be, when applicable

  • We will indicate how to configure security (client- and server-side)

k8s/control-plane-auth.md

1654/1692

etcd peers

  • Replication and coordination of etcd happens on a dedicated port

    (typically port 2380; the default port for normal client connections is 2379)

  • Authentication uses TLS certificates with a separate sub-CA

    (otherwise, anyone with a Kubernetes client certificate could access etcd!)

  • The etcd command line flags involved are:

    --peer-client-cert-auth=true to activate it

    --peer-cert-file, --peer-key-file, --peer-trusted-ca-file
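
As an illustration, on a kubeadm cluster the etcd static pod typically carries flags along these lines (paths can differ on other setups):

    etcd --peer-client-cert-auth=true \
         --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
         --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
         --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt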

k8s/control-plane-auth.md

1655/1692

etcd clients

  • The only¹ thing that connects to etcd is the API server

  • Authentication uses TLS certificates with a separate sub-CA

    (for the same reasons as for etcd inter-peer authentication)

  • The etcd command line flags involved are:

    --client-cert-auth=true to activate it

    --trusted-ca-file, --cert-file, --key-file

  • The API server command line flags involved are:

    --etcd-cafile, --etcd-certfile, --etcd-keyfile

¹Technically, there is also the etcd healthcheck. Let's ignore it for now.

k8s/control-plane-auth.md

1656/1692

API server clients

  • The API server has a sophisticated authentication and authorization system

  • For connections coming from other components of the control plane:

    • authentication uses certificates (trusting the certificates' subject or CN)

    • authorization uses whatever mechanism is enabled (most oftentimes, RBAC)

  • The relevant API server flags are:

    --client-ca-file, --tls-cert-file, --tls-private-key-file

  • Each component connecting to the API server takes a --kubeconfig flag

    (to specify a kubeconfig file containing the CA cert, client key, and client cert)

  • Yes, that kubeconfig file follows the same format as our ~/.kube/config file!

k8s/control-plane-auth.md

1657/1692

Kubelet and API server

  • Communication between kubelet and API server can be established both ways

  • Kubelet → API server:

    • kubelet registers itself ("hi, I'm node42, do you have work for me?")

    • connection is kept open and re-established if it breaks

    • that's how the kubelet knows which pods to start/stop

  • API server → kubelet:

    • used to retrieve logs, exec, attach to containers

k8s/control-plane-auth.md

1658/1692

Kubelet → API server

  • Kubelet is started with --kubeconfig with API server information

  • The client certificate of the kubelet will typically have:

    CN=system:node:<nodename> and group O=system:nodes

  • Nothing special on the API server side

    (it will authenticate like any other client)

k8s/control-plane-auth.md

1659/1692

API server → kubelet

  • Kubelet is started with the flag --client-ca-file

    (typically using the same CA as the API server)

  • API server will use a dedicated key pair when contacting kubelet

    (specified with --kubelet-client-certificate and --kubelet-client-key)

  • Authorization uses webhooks

    (enabled with --authorization-mode=Webhook on kubelet)

  • The webhook server is the API server itself

    (the kubelet sends back a request to the API server to ask, "can this person do that?")

k8s/control-plane-auth.md

1660/1692

Scheduler

  • The scheduler connects to the API server like an ordinary client

  • The certificate of the scheduler will have CN=system:kube-scheduler

k8s/control-plane-auth.md

1661/1692

Controller manager

  • The controller manager is also a normal client to the API server

  • Its certificate will have CN=system:kube-controller-manager

  • If we use the CSR API, the controller manager needs the CA cert and key

    (passed with flags --cluster-signing-cert-file and --cluster-signing-key-file)

  • We usually want the controller manager to generate tokens for service accounts

  • These tokens deserve some details (on the next slide!)

k8s/control-plane-auth.md

1662/1692

Service account tokens

  • Each time we create a service account, the controller manager generates a token

  • These tokens are JWT tokens, signed with a particular key

  • These tokens are used for authentication with the API server

    (and therefore, the API server needs to be able to verify their integrity)

  • This uses another keypair:

    • the private key (used for signature) is passed to the controller manager
      (using flags --service-account-private-key-file and --root-ca-file)

    • the public key (used for verification) is passed to the API server
      (using flag --service-account-key-file)
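
A sketch of that keypair wiring (file paths are illustrative; kubeadm typically names these files sa.key and sa.pub):

# Controller manager: sign new service account tokens with the private key
kube-controller-manager \
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  --root-ca-file=/etc/kubernetes/pki/ca.crt \
  ...

# API server: verify token signatures with the matching public key
kube-apiserver \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  ...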

k8s/control-plane-auth.md

1663/1692

kube-proxy

  • kube-proxy is "yet another API server client"

  • In many clusters, it runs as a Daemon Set

  • In that case, it will have its own Service Account and associated permissions

  • It will authenticate using the token of that Service Account

k8s/control-plane-auth.md

1664/1692

Webhooks

  • We mentioned webhooks earlier; how do they really work?

  • The Kubernetes API has special resource types to check permissions

  • One of them is SubjectAccessReview

  • To check if a particular user can do a particular action on a particular resource:

    • we prepare a SubjectAccessReview object

    • we send that object to the API server

    • the API server responds with allow/deny (and optional explanations)

  • Using webhooks for authorization = sending a SubjectAccessReview (SAR) to authorize each request

k8s/control-plane-auth.md

1665/1692

Subject Access Review

Here is an example showing how to check if jean.doe can get some pods in kube-system:

kubectl -v9 create -f- <<EOF
apiVersion: authorization.k8s.io/v1beta1
kind: SubjectAccessReview
spec:
  user: jean.doe
  group:
  - foo
  - bar
  resourceAttributes:
    #group: blah.k8s.io
    namespace: kube-system
    resource: pods
    verb: get
    #name: web-xyz1234567-pqr89
EOF

k8s/control-plane-auth.md

1666/1692

Image separating from the next chapter

1667/1692

Next steps

(automatically generated title slide)

1668/1692

Next steps

Alright, how do I get started and containerize my apps?

1669/1692

Next steps

Alright, how do I get started and containerize my apps?

Suggested containerization checklist:

  • write a Dockerfile for one service in one app
  • write Dockerfiles for the other (buildable) services
  • write a Compose file for that whole app
  • make sure that devs are empowered to run the app in containers
  • set up automated builds of container images from the code repo
  • set up a CI pipeline using these container images
  • set up a CD pipeline (for staging/QA) using these images
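
For the Dockerfile and Compose steps, minimal sketches might look like this (service name, base image, and port are made up; adapt them to your stack):

# Dockerfile for a hypothetical Node.js service
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]

# docker-compose.yml (excerpt)
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:8080"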

And then it is time to look at orchestration!

k8s/whatsnext.md

1670/1692

Options for our first production cluster

  • Get a managed cluster from a major cloud provider (AKS, EKS, GKE...)

    (price: $, difficulty: medium)

  • Hire someone to deploy it for us

    (price: $$, difficulty: easy)

  • Do it ourselves

    (price: $-$$$, difficulty: hard)

k8s/whatsnext.md

1671/1692

One big cluster vs. multiple small ones

  • Yes, it is possible to have prod+dev in a single cluster

    (and implement good isolation and security with RBAC, network policies...)

  • But it is not a good idea to do that for our first deployment

  • Start with a production cluster + at least a test cluster

  • Implement and check RBAC and isolation on the test cluster

    (e.g. deploy multiple test versions side-by-side)

  • Make sure that all our devs have usable dev clusters

    (whether it's a local minikube or a full-blown multi-node cluster)

k8s/whatsnext.md

1672/1692

Namespaces

  • Namespaces let you run multiple identical stacks side by side

  • Two namespaces (e.g. blue and green) can each have their own redis service

  • Each of the two redis services has its own ClusterIP

  • CoreDNS creates two entries, mapping to these two ClusterIP addresses:

    redis.blue.svc.cluster.local and redis.green.svc.cluster.local

  • Pods in the blue namespace get a search suffix of blue.svc.cluster.local

  • As a result, resolving redis from a pod in the blue namespace yields the "local" redis

This does not provide isolation! That would be the job of network policies.
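
A quick way to see this in action (the namespace, deployment, and image names are just examples):

# Create a "blue" namespace with its own redis, then resolve "redis" from inside it
kubectl create namespace blue
kubectl -n blue create deployment redis --image=redis
kubectl -n blue expose deployment redis --port=6379
kubectl -n blue run -it --rm dnstest --image=busybox --restart=Never -- nslookup redis
# This should resolve to the ClusterIP of redis.blue.svc.cluster.local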

k8s/whatsnext.md

1673/1692

Relevant sections

k8s/whatsnext.md

1674/1692

Stateful services (databases etc.)

  • As a first step, it is wiser to keep stateful services outside of the cluster

  • Exposing them to pods can be done with multiple solutions:

    • ExternalName services
      (redis.blue.svc.cluster.local will be a CNAME record)

    • ClusterIP services with explicit Endpoints
      (instead of letting Kubernetes generate the endpoints from a selector)

    • Ambassador services
      (application-level proxies that can provide credentials injection and more)
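
For example, an ExternalName service pointing at a Redis instance hosted outside the cluster could look like this sketch (names are made up):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  type: ExternalName
  # Pods resolving "redis" in this namespace get a CNAME to this external host
  externalName: redis.external.example.com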

k8s/whatsnext.md

1675/1692

Stateful services (second take)

  • If we want to host stateful services on Kubernetes, we can use:

    • a storage provider

    • persistent volumes, persistent volume claims

    • stateful sets

  • Good questions to ask:

    • what's the operational cost of running this service ourselves?

    • what do we gain by deploying this stateful service on Kubernetes?

  • Relevant sections: Volumes | Stateful Sets | Persistent Volumes

  • Excellent blog post tackling the question: “Should I run Postgres on Kubernetes?”

k8s/whatsnext.md

1676/1692

HTTP traffic handling

  • Services are layer 4 constructs

  • HTTP is a layer 7 protocol

  • It is handled by ingresses (a different resource kind)

  • Ingresses allow:

    • virtual host routing
    • session stickiness
    • URI mapping
    • and much more!
  • This section shows how to expose multiple HTTP apps using Træfik
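
As a minimal illustration, an ingress routing one host name to one service could look like this sketch (host and service names are made up; the exact apiVersion and annotations depend on the cluster version and ingress controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80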

k8s/whatsnext.md

1677/1692

Logging

  • Logging is delegated to the container engine

  • Logs are exposed through the API

  • Logs are also accessible through local files (/var/log/containers)

  • Log shipping to a central platform is usually done through these files

    (e.g. with an agent bind-mounting the log directory)

  • This section shows how to do that with Fluentd and the EFK stack

k8s/whatsnext.md

1678/1692

Metrics

  • The kubelet embeds cAdvisor, which exposes container metrics

    (cAdvisor might be separated in the future for more flexibility)

  • It is a good idea to start with Prometheus

    (even if you end up using something else)

  • Starting from Kubernetes 1.8, we can use the Metrics API

  • Heapster was a popular add-on

    (but was deprecated in Kubernetes 1.11 and has since been retired)

k8s/whatsnext.md

1679/1692

Managing the configuration of our applications

  • Two constructs are particularly useful: secrets and config maps

  • They let us expose arbitrary information to our containers

  • Avoid storing configuration in container images

    (There are some exceptions to that rule, but it's generally a Bad Idea)

  • Never store sensitive information in container images

    (It's the container equivalent of the password on a post-it note on your screen)

  • This section shows how to manage app config with config maps (among others)
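
As a quick sketch, configuration can live in a config map and be exposed to a container as environment variables (resource and key names are made up):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    # Every key of the config map becomes an environment variable in the container
    envFrom:
    - configMapRef:
        name: app-config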

k8s/whatsnext.md

1680/1692

Managing stack deployments

  • Applications are made of many resources

    (Deployments, Services, and much more)

  • We need to automate the creation / update / management of these resources

  • There is no "absolute best" tool or method; it depends on:

    • the size and complexity of our stack(s)
    • how often we change it (i.e. add/remove components)
    • the size and skills of our team

k8s/whatsnext.md

1681/1692

A few tools to manage stacks

  • Shell scripts invoking kubectl

  • YAML resource manifests committed to a repo

  • Kustomize (YAML manifests + patches applied on top)

  • Helm (YAML manifests + templating engine)

  • Spinnaker (Netflix's CD platform)

  • Brigade (event-driven scripting; no YAML)

k8s/whatsnext.md

1682/1692

Cluster federation

1683/1692

Cluster federation

Star Trek Federation

1684/1692

Cluster federation

Star Trek Federation

Sorry Star Trek fans, this is not the federation you're looking for!

1685/1692

Cluster federation

Star Trek Federation

Sorry Star Trek fans, this is not the federation you're looking for!

(If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)

k8s/whatsnext.md

1686/1692

Cluster federation

  • Kubernetes master operation relies on etcd

  • etcd uses the Raft protocol

  • Raft recommends low latency between nodes

  • What if our cluster spreads to multiple regions?

1687/1692

Cluster federation

  • Kubernetes master operation relies on etcd

  • etcd uses the Raft protocol

  • Raft recommends low latency between nodes

  • What if our cluster spreads to multiple regions?

  • Break it down into local clusters

  • Regroup them into a cluster federation

  • Synchronize resources across clusters

  • Discover resources across clusters

k8s/whatsnext.md

1688/1692

Image separating from the next chapter

1689/1692

Links and resources

All things Kubernetes:

All things Docker:

Everything else:

These slides (and future updates) are on → http://container.training/

k8s/links.md

1691/1692

That's all, folks!
Questions?

end

shared/thankyou.md

1692/1692
