Assumptions and Clarifications

The main goal of this post is to get hands-on with the kubeadm upgrade commands.

This walkthrough is for learning purposes only; it is not a production upgrade procedure.
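Before touching anything, it helps to record what is currently installed. A quick check (these commands assume a kubeadm-built cluster like the one below; output will vary):

```shell
# Record the current versions before upgrading anything.
kubectl version --short    # client and server versions
kubeadm version -o short   # installed kubeadm version
kubelet --version          # installed kubelet version
```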

Upgrade Master

1- Drain Node

master $ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   55m   v1.16.0
node01   Ready    <none>   55m   v1.16.0
master $ kubectl drain master --ignore-daemonsets
node/master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-jmwgk, kube-system/weave-net-xzrl6
evicting pod "coredns-5644d7b6d9-cjkkc"
pod/coredns-5644d7b6d9-cjkkc evicted
node/master evicted
master $ kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
master   Ready,SchedulingDisabled   master   56m   v1.16.0
node01   Ready                      <none>   55m   v1.16.0
master $
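To double-check that the drain really emptied the node, you can list the pods still scheduled on it; after a successful drain only DaemonSet-managed pods (kube-proxy, weave-net here) should remain. A sketch using a standard kubectl field selector:

```shell
# List pods still scheduled on the drained master; expect only
# DaemonSet-managed pods after the drain completes.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=master
```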

2- Update kubeadm version

master $ apt-get install kubeadm=1.17.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libuv1
Use 'apt autoremove' to remove it.
The following packages will be upgraded:
  kubeadm
1 to upgrade, 0 to newly install, 0 to remove and 298 not to upgrade.
Need to get 8,059 kB of archives.
After this operation, 4,903 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 8,059 kB in 0s (8,996 kB/s)
(Reading database ... 248896 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) over (1.16.0-00) ...
Setting up kubeadm (1.17.0-00) ...
master $
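On real clusters the kubeadm/kubelet packages are usually pinned so a routine apt-get upgrade cannot move them. A hedged sketch of the unhold/install/hold dance (pinning is optional and may not be configured on a lab VM like this one):

```shell
# Temporarily release the version pin, install the target version,
# then pin the package again so unattended upgrades leave it alone.
apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm=1.17.0-00
apt-mark hold kubeadm
```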

3- kubeadm upgrade plan

master $ kubeadm upgrade plan v1.17.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.17.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.16.0   v1.17.0

Upgrade to the latest version in the v1.16 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.0   v1.17.0
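If you are not sure which target versions the apt repository actually offers, apt-cache can list the candidates (repository as configured on these machines):

```shell
# Show the kubeadm versions available from the configured repository,
# newest first; pick the upgrade target from this list.
apt-cache madison kubeadm | head -n 5
```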

4- kubeadm upgrade apply

master $ kubeadm  upgrade apply v1.17.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.0"
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.17.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]:
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.0"...
Static pod: kube-apiserver-master hash: d0568dcea2ec05f4c7a56e033283b7a9
Static pod: kube-controller-manager-master hash: 5c21306a65dd081e1f8bbec6db1a1610
Static pod: kube-scheduler-master hash: c18ee741ac4ad7b2bfda7d88116f3047
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 87b169df01a333b7b0856b64dd70855e
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-17-18-02-46/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: 87b169df01a333b7b0856b64dd70855e
Static pod: etcd-master hash: 87b169df01a333b7b0856b64dd70855e
Static pod: etcd-master hash: ea927991e33546634dcc366d1aa266ef
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests858236646"
W0617 18:02:51.367775   18893 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC";using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-17-18-02-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: d0568dcea2ec05f4c7a56e033283b7a9
Static pod: kube-apiserver-master hash: d0568dcea2ec05f4c7a56e033283b7a9
Static pod: kube-apiserver-master hash: 379039d03b3c91e55e70c8dbb8606af4
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-17-18-02-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 5c21306a65dd081e1f8bbec6db1a1610
Static pod: kube-controller-manager-master hash: e7bc401dca2029dd58a272da618fc389
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-06-17-18-02-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: c18ee741ac4ad7b2bfda7d88116f3047
Static pod: kube-scheduler-master hash: ff67867321338ffd885039e188f6b424
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
master $
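After kubeadm upgrade apply reports success, it is worth confirming that the static pods really came back with the new image tags (pod names include the node name, "master" here):

```shell
# Confirm the control-plane static pods restarted on the new version;
# the apiserver image tag should now read v1.17.0.
kubectl -n kube-system get pods -o wide
kubectl -n kube-system get pod kube-apiserver-master \
  -o jsonpath='{.spec.containers[0].image}{"\n"}'
```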

5- Upgrade kubelet

master $ apt-get install kubelet=1.17.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libuv1
Use 'apt autoremove' to remove it.
The following packages will be upgraded:
  kubelet
1 to upgrade, 0 to newly install, 0 to remove and 298 not to upgrade.
Need to get 19.2 MB of archives.
After this operation, 11.6 MB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Fetched 19.2 MB in 1s (10.4 MB/s)
(Reading database ... 248896 files and directories currently installed.)
Preparing to unpack .../kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) over (1.16.0-00) ...
Setting up kubelet (1.17.0-00) ...
master $
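One step the transcript above skips: after the package replaces the kubelet binary, the running kubelet service still has to be restarted to pick up the new version (systemd unit name as shipped by the kubelet package):

```shell
# Reload unit files and restart the kubelet so the new binary takes over.
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager | head -n 3
```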

6- Restore Master

master $ kubectl uncordon master
node/master uncordoned
master $ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   63m   v1.17.0
node01   Ready    <none>   63m   v1.16.0
master $
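Note the mixed versions in the output above: node01 is still running a v1.16.0 kubelet against the now-v1.17.0 control plane. That is fine, because kubelets may lag the API server by up to two minor versions. A tiny sketch of that skew rule (within_skew is a hypothetical helper, not a kubeadm command):

```shell
# Succeed when the kubelet is 0-2 minor versions behind the API server,
# i.e. within the supported kubelet version skew.
within_skew() {
  api_minor=$(echo "$1" | cut -d. -f2)
  kubelet_minor=$(echo "$2" | cut -d. -f2)
  diff=$((api_minor - kubelet_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 2 ]
}

within_skew v1.17.0 v1.16.0 && echo "v1.16.0 kubelet OK against v1.17.0 API server"
```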

Upgrade Nodes

1- Drain Node

master $ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-nqr7f, kube-system/weave-net-gtds9
evicting pod "coredns-6955765f44-dd6pd"
evicting pod "gold-nginx-5f4cfc4945-jcfgw"
evicting pod "coredns-6955765f44-9jjgj"
pod/gold-nginx-5f4cfc4945-jcfgw evicted
pod/coredns-6955765f44-dd6pd evicted
pod/coredns-6955765f44-9jjgj evicted
node/node01 evicted
master $ kubectl get nodes
NAME     STATUS                     ROLES    AGE   VERSION
master   Ready                      master   86m   v1.17.0
node01   Ready,SchedulingDisabled   <none>   86m   v1.16.0
master $

2- Update kubeadm

node01 $ apt-get install kubeadm=1.17.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libuv1
Use 'apt autoremove' to remove it.
The following packages will be upgraded:
  kubeadm
1 to upgrade, 0 to newly install, 0 to remove and 298 not to upgrade.
Need to get 8,059 kB of archives.
After this operation, 4,903 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 8,059 kB in 0s (15.4 MB/s)
(Reading database ... 248903 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) over (1.16.0-00) ...
Setting up kubeadm (1.17.0-00) ...
node01 $

3- kubeadm upgrade node

node01 $ kubeadm upgrade plan v1.17.0
couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig: open /etc/kubernetes/admin.conf: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
(This failure is expected: worker nodes have no /etc/kubernetes/admin.conf, so cluster-wide commands such as kubeadm upgrade plan only work on control-plane nodes; kubeadm upgrade node is the right command here.)
node01 $ kubeadm upgrade node --kubelet-version 1.17.0
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Using kubelet config version 1.17.0, while kubernetes-version is v1.17.0
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
node01 $

4- Upgrade kubelet

node01 $ apt-get install kubelet=1.17.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libuv1
Use 'apt autoremove' to remove it.
The following packages will be upgraded:
  kubelet
1 to upgrade, 0 to newly install, 0 to remove and 298 not to upgrade.
Need to get 19.2 MB of archives.
After this operation, 11.6 MB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Fetched 19.2 MB in 1s (16.2 MB/s)
(Reading database ... 248903 files and directories currently installed.)
Preparing to unpack .../kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) over (1.16.0-00) ...
Setting up kubelet (1.17.0-00) ...
node01 $
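As on the master, the kubelet service on node01 must be restarted before kubectl get nodes will report the new version (same systemd unit as before):

```shell
# Restart the worker's kubelet and confirm the binary version;
# it should now report Kubernetes v1.17.0.
systemctl daemon-reload
systemctl restart kubelet
kubelet --version
```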

5- Restore Node

master $ kubectl  uncordon node01
node/node01 uncordoned
master $ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   96m   v1.17.0
node01   Ready    <none>   95m   v1.17.0
master $