
Ubuntu Server Battling Post-Docker Kubernetes Installs – The New Stack

I have a bone to pick. I honestly don't know who to direct this anger at, but there is a big problem with using Ubuntu Server as a base for Kubernetes right now.

Over the past few days, I've tried repeatedly to get Kubernetes up and running on Ubuntu Server 22.04, and no matter how many times I try, it fails. To be clear, I can install Kubernetes on an Ubuntu server without any problems, as I've done many times before. The only difference now is that instead of using Docker, I have to use a runtime like containerd. However, when trying to start the cluster, I hit the same error every time.

It doesn't matter if I'm coming from a fresh install or have run sudo kubeadm reset; the init times out and never completes. I've tried this three times (each with fresh instances of Ubuntu Server 22.04) and it never once succeeded.
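For anyone following along at home, the usual kubeadm prerequisites on Ubuntu (a minimal sketch, per the official docs) look like this:

# Kubelet refuses to run with swap enabled
sudo swapoff -a

# Kernel modules and sysctls kubeadm's preflight checks expect
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1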

This issue sent me down a rabbit hole that offered some promise: supposedly the latest version of containerd had problems when installed on Ubuntu Server. But even after attempting a new deployment with an older version of containerd, I ran into the same problem.
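Another commonly cited culprit on Ubuntu 22.04, which I can't confirm was the cause here, is containerd's cgroup driver: kubeadm now defaults kubelet to the systemd driver, while containerd's stock config does not use it. The usual check looks like this:

# Regenerate containerd's default config
containerd config default | sudo tee /etc/containerd/config.toml

# Switch containerd to the systemd cgroup driver to match kubelet
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl restart containerd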

Suffice it to say, I came away from this little experience disappointed. A Kubernetes cluster on Ubuntu Server 22.04 should be a no-brainer. It isn't. I can get a single instance working fine and even deploy an app with it, but the second I want to go the cluster route, things get seriously messed up.

Drill down

The fascinating thing about this error is that kubelet is running. However, when running:

sudo systemctl status kubelet

I’m seeing errors like this:

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.613305  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.714099  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"

Aug 04 13:56:19 kubecontroller kubelet[550949]: E0804 13:56:19.814923  550949 kubelet.go:2424] "Error getting node" err="node \"kubecontroller\" not found"
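If you'd rather watch that loop scroll by in real time than keep re-running systemctl status, the same stream can be tailed with:

# Follow the kubelet journal live
journalctl -u kubelet -f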

The next rabbit hole had to do with the ~/.kube/config file. Even after re-checking the permissions on this file, I ran into problems. Fix the problems and restart kubelet with:

sudo systemctl restart kubelet

And guess what? Now kubelet won't start.
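For context, the permissions dance I kept re-applying is the standard post-init kubeconfig setup from the kubeadm docs:

# Copy the admin kubeconfig into place and hand it to your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config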

Let the hair-pulling begin!

A quick reboot of the system to see if that would clean up whatever mess was left behind. Once the machine rebooted, I re-ran the init command so I could see more debugging information:

sudo kubeadm init

And guess what? New errors, like:

error execution phase wait-control-plane

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1    cmd/kubeadm/app/cmd/phases/workflow/runner.go:235

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll    cmd/kubeadm/app/cmd/phases/workflow/runner.go:421

k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run    cmd/kubeadm/app/cmd/phases/workflow/runner.go:207

k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1    cmd/kubeadm/app/cmd/init.go:153

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute    vendor/github.com/spf13/cobra/command.go:856

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC    vendor/github.com/spf13/cobra/command.go:974

k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute    vendor/github.com/spf13/cobra/command.go:902

k8s.io/kubernetes/cmd/kubeadm/app.Run    cmd/kubeadm/app/kubeadm.go:50

main.main    cmd/kubeadm/kubeadm.go:25

runtime.main    /usr/local/go/src/runtime/proc.go:250

runtime.goexit    /usr/native/go/src/runtime/asm_amd64.s:1571

Clearly, that is zero help. And no matter how much time I spent with my good friend Google, I could not find the answer to what ailed me.
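One flag worth knowing about here: kubeadm accepts a verbosity switch that at least shows which API server probes the wait-control-plane phase is stuck on:

# Verbosity level 5 logs the health checks kubeadm is waiting on
sudo kubeadm init --v=5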

Back to the drawing board. Another install, and the same results. And, of course, the official Kubernetes documentation was of absolutely zero help.

What's the conclusion, and what can you do (when Ubuntu Server is your go-to)?

It's all about Docker.

Once upon a time, deploying a Kubernetes cluster on Ubuntu Server was incredibly simple and rarely (if ever) failed. What made the difference?

In a word… Docker.

Kubernetes removed Docker support, making cluster deployment on Ubuntu Server a complete nightmare. With that in mind, what can you do? Well, you can always install MicroK8s via snap with the command:

sudo snap install microk8s --classic --channel=1.24
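If 1.24 isn't the release you're after, you can list the available channels before committing:

# Show available MicroK8s release channels
snap info microk8s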

Of course, as everyone knows, snap installs can take some time and aren't as responsive as a standard install. Still… when in Rome.

Once the installation completes, add your user to the necessary group with:

sudo usermod -a -G microk8s $USER

Change the permissions of the .kube directory with:

sudo chown -f -R $USER ~/.kube

Log out and log back in, and then run the command:

microk8s status --wait-ready

Boom, everything's working. I can deploy an NGINX app with:

microk8s kubectl create deployment nginx --image=nginx
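And if you want to actually reach that app, the usual expose step works here too (standard kubectl, just prefixed with microk8s):

# Publish the deployment on a NodePort so it's reachable from outside the cluster
microk8s kubectl expose deployment nginx --port=80 --type=NodePort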

But it's not a cluster. Ah, but MicroK8s has you covered. On the controller node, issue the command:

microk8s add-node

You'll be given a join command to run on any other machine you've installed MicroK8s on, which looks like this:

microk8s join 192.168.1.43:25000/bad12d3d8966b646442087d6a1edde436/6407f3e21772

Oh, but guess what it did for me as well:

Contacting cluster at 192.168.1.43

Connection failed. Invalid token (500).

I rebooted both machines, ran the add-node command again, re-ran the join command, and no dice.

I made sure to set hostnames for both machines, that those hostnames were mapped in /etc/hosts, and double-checked that the time was correct on both machines. No deal.
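Concretely, that meant /etc/hosts entries along these lines on both machines (the controller IP is the one from the join command above; the worker's name and IP are illustrative):

# /etc/hosts on both machines
192.168.1.43 kubecontroller
192.168.1.100 kubenode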

However, after a second reboot of both machines, for whatever reason, the node was able to join the controller. The process took far longer than it should have, but I attribute that to using the Snap version of the service.

After running microk8s kubectl get nodes, I could now see both nodes in my cluster.

Huzzah.

Why make it difficult?

To those involved… it shouldn't be this hard. Seriously. Kubernetes is already a challenging platform to use. Making it this difficult to get up and running reliably sends me back to the simplicity of Docker Swarm.

Sure, I could migrate to a RHEL-based server for my Kubernetes deployments, but Ubuntu Server has been my go-to for years. Don't get me wrong, I don't mind MicroK8s, but it doesn't work out of the box with the likes of Portainer (which is my go-to for things like this). For that, I need to add the Portainer addon with:

microk8s enable community

microk8s enable portainer

Perfect… except not. What's the problem now? After enabling Portainer, you're prompted to access it via NodePort, the address of which you can get with the command:

export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)

But wait… kubectl isn't installed, because I'm using MicroK8s! I have to alter the command like so:

export NODE_PORT=$(microk8s kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)

Then you run the commands (again, modifying the following to include microk8s):

export NODE_IP=$(microk8s kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")

echo https://$NODE_IP:$NODE_PORT

The final command reports the address you use to access Portainer.

Would not or not it’s nice if it labored? It did not occur.

And guess what? This was all from the official documentation (minus the microk8s portion of the commands, which was conveniently left out).

To those responsible for these bits of technology: it shouldn't be this difficult. I realize there might be bad blood between Kubernetes and Canonical, but when it spills over into user space, the real frustration falls on the heads of administrators and developers.

Please, please, please fix these issues so that those who prefer Ubuntu Server can get their Kubernetes clusters up and running as easily as they once could.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Image by Monique Stokman from Pixabay.
