Our journey through containerization infrastructure has been nothing short of transformative. Before Kubernetes emerged as an industry standard, enterprise application deployment was a labyrinth of manual configurations and high-risk infrastructure management.
The pre-Kubernetes era was characterized by complex, time-consuming server deployment processes. What previously consumed weeks of engineering time can now be accomplished in mere hours—sometimes even minutes. However, seasoned practitioners understand that technological simplification never eliminates complexity entirely.

The Strategic Imperative for Kubernetes
Personal Experience with Kubernetes:
• Let me put it into perspective: back when I was deploying servers manually, spending nights debugging server configs, this technology was a total game-changer.
• Think about it: before Kubernetes, scaling applications was a very complex task. You'd be manually spinning up servers, configuring load balancers, watching resource allocation. One traffic spike and your whole infrastructure could come down. Not anymore.
• What makes Kubernetes so brilliant is how it simplifies the complex. Automatic scaling? Check. Self-healing when something goes wrong? Absolutely. Consistent deployments across different environments? Done.
At King Servers, we’ve helped countless clients set up Kubernetes clusters on our VPS/VDS infrastructure, and we’ve walked through this process many times. In this guide, we’ll break it down into a clear, actionable plan—perfect for anyone with a solid grasp of Linux and container basics who wants to get a cluster up and running. We’ll use the most popular Kubernetes distribution (the official upstream `k8s` packages, installed with kubeadm) and pair it with Ubuntu 22.04 LTS, a rock-solid OS that plays nicely with our servers. Let’s dive in.

Why Kubernetes and King Servers?
Kubernetes gives you automated scaling, self-healing, and consistent deployments out of the box. Pair that with a VPS from King Servers, and you’ve got a setup that’s fast, customizable, and ready to grow. Our servers offer SSD performance, reliable networking, and full root access, which I’ve found invaluable when tweaking things to fit a project’s needs. Whether you’re running a dev environment or a production app, this combo delivers.

Step 1: Setting Up the Infrastructure
Every good build starts with a solid foundation, and for us, that’s the servers. For a basic Kubernetes cluster, you’ll need at least three nodes: one master and two workers. Sure, you could scale up later, but this is a great starting point.
• Picking the Servers. At King Servers, I’d suggest mid-tier VPS plans to begin with—think 2-4 GB of RAM and 2 vCPUs per node. The master can get by with 2 GB, but for workers, I’d lean toward 4 GB, especially if your apps are resource-hungry. Go for 40-50 GB SSDs to keep things snappy. Order them through our site; it’s quick, and you’ll have access in minutes.
• Keeping Track. Once your servers are live, you’ll get IP addresses and login details. It’s best to record these in a secure location:
○ Master: 192.168.1.10
○ Worker 1: 192.168.1.11
○ Worker 2: 192.168.1.12
• Updating the OS. SSH into each server—I use the terminal on my Linux box, but PuTTY works fine too. Make sure Ubuntu 22.04 is up to date:
sudo apt update && sudo apt upgrade -y
Run this on all three nodes. It may take a few minutes while packages download.
• Checking Connectivity. Double-check that the nodes can ping each other (ping 192.168.1.11
from the master, for instance). King Servers assigns static IPs by default, so you’re good there. If the pings work, we’re off to a solid start.
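To check all three nodes in one go, a quick loop might look like this (the IPs are the example addresses from above; swap in your own):

```shell
# ping each node once from the master; replace with your actual IPs
for ip in 192.168.1.10 192.168.1.11 192.168.1.12; do
  if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip UNREACHABLE"
  fi
done
```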

Step 2. Installing the Essentials
Kubernetes needs a few key pieces to function: a container runtime (we’ll use Docker), and the core tools — kubeadm, kubelet, and kubectl. I like doing this manually; it gives me a better feel for what’s happening.
• Installing Docker. On each node, run:
sudo apt install docker.io -y
sudo systemctl enable docker
sudo systemctl start docker
Check the version with docker --version — you should see something like 20.10 or newer.
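One caveat: since Kubernetes 1.24 removed the dockershim, kubeadm talks to a CRI runtime, not to Docker directly. On Ubuntu 22.04 the docker.io package pulls in containerd, which can serve as that runtime. A hedged sketch of pointing containerd at the systemd cgroup driver (the setting kubelet expects on a systemd host; your defaults may already be fine):

```shell
# generate a default containerd config, then flip the cgroup driver to systemd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```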
• Adding the Kubernetes Repo. The official Kubernetes packages aren’t in Ubuntu’s default repos, so let’s add them:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
We’re using version 1.28 here — it’s stable and widely adopted as of March 2025.
• Installing Kubernetes Tools. On all nodes:
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
The hold command locks the versions so an accidental update doesn’t throw a wrench in things; you can confirm it with apt-mark showhold. I always test kubectl version --client to make sure it’s working before moving on. If it spits out a version number, you’re golden.

Step 3. Bootstrapping the Master Node
The master node is the brains of the operation, so we’ll set it up first. This is where things start to feel real.
• Disabling Swap. Kubernetes doesn’t play nice with swap memory, so turn it off:
sudo swapoff -a
Edit /etc/fstab and comment out any swap lines to keep it off after a reboot.
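Two host-prep steps are worth scripting before the init: persisting the swap-off, and the kernel settings kubeadm’s preflight checks look for. A minimal sketch (the sed pattern assumes standard fstab formatting; review the file afterwards):

```shell
# comment out swap entries in /etc/fstab so swap stays off after reboots
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# load the br_netfilter module and enable the sysctls Kubernetes networking needs
sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system
```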
• Initializing the Cluster. On the master node, run:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
This sets the pod network CIDR (Classless Inter-Domain Routing range)—we’ll need it to match the network plugin in the next step. It’ll take a minute or two, and when it’s done, you’ll see a kubeadm join command with a token. Save it somewhere; if you lose it, you can regenerate it on the master with kubeadm token create --print-join-command. It’ll look like:
kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:...
• Setting Up kubectl. To manage the cluster, configure access for your user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the status with kubectl get nodes. You’ll see the master listed as NotReady — don’t worry, that’s because the network isn’t up yet.
This step always gives me a little buzz—the cluster’s taking shape, even if it’s not fully functional yet.

Step 4. Configuring the Network and Adding Workers
A cluster without a network is just a bunch of lonely servers. We’ll use Flannel, a straightforward and reliable networking option.
• Installing Flannel. On the master node:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Make sure the manifest URL is current and compatible with your Kubernetes version (1.28 here). Note that the 10.244.0.0/16 range we passed to kubeadm init is Flannel’s default pod CIDR, so the two match.
Give it a minute, then check again with kubectl get nodes. The master should now show as Ready. You can also confirm the Flannel pods are running with kubectl get pods -n kube-flannel (older manifests deployed to kube-system).
• Joining the Workers. On each worker node (192.168.1.11 and 192.168.1.12), disable swap (sudo swapoff -a) and run the kubeadm join command from earlier, like:
sudo kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:...
Wait a couple of minutes, then back on the master, run kubectl get nodes. All three nodes should appear.
If something’s off—I’ve hit snags before—check the logs with journalctl -u kubelet. Nine times out of ten, it’s a network hiccup or a mistyped token.
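Rather than eyeballing the STATUS column, you can count Ready nodes with a one-liner (this assumes kubectl’s default column layout, where STATUS is the second field):

```shell
# count nodes whose STATUS column reads Ready; expect 3 once both workers join
kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l
```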

Step 5. Testing and Launching an App
The cluster’s up, but let’s make sure it’s not just sitting there looking pretty. Time to deploy a test app.
• Creating a Deployment. On the master, spin up an Nginx instance:
kubectl create deployment nginx-test --image=nginx
kubectl scale deployment nginx-test --replicas=3
This spreads three Nginx pods across your workers.
• Checking Pods. See what’s running:
kubectl get pods -o wide
You’ll get a list of three pods with their IPs and assigned nodes.
• Exposing the App. Make it accessible:
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get svc
Look for the assigned port (say, 32000; NodePort services use the 30000-32767 range by default), then hit http://192.168.1.11:32000
in your browser. If you see Nginx’s welcome page, you’re in business.
Seeing that page load is always a satisfying moment — it means the cluster’s doing its job. When you’re done testing, clean up with kubectl delete service nginx-test and kubectl delete deployment nginx-test.
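For anything beyond a smoke test, you’d normally write a declarative manifest instead of imperative commands. A sketch of roughly what kubectl create deployment and kubectl scale did above (the app=nginx-test label mirrors what kubectl generates by default):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx
```

Save it as nginx-test.yaml and run kubectl apply -f nginx-test.yaml; re-applying the file after edits keeps the cluster in sync with what you wrote down.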

Tips from King Servers
• Monitoring. Set up Prometheus and Grafana to keep an eye on things. It’s worth the effort.
• Backups. Regularly save your /etc/kubernetes configs somewhere safe — external storage or the cloud works.
• Scaling. Need more power? Add nodes with kubeadm join — our VPS scales up fast.
• Support. Stuck? Our team’s here 24/7 via chat or tickets.
• Memory Issues. Allocate more RAM than you think you'll need; containers have a way of using all of it.
• Network Configs. Double-check your pod network CIDR. One wrong digit and everything falls apart.
• Security. For the love of all that is holy, use network policies and pod security contexts.
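To make the backup tip concrete, here’s a minimal sketch (the backup directory and naming are assumptions; adapt them, and copy the archive off the server afterwards):

```shell
# archive the control-plane config directory with a timestamped name
BACKUP_DIR=/root/k8s-backups
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
sudo tar -czf "$BACKUP_DIR/kubernetes-$STAMP.tar.gz" /etc/kubernetes
ls -lh "$BACKUP_DIR"
```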
Final Thoughts: It's Not Rocket Science
Kubernetes isn't perfect. It's complex, demands constant learning, and will make you question your life choices at 2 AM. But when it works? Pure magic.
Your first Kubernetes cluster will take longer than you expect. Practice, patience, and a willingness to break things — that's the secret.
Need more details? Got questions? Drop a comment.
Wrapping Up
Setting up a Kubernetes cluster on King Servers VPS isn’t just a tech exercise — it’s a way to unlock serious potential for your projects. We’ve laid out the steps the way we’d do them: methodically, with an eye on the details that matter. You’ve now got a working cluster ready for development, testing, or even production. Give our servers a spin, fire up your own Kubernetes setup, and see how straightforward it can be with the right guidance. Happy deploying!
Disclaimer: No containers were harmed in the making of this guide. Probably.