
Commit d507cb7

Author: sangam14
Commit message: update
1 parent dddde0f commit d507cb7

File tree: 1 file changed (+39 −181 lines)


content/docs/kubernetes/kubeadm.md

Lines changed: 39 additions & 181 deletions
@@ -5,149 +5,25 @@ slug: "kubeadm"
weight: 3
---

kubeadm is the reference installer for Kubernetes: it sets up a minimally viable cluster following current best practices. It simplifies the initialization of control-plane nodes, the addition (or removal) of nodes in a cluster, and also handles control-plane and kubelet configuration updates.

kubeadm provides a variety of commands and subcommands that allow you to:

- Create a control plane: `kubeadm init`
- Add a node: `kubeadm join`
- Regenerate certificates: `kubeadm certs renew`
- Upgrade clusters: `kubeadm upgrade`

A typical kubeadm setup has the following characteristics (which are present in many Kubernetes distributions):

- Control-plane components (like the API server or scheduler) running as pods
- Certificate-based communication between the API server and its clients
- kube-proxy to set up Services
- CoreDNS to provide in-cluster DNS

To use kubeadm successfully, the node must already have a kubelet and a container runtime installed:
```bash
#!/bin/bash

sudo apt update
sudo apt install docker.io -y
sudo systemctl enable --now docker

sudo swapoff -a

sudo apt install -y apt-transport-https ca-certificates curl

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt update
sudo apt install -y kubeadm=1.29.1-1.1 kubelet=1.29.1-1.1 kubectl=1.29.1-1.1
```
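Note that `swapoff -a` above only disables swap until the next reboot. A common companion step (a sketch, not part of the original script) is to comment out any swap entries in `/etc/fstab` so the setting survives reboots:

```shell
# Persistently disable swap by commenting out swap entries in /etc/fstab.
# Back up the file first so the change is easy to revert.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/s/^#*/#/' /etc/fstab
```

The `s/^#*/#/` form is idempotent: already-commented lines stay commented with a single `#`.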
### 1. **bridge-nf-call-iptables Does Not Exist**

This error occurs because `bridge-nf-call-iptables` is not enabled, which is necessary for the iptables proxy to see bridged traffic. You need to enable this setting so that network packets are properly forwarded by the host.

**Enable `bridge-nf-call-iptables`**:

1. Load the `br_netfilter` module:
   ```bash
   sudo modprobe br_netfilter
   ```
2. Set `bridge-nf-call-iptables` to 1:
   ```bash
   sudo sysctl net.bridge.bridge-nf-call-iptables=1
   ```
3. To make this change persistent across reboots, add it to your sysctl configuration:
   ```bash
   echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
   ```
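`modprobe` is likewise not persistent across reboots. A commonly used companion (an addition here, not from the original; the drop-in path is an assumption) is a `modules-load.d` file so `br_netfilter` is loaded automatically at boot:

```shell
# Load br_netfilter automatically at every boot.
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
```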
### 2. **IP Forwarding Not Enabled**

Kubernetes requires IP forwarding to be enabled so that containers can communicate with each other and with the outside world.

**Enable IP forwarding**:

1. Set IP forwarding to 1:
   ```bash
   sudo sysctl net.ipv4.ip_forward=1
   ```
2. To make this setting permanent:
   ```bash
   echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
   ```
3. Apply the sysctl settings:
   ```bash
   sudo sysctl -p
   ```
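Instead of appending to `/etc/sysctl.conf` twice, both settings can be consolidated into a single drop-in file (a sketch; the `/etc/sysctl.d/k8s.conf` path is an assumption, any name under `/etc/sysctl.d/` works):

```shell
# Write both Kubernetes-related sysctls to one drop-in file, then reload.
printf '%s\n' \
  'net.bridge.bridge-nf-call-iptables = 1' \
  'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system   # reloads every sysctl drop-in, including this one
```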
### Final Steps

After making these changes, re-run your `kubeadm init` command to proceed with the Kubernetes initialization:

```bash
sudo kubeadm init --cri-socket=unix:///var/run/containerd/containerd.sock
```
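When `kubeadm init` is pointed at containerd as above, another frequent failure mode is a cgroup-driver mismatch between the kubelet and containerd. The fragment below is a commonly applied fix, not something from this commit:

```toml
# /etc/containerd/config.toml (fragment)
# Regenerate the full default config with: containerd config default
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

Restart containerd afterwards (`sudo systemctl restart containerd`) before retrying `kubeadm init`.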
These settings ensure that your system is configured correctly for network traffic management, which is essential for a Kubernetes cluster to function properly. If you continue to see errors, recheck the configuration and confirm all prerequisites are met before initializing Kubernetes.

Once installed, `kubeadm init` will initialize a control plane for your cluster.

```bash
#!/bin/bash

# Step 04) Cluster creation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
# Initialize the control-plane node: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node
# We don't have multiple control-plane nodes.
# Choose a Pod network add-on, then pass --pod-network-cidr and --apiserver-advertise-address:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.11

#--------- Alhamdulillah: Cluster Configuration Completed ------------------#
```
For reference, sample output from an earlier `kubeadm init` run (with the CRI socket passed explicitly):

```
sudo kubeadm init --cri-socket=unix:///var/run/containerd/containerd.sock
I0418 14:20:15.325900 51055 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.29
[init] Using Kubernetes version: v1.29.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0418 14:20:38.012478 51055 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local sangam] and IPs [10.96.0.1 192.168.129.135]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost sangam] and IPs [192.168.129.135 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost sangam] and IPs [192.168.129.135 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.004807 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node sangam as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node sangam as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: lsz2er.aq8iqirypexwftb5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
```
Copy the admin kubeconfig so kubectl works for your regular user:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Message from the kubeadm output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
@@ -166,59 +42,41 @@ Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
Then you can join any number of worker nodes by running the following on each as root:

```bash
kubeadm join 192.168.56.11:6443 --token rfmw9v.exud3pc7riu0vnb4 \
    --discovery-token-ca-cert-hash sha256:d2e00be36a8b5e7b8034800fecd271355e6fd2af2c5d7618b5c98f64b510a0d9
```
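If the bootstrap token has expired or the discovery hash was lost, both can be regenerated on the control-plane node. The commands below are a sketch of the standard procedure, not part of the original commit:

```shell
# Print a fresh, complete join command (creates a new bootstrap token):
sudo kubeadm token create --print-join-command

# Or derive the discovery hash manually from the cluster CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The hash printed by the second pipeline is the value passed to `--discovery-token-ca-cert-hash` with a `sha256:` prefix.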
```bash
# Install the Weave Net CNI add-on:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
# (alternative manifest for newer Kubernetes versions)
kubectl apply -f https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s-1.11.yaml

kubectl get pods -A   # now it shows the containers are up and running

# What did I change: remove the IPv6 settings; in an Azure environment, make
# sure IP Forwarding is enabled on the NIC.

# Now set the Weave Net address space.
# If you set the --cluster-cidr option in kube-proxy, make sure it matches the
# IPALLOC_RANGE given to Weave Net.
# Earlier we passed 10.244.0.0/16 as the Pod network; pass the same value to the
# IPALLOC_RANGE environment variable:
```

```yaml
containers:
  - name: weave
    env:
      - name: IPALLOC_RANGE
        value: "10.244.0.0/16"
```
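Rather than hand-editing the DaemonSet, the same environment change can be applied declaratively with a strategic merge patch. The patch file and its name below are illustrative assumptions, not from the original commit:

```yaml
# weave-ipalloc-patch.yaml (hypothetical filename)
# Apply with: kubectl -n kube-system patch ds weave-net --patch-file weave-ipalloc-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: weave
          env:
            - name: IPALLOC_RANGE
              value: "10.244.0.0/16"
```

Because strategic merge patches key container list entries by `name`, this adds the env var to the existing `weave` container instead of replacing the container list.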
If a worker node fails to join, you can reset it and rejoin. A sample session:

```
workernode@workernode:~$ sudo rm /etc/kubernetes/kubelet.conf
sudo rm /etc/kubernetes/bootstrap-kubelet.conf
sudo rm /etc/kubernetes/pki/ca.crt
workernode@workernode:~$ sudo ss -ltnp | grep :10250
LISTEN 0 4096 *:10250 *:* users:(("kubelet",pid=23209,fd=20))
workernode@workernode:~$ sudo systemctl stop kubelet
sudo systemctl disable kubelet
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.
workernode@workernode:~$ sudo kubeadm reset
W0418 18:46:14.265856 23698 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0418 18:46:15.832641 23698 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
workernode@workernode:~$ sudo kubeadm join 192.168.129.135:6443 --token dfs0h9.pru6ez9v84qbw98k --discovery-token-ca-cert-hash sha256:27e8c63c7355d79dd2b0dc98dadcd46e87b3ef05ab181caaddf2c1b2488ae474
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
```
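The session above can be condensed into a short rejoin script. This is a sketch distilled from that transcript; the endpoint, token, and hash are placeholders to substitute with your own values:

```shell
# Hypothetical values; replace with your control-plane endpoint, token, and hash.
CP_ENDPOINT="192.168.56.11:6443"
TOKEN="rfmw9v.exud3pc7riu0vnb4"
CA_HASH="sha256:d2e00be36a8b5e7b8034800fecd271355e6fd2af2c5d7618b5c98f64b510a0d9"

sudo systemctl stop kubelet
sudo kubeadm reset -f              # non-interactive; reverts a prior init/join
sudo rm -rf /etc/cni/net.d         # reset does not clean CNI config (per its own output)
sudo systemctl enable kubelet      # avoids the Service-Kubelet preflight warning
sudo kubeadm join "$CP_ENDPOINT" --token "$TOKEN" --discovery-token-ca-cert-hash "$CA_HASH"
```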
```bash
kubectl get ds -A
kubectl edit ds weave-net -n kube-system   # add the IPALLOC_RANGE env var, then save
kubectl get pods -A

#---- on worker nodes -------------------#
sudo kubeadm join 192.168.56.11:6443 --token rfmw9v.exud3pc7riu0vnb4 \
    --discovery-token-ca-cert-hash sha256:d2e00be36a8b5e7b8034800fecd271355e6fd2af2c5d7618b5c98f64b510a0d9

#--- I have successfully spun up a Kubernetes cluster -----------#
```
> NOTE

```bash
#!/bin/bash
```
0 commit comments

Comments
 (0)