K8s本地机器搭建

Monday, March 4, 2024 (edited)

This post was last modified on Tuesday, March 5, 2024. Parts of it may no longer apply; if in doubt, ask the author.


Below are some quick notes from my deployment process; I will flesh out the details later.

  • koh-local-master: 192.168.31.100
  • koh-local-node1: 192.168.31.101
  • koh-local-node2: 192.168.31.102
  • koh-local-node3: 192.168.31.103

cat >> /etc/hosts << EOF
192.168.31.100 koh-local-master
192.168.31.101 koh-local-node1
192.168.31.102 koh-local-node2
192.168.31.103 koh-local-node3
EOF
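The node list above and the /etc/hosts block carry the same data twice. As a sketch, the hosts lines can be generated from a single name-to-IP map (the names and IPs are the ones above; `gen_hosts` is a helper I made up), so adding a node needs only one edit:

```shell
#!/usr/bin/env bash
# Keep the node inventory in one associative array (requires bash 4+).
declare -A NODES=(
  [koh-local-master]=192.168.31.100
  [koh-local-node1]=192.168.31.101
  [koh-local-node2]=192.168.31.102
  [koh-local-node3]=192.168.31.103
)

# Print one "IP hostname" line per node, sorted for a stable order.
gen_hosts() {
  local name
  for name in "${!NODES[@]}"; do
    printf '%s %s\n' "${NODES[$name]}" "$name"
  done | sort
}

gen_hosts
```

Appending is then `gen_hosts >> /etc/hosts` instead of the heredoc.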

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

Note: skip this step on ARM machines; running it there caused a problem where the node IP could not be obtained.

sudo sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Disable the swap partition

sudo swapoff -a && sudo sed -ri 's/.*swap.*/#&/' /etc/fstab
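The sed pattern needs to be .*swap.* so that the whole fstab line gets commented out. A quick dry run of the substitution on sample lines, without touching the real /etc/fstab:

```shell
# Dry-run the fstab edit on sample lines instead of the real file.
swap_line='/dev/mapper/centos-swap swap swap defaults 0 0'
root_line='UUID=abcd / xfs defaults 0 0'

# The swap line should come back prefixed with '#'.
echo "$swap_line" | sed -r 's/.*swap.*/#&/'

# A non-swap line should pass through unchanged.
echo "$root_line" | sed -r 's/.*swap.*/#&/'
```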

Set the time zone

timedatectl set-timezone Asia/Shanghai

systemctl restart rsyslog

Install containerd

# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the yum repo that ships containerd (the Docker CE repo)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install containerd and the CRI command-line tools

yum install -y containerd.io cri-tools

# Write containerd's config.toml

cat > /etc/containerd/config.toml << EOF
disable_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.registry.mirrors."docker.io"]
endpoint = ["https://57c8q0a1.mirror.aliyuncs.com"]
[plugins.cri]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
EOF
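Since cri-tools was installed above, crictl also needs to be told which CRI socket to use, or it falls back to deprecated default endpoints with a warning. A minimal /etc/crictl.yaml, assuming containerd's default socket path:

```yaml
# /etc/crictl.yaml (assumes containerd's default CRI socket location)
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```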

# Start containerd and enable it at boot
systemctl enable containerd && systemctl start containerd && systemctl status containerd

# Have the required kernel modules loaded at boot
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# Kernel network settings required by k8s
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the overlay and br_netfilter modules now
modprobe overlay
modprobe br_netfilter

# Check that the settings took effect
sysctl -p /etc/sysctl.d/k8s.conf
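To confirm that both the modules and the sysctl keys actually took effect, the values can be read back from /proc. A small sketch (`check_k8s_sysctls` is a made-up helper; the two bridge keys only exist once br_netfilter is loaded):

```shell
# Read back the three keys; on the k8s hosts each should print "... = 1".
check_k8s_sysctls() {
  local key
  for key in net/bridge/bridge-nf-call-iptables \
             net/bridge/bridge-nf-call-ip6tables \
             net/ipv4/ip_forward; do
    if [ -r "/proc/sys/$key" ]; then
      printf '%s = %s\n' "$key" "$(cat "/proc/sys/$key")"
    else
      printf '%s missing (is br_netfilter loaded?)\n' "$key"
    fi
  done
}

check_k8s_sysctls
```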

# Configure the k8s yum repo (Aliyun mirror)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the Kubernetes components

# Pin the versions to match the control plane initialized below
# (running an unpinned "yum install kubelet kubeadm kubectl" afterwards
# would pull the latest packages and defeat the pin)
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2

Enable and start kubelet:

systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

Initialize the cluster on the master

kubeadm init \
  --apiserver-advertise-address=192.168.31.100 \
  --control-plane-endpoint=koh-local-master \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5

It is best not to change the pod CIDR here, because flannel, used below as the network plugin, defaults to this range.


On success, kubeadm init ends with the join commands:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join koh-local-master:6443 --token xp7vyp.htdeg9pyhfqg606w \
        --discovery-token-ca-cert-hash sha256:0aab69db7b880b0de517a2b14ebe777734c88d5947de2fe7aa88c036f10b9e00 \
        --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join koh-local-master:6443 --token xp7vyp.htdeg9pyhfqg606w \
        --discovery-token-ca-cert-hash sha256:0aab69db7b880b0de517a2b14ebe777734c88d5947de2fe7aa88c036f10b9e00

Configure kubectl for the current user (the standard post-init steps):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the network plugin

wget -e use_proxy=yes -e http_proxy=http://192.168.31.16:1080 -e https_proxy=http://192.168.31.16:1080 https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

kubectl apply -f kube-flannel.yml
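The pod CIDR warning from the init step exists because kube-flannel.yml hard-codes the same range in its net-conf.json ConfigMap. If you did pass a different --pod-network-cidr, edit this fragment in the manifest to match before applying (fragment from the upstream manifest):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```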

Install shell completion for kubectl, the Kubernetes command-line tool:

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc

Problem: if the host has more than one IP (say a public IP and a LAN IP), flannel may bind the wrong interface. Find the NIC that carries the IP you want (in my case the LAN NIC, eth2), then edit kube-flannel.yml: locate the container args and add - --iface=eth2:

containers:
- args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth2

Problem: created a MySQL Service, but ping mysql-master.koh-mysql.svc.cluster.local fails with a name-resolution error.

Get the cluster DNS address with kubectl get svc -n kube-system kube-dns (the CLUSTER-IP column), then add it as a nameserver in /etc/resolv.conf:

nameserver 10.96.0.10
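Two caveats worth knowing: service names are meant to be resolved by the cluster DNS from inside pods, and ClusterIPs are virtual addresses that often do not answer ICMP, so a failed ping is not by itself proof that DNS is broken. A throwaway debug pod is a more reliable check; a sketch (the pod name and image tag are my choices, the service name is the one from this post):

```yaml
# dns-test.yaml: one-shot pod that resolves the service name in-cluster
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
    - name: dns-test
      image: busybox:1.36
      command: ["nslookup", "mysql-master.koh-mysql.svc.cluster.local"]
```

Apply it with kubectl apply -f dns-test.yaml, then kubectl logs dns-test; it should print the service's ClusterIP. Delete the pod afterwards with kubectl delete pod dns-test.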
