Homework:
1. Deploy a Kubernetes cluster
2. Run Nginx and make sure it can be accessed normally
Homework:
Prerequisite: redeploy Kubernetes and switch the network plugin to Calico.
1. Configure the Pods by hand and supply their default configuration through environment variables:
mydb
    apply a label to it
wordpress
    apply a label to it
Additionally, create a Service resource for each of mydb and wordpress:
mydb is consumed only by wordpress, so its Service uses type ClusterIP
kubectl create service clusterip mydb --tcp=3306:3306 --dry-run=client -o yaml
wordpress may receive clients from outside the cluster, so its Service uses type NodePort
kubectl create service nodeport wordpress --tcp=80:80 --dry-run=client -o yaml
2. Try adding a livenessProbe and a readinessProbe to both mydb and wordpress and test their effect (a probe sketch follows this list);
3. Try adding resource requests and resource limits to both mydb and wordpress;
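A minimal probe sketch for item 2, to be merged into the container specs shown further down; the ports, command and thresholds here are illustrative assumptions, not values taken from the manifests below:
# mysql container of the mydb Pod
        livenessProbe:
          tcpSocket:
            port: 3306
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command: ["sh", "-c", "mysqladmin ping -uroot -p$MYSQL_ROOT_PASSWORD"]
          initialDelaySeconds: 20
          periodSeconds: 10
# wordpress container of the wordpress Pod
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /wp-login.php
            port: 80
          initialDelaySeconds: 20
          periodSeconds: 10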
Mount NFS volumes for MySQL and WordPress
Configure the NFS server
#Install the NFS server packages
[root@k8s-Ansible ~]#apt -y install nfs-kernel-server
[root@k8s-Ansible ~]#mkdir /data/mysql
[root@k8s-Ansible ~]#mkdir /data/wordpress
[root@k8s-Ansible ~]#vim /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/data/mysql 10.0.0.0/24(rw,no_subtree_check,no_root_squash)
/data/wordpress 10.0.0.0/24(rw,no_subtree_check,no_root_squash)
[root@k8s-Ansible ~]#exportfs -r
[root@k8s-Ansible ~]#exportfs -v
/data/mysql 10.0.0.0/24(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/data/wordpress 10.0.0.0/24(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
#Install the NFS client on every node via Ansible
[root@k8s-Ansible ansible]#vim install_nfs_common.yml
---
- name: install nfs-common
hosts: all
tasks:
- name: apt
apt:
name: nfs-common
state: present
[root@k8s-Ansible ansible]#ansible-playbook install_nfs_common.yml
PLAY [install nfs-common] *****************************************************************************
TASK [Gathering Facts] ********************************************************************************
ok: [10.0.0.205]
ok: [10.0.0.204]
ok: [10.0.0.202]
ok: [10.0.0.203]
ok: [10.0.0.201]
ok: [10.0.0.207]
ok: [10.0.0.206]
TASK [apt] ********************************************************************************************
ok: [10.0.0.202]
changed: [10.0.0.205]
ok: [10.0.0.207]
changed: [10.0.0.206]
changed: [10.0.0.204]
changed: [10.0.0.203]
changed: [10.0.0.201]
PLAY RECAP ********************************************************************************************
10.0.0.201 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.202 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.203 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.204 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.205 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.206 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.207 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Pod: mydb
Add an NFS volume; MySQL stores its data on it under /var/lib/mysql/
[root@k8s-Master-01 test]#cat mysql/01-service-mydb.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: mydb
name: mydb
spec:
ports:
- name: 3306-3306
port: 3306
protocol: TCP
targetPort: 3306
selector:
app: mydb
type: ClusterIP
[root@k8s-Master-01 test]#cat mysql/02-pod-mydb.yaml
apiVersion: v1
kind: Pod
metadata:
name: mydb
namespace: default
labels:
app: mydb
spec:
containers:
- name: mysql
image: mysql:8.0
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
- name: MYSQL_DATABASE
value: wpdb
- name: MYSQL_USER
value: wpuser
- name: MYSQL_PASSWORD
value: "123456"
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/lib/mysql
name: nfs-volume-mysql
volumes:
- name: nfs-volume-mysql
nfs:
server: 10.0.0.207
path: /data/mysql
readOnly: false
[root@k8s-Master-01 test]#kubectl apply -f mysql/
Pod: wordpress
Add an NFS volume; WordPress stores its data on it under /var/www/html/
[root@k8s-Master-01 test]#cat wordpress/01-service-wordpress.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: wordpress
name: wordpress
spec:
ports:
- name: 80-80
port: 80
protocol: TCP
targetPort: 80
selector:
app: wordpress
type: NodePort
[root@k8s-Master-01 test]#cat wordpress/02-pod-wordpress.yaml
apiVersion: v1
kind: Pod
metadata:
name: wordpress
namespace: default
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:6.1-apache
env:
- name: WORDPRESS_DB_HOST
value: mydb
- name: WORDPRESS_DB_NAME
value: wpdb
- name: WORDPRESS_DB_USER
value: wpuser
- name: WORDPRESS_DB_PASSWORD
value: "123456"
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/www/html
name: nfs-volume-wordpress
volumes:
- name: nfs-volume-wordpress
nfs:
server: 10.0.0.207
path: /data/wordpress
readOnly: false
[root@k8s-Master-01 test]#kubectl apply -f wordpress/
Verification
#Pods are running normally on the nodes
[root@k8s-Master-01 test]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d13h
mydb ClusterIP 10.103.126.185 <none> 3306/TCP 16m
wordpress NodePort 10.106.191.44 <none> 80:31381/TCP 25m
[root@k8s-Master-01 test]#kubectl get pods
NAME READY STATUS RESTARTS AGE
mydb 1/1 Running 0 7m9s
wordpress 1/1 Running 0 5m54s
#Verify on the NFS server
[root@k8s-Ansible data]#tree mysql/ wordpress/ -L 1
mysql/
├── auto.cnf
├── binlog.000001
├── binlog.index
├── ca-key.pem
├── ca.pem
├── client-cert.pem
├── client-key.pem
├── #ib_16384_0.dblwr
├── #ib_16384_1.dblwr
├── ib_buffer_pool
├── ibdata1
├── ibtmp1
├── #innodb_redo
├── #innodb_temp
├── mysql
├── mysql.ibd
├── mysql.sock -> /var/run/mysqld/mysqld.sock
├── performance_schema
├── private_key.pem
├── public_key.pem
├── server-cert.pem
├── server-key.pem
├── sys
├── undo_001
└── undo_002
wordpress/
├── wp-admin
├── wp-includes
└── wp-trackback.php
Attach PVC-backed volumes to MySQL and WordPress
Pod: mydb
/var/lib/mysql/, with a statically provisioned PV
[root@k8s-Master-01 test]#cat mysql/01-service-mydb.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: mydb
name: mydb
spec:
ports:
- name: 3306-3306
port: 3306
protocol: TCP
targetPort: 3306
selector:
app: mydb
type: ClusterIP
[root@k8s-Master-01 test]#cat mysql/02-pod-mydb.yaml
apiVersion: v1
kind: Pod
metadata:
name: mydb
namespace: default
labels:
app: mydb
spec:
containers:
- name: mysql
image: mysql:8.0
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
- name: MYSQL_DATABASE
value: wpdb
- name: MYSQL_USER
value: wpuser
- name: MYSQL_PASSWORD
value: "123456"
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/lib/mysql
name: nfs-pvc-mydb
volumes:
- name: nfs-pvc-mydb
persistentVolumeClaim:
claimName: pvc-mydb
[root@k8s-Master-01 test]#cat mysql/03-pv-mydb.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-mydb
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: "/data/mysql"
server: 10.0.0.207
[root@k8s-Master-01 test]#cat mysql/04-pvc-mydb.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-mydb
namespace: default
spec:
accessModes: ["ReadWriteMany"]
volumeMode: Filesystem
resources:
requests:
storage: 3Gi
limits:
storage: 10Gi
[root@k8s-Master-01 test]#kubectl apply -f mysql/
service/mydb created
pod/mydb created
persistentvolume/pv-nfs-mydb created
persistentvolumeclaim/pvc-mydb created
[root@k8s-Master-01 test]#kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-mydb 5Gi RWX Retain Bound default/pvc-mydb 19s
[root@k8s-Master-01 test]#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-mydb Bound pv-nfs-mydb 5Gi RWX 21s
[root@k8s-Master-01 test]#kubectl get pods
NAME READY STATUS RESTARTS AGE
mydb 1/1 Running 0 25s
Pod: wordpress
/var/www/html/, with a dynamically provisioned PV
#Set up an NFS server on the Kubernetes cluster
[root@k8s-Master-01 test]#kubectl create namespace nfs
namespace/nfs created
[root@k8s-Master-01 test]#kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nfs-server.yaml --namespace nfs
service/nfs-server created
deployment.apps/nfs-server created
[root@k8s-Master-01 test]#kubectl get ns
NAME STATUS AGE
default Active 2d13h
demo Active 2d2h
kube-node-lease Active 2d13h
kube-public Active 2d13h
kube-system Active 2d13h
nfs Active 101s
[root@k8s-Master-01 test]#kubectl get svc -n nfs -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nfs-server ClusterIP 10.102.79.4 <none> 2049/TCP,111/UDP 2m45s app=nfs-server
[root@k8s-Master-01 test]#kubectl get pods -n nfs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-server-5847b99d99-77bgq 1/1 Running 0 2m49s 192.168.127.5 k8s-node-01 <none> <none>
#Install the NFS CSI driver v3.1.0 on the Kubernetes cluster
[root@k8s-Master-01 test]#curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v3.1.0/deploy/install-driver.sh | bash -s v3.1.0 -
Installing NFS CSI driver, version: v3.1.0 ...
serviceaccount/csi-nfs-controller-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
#Check the rollout: the images cannot be pulled
[root@k8s-Master-01 test]#kubectl -n kube-system get pod -o wide -l 'app in (csi-nfs-node,csi-nfs-controller)'
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-controller-65cf7d587-72ggp 1/3 ErrImagePull 0 109s 10.0.0.205 k8s-node-02 <none> <none>
csi-nfs-controller-65cf7d587-bzc6z 0/3 ContainerCreating 0 109s 10.0.0.204 k8s-node-01 <none> <none>
csi-nfs-node-dsgzn 0/3 ContainerCreating 0 62s 10.0.0.206 k8s-node-03 <none> <none>
csi-nfs-node-fn56s 1/3 ErrImagePull 0 62s 10.0.0.205 k8s-node-02 <none> <none>
csi-nfs-node-hmbnr 0/3 ContainerCreating 0 63s 10.0.0.201 k8s-master-01 <none> <none>
csi-nfs-node-nrjns 0/3 ErrImagePull 0 63s 10.0.0.203 k8s-master-03 <none> <none>
csi-nfs-node-zn59h 0/3 ContainerCreating 0 62s 10.0.0.204 k8s-node-01 <none> <none>
csi-nfs-node-zngz6 0/3 ContainerCreating 0 63s 10.0.0.202 k8s-master-02 <none> <none>
#Load the images onto every node manually
[root@k8s-Ansible ansible]#vim nfs-images.yml
---
- name: images
hosts: all
tasks:
- name: copy
copy:
src: nfs-csi.tar
dest: /root/nfs-csi.tar
- name: shell
shell: docker image load -i nfs-csi.tar
[root@k8s-Ansible ansible]#ansible-playbook nfs-images.yml
PLAY [images] *****************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************
ok: [10.0.0.204]
ok: [10.0.0.202]
ok: [10.0.0.205]
ok: [10.0.0.201]
ok: [10.0.0.203]
ok: [10.0.0.207]
ok: [10.0.0.206]
TASK [copy] *******************************************************************************************************
changed: [10.0.0.205]
changed: [10.0.0.204]
changed: [10.0.0.201]
changed: [10.0.0.203]
changed: [10.0.0.202]
changed: [10.0.0.206]
changed: [10.0.0.207]
TASK [shell] ******************************************************************************************************
changed: [10.0.0.205]
changed: [10.0.0.201]
changed: [10.0.0.204]
changed: [10.0.0.203]
changed: [10.0.0.202]
changed: [10.0.0.206]
changed: [10.0.0.207]
PLAY RECAP ********************************************************************************************************
10.0.0.201 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.202 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.203 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.204 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.205 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.206 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.0.0.207 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@k8s-Master-01 test]#kubectl -n kube-system get pod -o wide -l 'app in (csi-nfs-node,csi-nfs-controller)'
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-controller-65cf7d587-72ggp 3/3 Running 2 (2m9s ago) 9m9s 10.0.0.205 k8s-node-02 <none> <none>
csi-nfs-controller-65cf7d587-bzc6z 3/3 Running 2 (69s ago) 9m9s 10.0.0.204 k8s-node-01 <none> <none>
csi-nfs-node-dsgzn 3/3 Running 2 (80s ago) 8m22s 10.0.0.206 k8s-node-03 <none> <none>
csi-nfs-node-fn56s 3/3 Running 2 (110s ago) 8m22s 10.0.0.205 k8s-node-02 <none> <none>
csi-nfs-node-hmbnr 3/3 Running 0 8m23s 10.0.0.201 k8s-master-01 <none> <none>
csi-nfs-node-nrjns 3/3 Running 1 (3m52s ago) 8m23s 10.0.0.203 k8s-master-03 <none> <none>
csi-nfs-node-zn59h 3/3 Running 1 (3m21s ago) 8m22s 10.0.0.204 k8s-node-01 <none> <none>
csi-nfs-node-zngz6 3/3 Running 2 (81s ago) 8m23s 10.0.0.202 k8s-master-02 <none> <none>
#Storage Class Usage (Dynamic Provisioning)
[root@k8s-Master-01 sc-pvc]#vim 01-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
#server: nfs-server.default.svc.cluster.local
server: nfs-server.nfs.svc.cluster.local
share: /
#reclaimPolicy: Delete
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
- hard
- nfsvers=4.1
#create PVC
[root@k8s-Master-01 sc-pvc]#vim 02-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nfs-sc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
storageClassName: nfs-csi
[root@k8s-Master-01 sc-pvc]#kubectl apply -f 01-sc.yaml
storageclass.storage.k8s.io/nfs-csi created
[root@k8s-Master-01 sc-pvc]#kubectl apply -f 02-pvc.yaml
persistentvolumeclaim/pvc-nfs-sc created
[root@k8s-Master-01 sc-pvc]#kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-csi nfs.csi.k8s.io Retain Immediate false 90s
[root@k8s-Master-01 sc-pvc]#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-mydb Bound pv-nfs-mydb 5Gi RWX 36m
pvc-nfs-sc Bound pvc-7fd8c8e0-82fc-4beb-b51b-0343514cf6c4 10Gi RWX nfs-csi 87s
[root@k8s-Master-01 sc-pvc]#kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-mydb 5Gi RWX Retain Bound default/pvc-mydb 38m
pvc-7fd8c8e0-82fc-4beb-b51b-0343514cf6c4 10Gi RWX Retain Bound default/pvc-nfs-sc nfs-csi 2m39s
[root@k8s-Master-01 test]#cat wordpress/02-pod-wordpress.yaml
apiVersion: v1
kind: Pod
metadata:
name: wordpress
namespace: default
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:6.1-apache
env:
- name: WORDPRESS_DB_HOST
value: mydb
- name: WORDPRESS_DB_NAME
value: wpdb
- name: WORDPRESS_DB_USER
value: wpuser
- name: WORDPRESS_DB_PASSWORD
value: "123456"
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/www/html
name: nfs-pvc-wp
volumes:
- name: nfs-pvc-wp
persistentVolumeClaim:
claimName: pvc-nfs-sc
[root@k8s-Master-01 test]#kubectl apply -f wordpress/
[root@k8s-Master-01 chapter5]#kubectl get pods
NAME READY STATUS RESTARTS AGE
mydb 1/1 Running 0 46m
wordpress 1/1 Running 0 29s
[root@k8s-Master-01 chapter5]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d14h
mydb ClusterIP 10.106.130.102 <none> 3306/TCP 54m
wordpress NodePort 10.110.182.248 <none> 80:32713/TCP 41s
Verification
[root@k8s-Ansible data]#ll mysql/
总用量 99680
drwxr-xr-x 8 systemd-coredump root 4096 11月 11 12:09 ./
drwxr-xr-x 5 root root 4096 11月 11 12:07 ../
-rw-r----- 1 systemd-coredump systemd-coredump 56 11月 11 12:07 auto.cnf
-rw-r----- 1 systemd-coredump systemd-coredump 3024100 11月 11 12:09 binlog.000001
-rw-r----- 1 systemd-coredump systemd-coredump 157 11月 11 12:09 binlog.000002
-rw-r----- 1 systemd-coredump systemd-coredump 32 11月 11 12:09 binlog.index
-rw------- 1 systemd-coredump systemd-coredump 1680 11月 11 12:07 ca-key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 11月 11 12:07 ca.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 11月 11 12:07 client-cert.pem
-rw------- 1 systemd-coredump systemd-coredump 1676 11月 11 12:07 client-key.pem
-rw-r----- 1 systemd-coredump systemd-coredump 196608 11月 11 12:09 '#ib_16384_0.dblwr'
-rw-r----- 1 systemd-coredump systemd-coredump 8585216 11月 11 12:08 '#ib_16384_1.dblwr'
-rw-r----- 1 systemd-coredump systemd-coredump 5711 11月 11 12:09 ib_buffer_pool
-rw-r----- 1 systemd-coredump systemd-coredump 12582912 11月 11 12:09 ibdata1
-rw-r----- 1 systemd-coredump systemd-coredump 12582912 11月 11 12:10 ibtmp1
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 11月 11 12:09 '#innodb_redo'/
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 11月 11 12:09 '#innodb_temp'/
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 11月 11 12:07 mysql/
-rw-r----- 1 systemd-coredump systemd-coredump 31457280 11月 11 12:09 mysql.ibd
lrwxrwxrwx 1 systemd-coredump systemd-coredump 27 11月 11 12:08 mysql.sock -> /var/run/mysqld/mysqld.sock
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 11月 11 12:07 performance_schema/
-rw------- 1 systemd-coredump systemd-coredump 1676 11月 11 12:07 private_key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 452 11月 11 12:07 public_key.pem
-rw-r--r-- 1 systemd-coredump systemd-coredump 1112 11月 11 12:07 server-cert.pem
-rw------- 1 systemd-coredump systemd-coredump 1676 11月 11 12:07 server-key.pem
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 11月 11 12:08 sys/
-rw-r----- 1 systemd-coredump systemd-coredump 16777216 11月 11 12:09 undo_001
-rw-r----- 1 systemd-coredump systemd-coredump 16777216 11月 11 12:09 undo_002
drwxr-x--- 2 systemd-coredump systemd-coredump 4096 11月 11 12:09 wpdb/
[root@k8s-Ansible data]#ll wordpress/
总用量 8
drwxr-xr-x 2 root root 4096 11月 11 12:07 ./
drwxr-xr-x 5 root root 4096 11月 11 12:07 ../
Provide values for the MySQL and WordPress environment variables via Secret references
[root@k8s-Master-01 test]#kubectl create secret generic mysql-secret --from-literal=root.pass=654321 --from-literal=db.name=wpdb --from-literal=db.user.name=wpuser --from-literal=db.user.pass=123456 --dry-run=client -o yaml > secret.yaml
[root@k8s-Master-01 test]#cat secret.yaml
apiVersion: v1
data:
db.name: d3BkYg==
db.user.name: d3B1c2Vy
db.user.pass: MTIzNDU2
root.pass: NjU0MzIx
kind: Secret
metadata:
creationTimestamp: null
name: mysql-secret
[root@k8s-Master-01 test]#kubectl apply -f secret.yaml
secret/mysql-secret created
[root@k8s-Master-01 test]#kubectl get secret
NAME TYPE DATA AGE
mysql-secret Opaque 4 14s
[root@k8s-Master-01 test]#cat mysql/02-pod-mydb.yaml
apiVersion: v1
kind: Pod
metadata:
name: mydb
namespace: default
labels:
app: mydb
spec:
containers:
- name: mysql
image: mysql:8.0
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: root.pass
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.name
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.name
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.pass
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/lib/mysql
name: nfs-pvc-mydb
volumes:
- name: nfs-pvc-mydb
persistentVolumeClaim:
claimName: pvc-mydb
[root@k8s-Master-01 test]#cat wordpress/02-pod-wordpress.yaml
apiVersion: v1
kind: Pod
metadata:
name: wordpress
namespace: default
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:6.1-apache
env:
- name: WORDPRESS_DB_HOST
value: mydb
- name: WORDPRESS_DB_NAME
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.name
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.name
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.pass
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/www/html
name: nfs-pvc-wp
volumes:
- name: nfs-pvc-wp
persistentVolumeClaim:
claimName: pvc-nfs-sc
[root@k8s-Master-01 test]#kubectl apply -f mysql/
service/mydb configured
pod/mydb created
persistentvolume/pv-nfs-mydb unchanged
persistentvolumeclaim/pvc-mydb unchanged
[root@k8s-Master-01 test]#kubectl apply -f wordpress/
service/wordpress configured
pod/wordpress created
[root@k8s-Master-01 test]#kubectl get pods
NAME READY STATUS RESTARTS AGE
mydb 1/1 Running 0 4m12s
wordpress 1/1 Running 0 37s
[root@k8s-Master-01 test]#
[root@k8s-Master-01 test]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d16h
mydb ClusterIP 10.106.130.102 <none> 3306/TCP 174m
wordpress NodePort 10.110.182.248 <none> 80:32713/TCP 121m
Create and run an nginx Pod
The Pod loads its certificate and private key from a Secret volume and its configuration from a ConfigMap volume
#Generate the Secret manifest; note that the certificate www.shuhong.com.pem was assembled with cat www.shuhong.com.crt ca.crt > www.shuhong.com.pem
[root@k8s-Master-01 ssl-nginx]#kubectl create secret tls nginx-certs --cert=certs.d/www.shuhong.com.pem --key=certs.d/www.shuhong.com.key --dry-run=client -o yaml > 02-secret-ssl.yaml
#Generate the ConfigMap manifest
[root@k8s-Master-01 ssl-nginx]#kubectl create configmap nginx-sslvhosts-confs --from-file=nginx-ssl-conf.d/ --dry-run=client -o yaml > 03-configmap-nginx.yaml
[root@k8s-Master-01 ssl-nginx]#tree
.
├── 01-ssl-nginx-pod.yaml
├── 02-secret-ssl.yaml
├── 03-configmap-nginx.yaml
├── certs.d
│ ├── ca.crt
│ ├── ca.key
│ ├── ca.srl
│ ├── crts.sh
│ ├── v3.ext
│ ├── www.shuhong.com.crt
│ ├── www.shuhong.com.csr
│ ├── www.shuhong.com.key
│ └── www.shuhong.com.pem
└── nginx-ssl-conf.d
├── myserver.conf
├── myserver-gzip.cfg
└── myserver-status.cfg
[root@k8s-Master-01 ssl-nginx]#cat 01-ssl-nginx-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
name: ssl-nginx
namespace: default
spec:
containers:
- image: nginx:alpine
name: ngxserver
volumeMounts:
- name: nginxcerts
mountPath: /etc/nginx/certs/
readOnly: true
- name: nginxconf
mountPath: /etc/nginx/conf.d/
readOnly: true
volumes:
- name: nginxcerts
secret:
secretName: nginx-certs
- name: nginxconf
configMap:
name: nginx-sslvhosts-confs
optional: false
[root@k8s-Master-01 ssl-nginx]#cat 02-secret-ssl.yaml
apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUdDVENDQS9HZ0F3SUJBZ0lVREpJM3Axd0k2L2tOYzNzOVRPazNYSmFZSEI4d0RRWUpLb1pJaHZjTkFRRU4KQlFBd2NERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdNQjBKbGFXcHBibWN4RURBT0JnTlZCQWNNQjBKbAphV3BwYm1jeEVEQU9CZ05WQkFvTUIyVjRZVzF3YkdVeEVUQVBCZ05WQkFzTUNGQmxjbk52Ym1Gc01SZ3dGZ1lEClZRUUREQTkzZDNjdWMyaDFhRzl1Wnk1amIyMHdIaGNOTWpJeE1URXhNRFl6TlRBNVdoY05Nekl4TVRBNE1EWXoKTlRBNVdqQndNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0F3SFFtVnBhbWx1WnpFUU1BNEdBMVVFQnd3SApRbVZwYW1sdVp6RVFNQTRHQTFVRUNnd0haWGhoYlhCc1pURVJNQThHQTFVRUN3d0lVR1Z5YzI5dVlXd3hHREFXCkJnTlZCQU1NRDNkM2R5NXphSFZvYjI1bkxtTnZiVENDQWlJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dJUEFEQ0MKQWdvQ2dnSUJBTTV0Z0I4KzZDM29YSDNQL3BQaDErRlRmbDdITDMxTmFWNC9rdVg0VzAxNFJ1YkcxTFUxVmFoZwpRK3NadHByeFRKUjliMlZWQm8vQ1lpUDRxR3MxNDh2dDJuZ0s4Z1RyZFZ3TnpIaEtPWC9tOUc0bFVRcFRCODk3CnNCVk1xWGZWZ2p4SENwOXJIbFhEZzYxRkxLa2FWVWgxNDloZXA3VzV3RFJxMk9iREZSNjltNzdmRXdvUStZZk8KeDI5QjludGZPZklpYzJBZzJSUUlwQldOcCtISWE3RlBwSUNOL1BuWTFqWDJCT09FSi9UT2oxSElMdU1CV0VMagpjR3RKclYvaUhPLzQ4aE1rOTZrMkp4RWovZnFPeGJrMjc1eENOYUxwRGNWeEtXRzZkcjd5Ry9WaFhSdFduMkp5CnliR0RieU8vM2QyNzRrTklKL3hsM0RnUXI1cHd0Zjg5N3RQNmJrd2lDQnZycnBBVzd4RnRJQXMxSnQ0WjRsSE4Ka2pPKy9YRGV2YjBpNTNkUXNTT1p1MWJNRlNQYVdKT0pNazg3a0pyTyt5aFAvUEdPdWhEU0hERHZ6YWxnVTFqYQpDaWVGKzJqQkg4UGRmRFJSMXAxcFZXZkNlZ0xTdjVoZFh2NXFCMkxGaytNUVFVUFE3cXNCR0Q2dEtlQ3k4Z2pOClBEOXVkenBwQ0VuUEcvTjhrWGRPQlVLZGJJajRhK0pJUy9WOEN0SWFGb3dwd3pma0s2Vlp6d2hYbWRzZ2ovWU4KUlJiQnBhQTdEZjd6K3UxNnlFa044RTZONko3blZMcVBSVUhEOWN5N1NsSUVhR2c2aXFxVFJ4TGxSTnE4UUhwTgpnSjdPZjhKTC9oNFhMZDhDNExlYzBYdHh0QjB0YkFrODE4WTM2TVlQVEE3VE9oUnRYaTluQWdNQkFBR2pnWm93CmdaY3dId1lEVlIwakJCZ3dGb0FVcGhPMlg2ODBSN21KREZVQW5ZUVA4TENMRkR3d0NRWURWUjBUQkFJd0FEQUwKQmdOVkhROEVCQU1DQlBBd0V3WURWUjBsQkF3d0NnWUlLd1lCQlFVSEF3RXdSd1lEVlIwUkJFQXdQb0lQZDNkMwpMbk5vZFdodmJtY3VZMjl0Z2hab1lYSmliM0l1ZDNkM0xuTm9kV2h2Ym1jdVkyOXRnaE4zZDNjdWQzZDNMbk5vCmRXaHZibWN1WTI5dE1BMEdDU3FHU0liM0RRRUJEUVVBQTRJQ0FRREluQzQ0NlR1N0YvM2IwOXNPV0NXVUVmMVoKRVc2NG1XMGcrbkpCRktHSE5DYWtQSVRyeXZnKzZNTGdYUndaK1pwR0ppc3RGVUJCeDZZM2ZCbjNva01tOVM5MAovYVpwNlpaam0xSFNmQ0JWZlcyaExCR1RLUkNiOGNhL3ZWOG15LzRNSGNzMjJzQUpjYUFVWWZmeEdJOHFGVDlnCm40ZUdlQzNKMUIvSWtPSDY2ckFvZUF5TTJ0RmN2MFJhakpHWGVaZG01T1IydkZieG83SG1QblBNa0Q0YjEzTysKbnppVW1wY2lZenl2Um05dys3VWhZb2pNTHN4Y1JYVTVVZ0RSVXZyM05nT3RDaVpNa0VyK2pyamx4M21BM3RvLworTmFkUFRucFVMOS9rNFhDYzBKVVNJekgrZHlGcU00TTAvZGFSVHlyWG1QNGR4aGNoOXVqeEdTSGx4WW10ZjEvClRqVXRIaVBJbTBrdk9sTG5xQUcvZ0toRk53N25WQ2pMTFBhM3dUWjhKVVkrQ25aeWJ3NWFrb0puZWdHbjRoM2oKdDdWaDVHSXhmV0NJdDFYMGtqajZaQ1MwZzNEQlRDK0ZLaWU1V0RGdDRwSUJRUzlDeGZlelRacEN2NDZCcFRVMQo3UjhaMk03RHUvaUhaZnZNUk4rZStHaCtnMjZZckp0eTFWWitWeUorRnQ4anJMakdXK3BBM0tUZDFqajhDbjFpCk85YWltTUFGSDVBYTFlSkZyL2JEeDNXYTdQTU5JaGlENHFYUXR5Q2lOT0JnZDNGWjVHYWNlVGE4dkJZaGhuS0sKR280RFZLaWszMGZCcmFPWTRqZ1BDaEVYZjlPV095Qjl0OVZldHdBZDFKek1zT0lpeWZlK0oxY2ZvS2VxWk9veQpzQ1U4cndaVWdNNEx3Q2JaNGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlGd1RDQ0E2bWdBd0lCQWdJVVZiVjZRdlg4Y3BqOUdPZytpNWxuUGRZWDBLTXdEUVlKS29aSWh2Y05BUUVOCkJRQXdjREVMTUFrR0ExVUVCaE1DUTA0eEVEQU9CZ05WQkFnTUIwSmxhV3BwYm1jeEVEQU9CZ05WQkFjTUIwSmwKYVdwcGJtY3hFREFPQmdOVkJBb01CMlY0WVcxd2JHVXhFVEFQQmdOVkJBc01DRkJsY25OdmJtRnNNUmd3RmdZRApWUVFEREE5M2QzY3VjMmgxYUc5dVp5NWpiMjB3SGhjTk1qSXhNVEV4TURZek5UQTNXaGNOTXpJeE1UQTRNRFl6Ck5UQTNXakJ3TVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNBd0hRbVZwYW1sdVp6RVFNQTRHQTFVRUJ3d0gKUW1WcGFtbHVaekVRTUE0R0ExVUVDZ3dIWlhoaGJYQnNaVEVSTUE4R0ExVUVDd3dJVUdWeWMyOXVZV3d4R0RBVwpCZ05WQkFNTUQzZDNkeTV6YUhWb2IyNW5MbU52YlRDQ0FpSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnSVBBRENDCkFnb0NnZ0lCQU9nNnhTblhiUDNEYU
ltNENjZkNIdFN0YUhPRFhtOUpyZG4wdjBpcC82dTBYcFViZ0tJZlZ5VUQKMUQvU0RxRzQ0S3ZOamxxVkpTaUR2QTB0Y3UwWWhIOTJhL2xyaExSNXBabnBMemxKREt5azM3OXlGMU8yNWpkdwprV0JzR3RXN2pUVmYyNldhMUdONkNzOVlkdFRCT05OSzJ2cVJYaVowemFncCtDc2wvVkJNTldXMGYzSUtrWnV0Ck81T1I0YVAyUmJzQUVIOVNVZ0hUNDJkQXIySENWVWxGOW9NSmhlenJ5RGEycjAwWWp1dDBoYmdlYXFLVDlKS3YKa1RKU1hnUVhaTVZDMDgyN2k1dkFIdk5CaHVOekpFQ2x4b000VUlUL0kwMUhjWWx6ejIvNWppeGpRK29uSEVNTApXQ0JmcGxxK1QzcURuRVIwd01pRmc5bURJcW9WZXJmd0pQRVlqOEtFQTFiQisxck1kRWI2RjZHOFN4UGVBZGJHCkNLUGRIRSttZU8zYk5EZDJYZDFYbmVrK2Y2UHVBbzhYWUlvWktJWk9Eai8rdGdWNUdMUWVIODJLd1lFQy9aaUwKWkdmUUJjVjNZaUkvek1VVVY2SXVIN21jUjArOXZXdm9uVnpPcmZFYWNYNUpKb3ZvWnIvUGx2UUR6WmRWZWZsYQpXbFJ2QTZ3ankxQlVoSFBoR0xqb0xNRUNtdVNYOWU2dU5YN0ZFem1oajJTRkwwOE8rbWxkak5seEZJeXpOZEVtCkI4S1lqbFEzZVpoZFFtRjQ3aEVUVG9aTjlGRlV1M1dzbjRySzZOU3IwbXFsTmhCYXppN1VDUUFSdFo3T09weHgKMUttcThrRENJcXdBN1RMbG1zWkFTaEhadHowV0Fkc3RwaSsvL1pEWjFndm9zYldnZ2NSTEFnTUJBQUdqVXpCUgpNQjBHQTFVZERnUVdCQlNtRTdaZnJ6Ukh1WWtNVlFDZGhBL3dzSXNVUERBZkJnTlZIU01FR0RBV2dCU21FN1pmCnJ6Ukh1WWtNVlFDZGhBL3dzSXNVUERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkRRVUEKQTRJQ0FRQjVEY21yTFBINDU3QU5rWSt0Rmp5UUQ5Y0NJb044aFN5NHJ4WDZhSlIyU0FIME15UmROWlJ4Z0c5NQp5aVF2eFZsZkFEeDZMRFRTejUrVW1YVTUzeDczQWZoNnUrNjdhbmtnUXVTa1BGRXlxUmZodjMwWlJhU3ZFUjFuCjhXSDJTVGJaVzJ5Z3NrVUFyYkNGSkNySjM3Q0xqK0dLRHI2N1V2QzNhb2Y2T0dXN2IyYUdERy83MWZWNkNmQ3AKczlMMFNBL21RMmRlbFpDSFV2aU5mZGo2b2JLMm9USlJjd0s3U0M3Mzhhc0RnS0QxV200WUlrL1NHbHU3VEVZMApqRW1nSEdtRW1jamlVRzlpVGk5MG52dFVHRkVMZUs0SGJtRGFFM0VQakxXWEVqZ1lBdzNnYm53VGlwbmpKSFg1Cmt5eHozakNIKzlURXVQMXQ1OGpXOEJyTmExdGo1aHNzNHpVenNFZjNaNnlLVmdqK3ZDVWlYV1ZqSnRqRmNsTEQKdlYzTmlXTVFvMXFyL2xEMTZNT1cxV3dJQUYxYmc2VUNjV0dhOHFyTFY4UVM4djBYT1VLVFBOZ0xPQ1Y3ZFA1cwpLOTc5Z20wclFEUzB4TGZWUEo1aUlhYXNRTlRyMnd4M2NFeDhGUE1rWkZmUTdqZVh6YmVBQmt3WDdINjdEeVl6CkNGVndFdnQxQXVkZy9waUZLbUcrdnhEUFpHVHY5dU9KWkFrL2N3SVlwWjZ1MmFFaTAyRllPbFNzWTYzNlZKRE0KL25CL2t6akZOQUJERmNnOGVnYTZlS053RVVzVkNJdGJtU05xRGFBOTczMmlOcnJNMUcrWUZUNDFiTmF3N0syZQpUME9KM0hpNnU4ZERsMGQ2S21DWlFXTnlKR0JGbEN0TTNUbk5qcUlOeXpBWEU4d2oyZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS2dJQkFBS0NBZ0VBem0yQUh6N29MZWhjZmMvK2srSFg0Vk4rWHNjdmZVMXBYaitTNWZoYlRYaEc1c2JVCnRUVlZxR0JENnhtMm12Rk1sSDF2WlZVR2o4SmlJL2lvYXpYankrM2FlQXJ5Qk90MVhBM01lRW81ZitiMGJpVlIKQ2xNSHozdXdGVXlwZDlXQ1BFY0tuMnNlVmNPRHJVVXNxUnBWU0hYajJGNm50Ym5BTkdyWTVzTVZIcjJidnQ4VApDaEQ1aDg3SGIwSDJlMTg1OGlKellDRFpGQWlrRlkybjRjaHJzVStrZ0kzOCtkaldOZllFNDRRbjlNNlBVY2d1CjR3RllRdU53YTBtdFgrSWM3L2p5RXlUM3FUWW5FU1A5K283RnVUYnZuRUkxb3VrTnhYRXBZYnAydnZJYjlXRmQKRzFhZlluTEpzWU52STcvZDNidmlRMGduL0dYY09CQ3ZtbkMxL3ozdTAvcHVUQ0lJRyt1dWtCYnZFVzBnQ3pVbQozaG5pVWMyU003NzljTjY5dlNMbmQxQ3hJNW03VnN3Vkk5cFlrNGt5VHp1UW1zNzdLRS84OFk2NkVOSWNNTy9OCnFXQlRXTm9LSjRYN2FNRWZ3OTE4TkZIV25XbFZaOEo2QXRLL21GMWUvbW9IWXNXVDR4QkJROUR1cXdFWVBxMHAKNExMeUNNMDhQMjUzT21rSVNjOGI4M3lSZDA0RlFwMXNpUGhyNGtoTDlYd0swaG9XakNuRE4rUXJwVm5QQ0ZlWgoyeUNQOWcxRkZzR2xvRHNOL3ZQNjdYcklTUTN3VG8zb251ZFV1bzlGUWNQMXpMdEtVZ1JvYURxS3FwTkhFdVZFCjJyeEFlazJBbnM1L3drditIaGN0M3dMZ3Q1elJlM0cwSFMxc0NUelh4amZveGc5TUR0TTZGRzFlTDJjQ0F3RUEKQVFLQ0FnQndVWDVVQWZ0ODl5QlVTSGJoYWhIM2hXR09HbDBKbGJST1Z0TU1GQzFCb3I4WlZIaHFQS0hsNHJNeAoyYVRVKzVSS2UxSEFWaG9pNElaYndqR0pYQ0lkVk1iNWFDTTFjQlJFU1RISEJjUHhodTNhZksxeXE2amxTUXlQCkdrNWZhS25iT0dCY1M0R084cm5UN242VmFFR2RFcUF0bTVzdk11bVUyOG8zRFZDUmtHT000SDNRalZub2ZpZGYKcndsNUtXQXpFbkdxalZUd0pKOTdKcitCQjNjcFhBZEs5M2I5VHZHSEhOeWVHc3RPMVpGLzB5ZEgxdlI2T0p4egprL3drM3JnV0RtTlE3VjFnRVpvQ0pvNUw1YUZKM00xVlBXVkh4Zno3UUU1ZTRZRTQ5aTBtUDVyVWhEWm03OFEwCnRTb2t6b0hlNHhzQ3R1RWk0UjJJMS9Oa1dnMTc0SDIrUktNcDh3NDIwN2RXQkJVSGwrSTJsL0M5REw2YnQyR3MKSlFTY0JnejR6a2wrbWZieTZJMFQrbUdpcVNXc3JGaW5VSXowM2RodHlLZ3pvQzRBRWNvZ21mU3Y4ZzRCeUNaMApZeVFNSmlZRnZ1NE9Dd2RmTUtMa1pDc05BeDUzUDBxWVpkVjJoeDNoV3hwM0FlMTVQKzZPOHFUTDhDWTliZitPClhOUkRWN0VEMDYrQWpBOVhiYXN3bjRuZXpIbDNKSFFqYVpjWmdqcVV1Nm5DRFd1UFBpRnhRd3IrakwxUGtCa0gKTittZXRoWHk2K1dQMFR0ZUxkNzhaMFJ0SUFycnF3cVF4V2JZU2V1R1JzeFk2OGdncGM0c3dFLzkvZWZWTGhQagpZZGNzM3lnOWZwL2Y0RkxzMTJnMzVQMG1OUVhPRVpybmY2ZGpoMjErUm1TRUdNaUpRUUtDQVFFQThNSXZyM0JlCkZBa3lWKy9BNmxBS2ovbldDRVJ6RXRHQ0E2Zk54dE1TN0svS3Nabmwyc3pLUWNXTHJxZnhQKzVRNTZjVDVPRk8KVTZoR0hsYUlmTmVRWWVWc3hCUG1kZXVGZUZOaG1GTUJ2Q25nbnpzb3o4YzhNeGhOQmpWdk9HdDhhUDlrTkVEMQpxOGhSN0YyR2dOTU5GdjdvRm50M3pQdFlYWmVKcHJIN3pEcEQrRnl6NHRsQ1B3blJKVzVFdHZqcm4rdlhNVEFQCkFKV0xnazNTR2Y1NTZmaHA1YlVjVFJBazJ0MU5FRDVUU2xGWFMzemZrTy9vanQ2ZFRZYUR3T0ttUDhMQ3NPZ0IKU1luTjQ4Tkcra0VYUkRsL1N4YUc2NEVSZi9CYVduYi81UFZLbXNyZFJ1cEh4ZEZycWVMYlJLc0c1ekJPeURCVgphVlZPMjY1UjE5TlRCd0tDQVFFQTIzN3dDMUUzdnlkUFpFMzZJWkNtUGFVbE53NExyYWI3SnBFRHU0M1RmQUlqCmNDeDVLY0RvR0wrRno4bUhKeldkL0M5YzlQbWZBYzdYUHNReHpXUTcwbWE0VDNDTXJBLy9ZM2JHVWFVNFVWM1gKakhxb1UvZVoxelZMR2NYTW9CVXdWbnY5cHUwVlJrWW5jd3c4Y0JaZUhqRW05OVMweGxUSUhISGJtclc2c2xicwp2aW5WeHJ2QUQza2VZd1Q5dER2bzNnSXp2dHV3QURZVTQrcmtuVndxcEdmMURyS28zOVdCUVNsVHluUGQzOXFtCk1kcWxPcWJvN3dhMDFRR20zN2Qzc1BxWTNLNmFGUnRkeThFMW5ET0VvcHJXQTBOMERydkJMYkxVM3NpYjRJRVgKMXRWd01NM2IvYVVkVGZPZldlNjlTa1N1WTNXWUVCMDBEZGF0aEx0SW9RS0NBUUVBa0p2U2tJbnB1QmNlQ2Z1VAo0Q2xiYnNjZGE3SFJmSWdpazVlQzNkMkNESEE2U3hxcEdSYlFsVmpXWVgyMlJqUWFuRW1haFd0ZTVKaTZKUmJNCnZFK3VCVjhNU1dtNmp6Rjc1WjRQakxLdTVCb3pOUEVQdmwxcEp6ZDliREZFTUpzL0NzSDdxZmNxbUplbHZWY2YKcHRrZGo2WmtPTHpJWkhMRHpOTnNkcGVKS2s0RTdYU2hCNngvUWVYZm5aL3gzZ1Q5WWYwQ01DVXhuYVExTzNzSwpxMXBTVjlwQm9SdDdlRDR1Sk5ldnBnWUplU1lLVE9rZ1Q2b0tBV1p0RFZleVkzUy9icVRJMUFGR1pLbEU1WDB4CmNMY1FCb2FTa3NOaEhxdFRtNGorZkQvbHk5d1poNGc2Q0pKSHNlWHJ5UXJkc1EwWkJGdmJ0aHB4OHVhdWl2elYKWTlFbW1RS0NBUUVBaS9rQ0dTVjgrR0NZSjIzMm9lcjlxSGdsS0Z2RHBNVEVpbzZWbzhoSTRsNzJ2SFVQKzBseQplVDNCbG9WOHM4dGthVXJHNjg0MzBVNVhRMGFZUDlPNHRtOGRBRVBVNFhEK095Nm1QN0N1SG0xS3BPSWZjQlNJCnZZM1Z5NlN3M2pGRTl4SHc2cjlyL3JtRU5NRE
wxZXJkc0VGR0NXdFNzTnVtRlVXaWRxR0hZbTArWWZLSnlrYzIKcm1kZHNtV2ZhSTEvN2Z2WGhkSFJCZ0YzQnZWblB0Wmt0eDA0VUZ3c2h6ay9TUStTeUp0bEZYajQzUGdDd0VscQpaK3VONi93MnI1bnZNU1JOMFFWamF5eGRmeTlDQWM5MHVNRW0wMFB6d2VXSHhwMnhWRFQzK280NFpwOE1BWU4xCjArVzByMTQ1ODM3a3BYVHhCS29jQThLcnpGdG5vaXBRb1FLQ0FRRUFtRGxoTHN1bXBpLzNPd3pRMStJcllUQ1YKSXUzNENlVEZRTnFOMXRPeGhweDMyQVh0ZnNoZ2kxNGgzeko4RDBSdFovc1lkbkxNNGxlTTR5V1kvWEp6dXRQOApPdFJkY0hYSDhFUFZWbFdtc2xBMmNKblQxSktjUk9EWW9PTnFUU2ZGeUpTSnFMSDIxNGJ4Szc0VStNWnVmNTBHCkFJbmduMU5wVjNxbEp5VWYyRnZYOExNV1p4eVNHSElBd1Vianl0NUJva0hOUTZ2R0s3RzVLV29ScWo3VUNRbTcKdVlDMnhHS0dicTJ1SnVHd09OS1E4czVKRU56Q0ovR1R5cG9veFNPQ0VaYmxmeWhFUzVIcllXYktReVVJbXdvdAprckVsWEtkaEMxVm1tT3ZCZUFET1NyWjZ6MS96QTd5YUM1TlJLRjRhZWZtaTFqakxrU0NJWGU4bGY3b2V6QT09Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: Secret
metadata:
creationTimestamp: null
name: nginx-certs
type: kubernetes.io/tls
[root@k8s-Master-01 ssl-nginx]#cat 03-configmap-nginx.yaml
apiVersion: v1
data:
myserver-gzip.cfg: |
gzip on;
gzip_comp_level 5;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/xml text/javascript;
myserver-status.cfg: |
location /nginx-status {
stub_status on;
access_log off;
}
myserver.conf: "server {\n listen 443 ssl;\n server_name www.shuhong.com;\n\n
\ ssl_certificate /etc/nginx/certs/tls.crt; \n ssl_certificate_key /etc/nginx/certs/tls.key;\n\n
\ ssl_session_timeout 5m;\n\n ssl_protocols TLSv1 TLSv1.1 TLSv1.2; \n\n ssl_ciphers
ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE; \n ssl_prefer_server_ciphers
on;\n\n include /etc/nginx/conf.d/myserver-*.cfg;\n\n location / {\n root
/usr/share/nginx/html;\n }\n}\n\nserver {\n listen 80;\n server_name
www.shuhong.com; \n return 301 https://$host$request_uri; \n}\n"
kind: ConfigMap
metadata:
creationTimestamp: null
name: nginx-sslvhosts-confs
[root@k8s-Master-01 ssl-nginx]#kubectl apply -f 03-configmap-nginx.yaml
configmap/nginx-sslvhosts-confs created
[root@k8s-Master-01 ssl-nginx]#kubectl apply -f 02-secret-ssl.yaml
secret/nginx-certs created
[root@k8s-Master-01 ssl-nginx]#kubectl apply -f 01-ssl-nginx-pod.yaml
pod/ssl-nginx created
[root@k8s-Master-01 ssl-nginx]#kubectl get pods
NAME READY STATUS RESTARTS AGE
mydb 1/1 Running 0 37m
ssl-nginx 1/1 Running 0 8s
wordpress 1/1 Running 0 27m
[root@k8s-Master-01 ssl-nginx]#kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mydb 1/1 Running 0 37m 192.168.204.5 k8s-node-03 <none> <none>
ssl-nginx 1/1 Running 0 17s 192.168.127.9 k8s-node-01 <none> <none>
wordpress 1/1 Running 0 28m 192.168.8.6 k8s-node-02 <none> <none>
Verify the result
[root@k8s-Master-01 ssl-nginx]#curl -I 192.168.127.9
HTTP/1.1 301 Moved Permanently
Server: nginx/1.23.2
Date: Fri, 11 Nov 2022 06:53:04 GMT
Content-Type: text/html
Content-Length: 169
Connection: keep-alive
Location: https://192.168.127.9/
[root@k8s-Master-01 ssl-nginx]#curl https://192.168.127.9 -k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s-Master-01 ssl-nginx]#openssl s_client -connect 192.168.127.9:443
CONNECTED(00000003)
Can't use SSL_get_servername
depth=1 C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
verify error:num=19:self signed certificate in certificate chain
verify return:1
depth=1 C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
verify return:1
depth=0 C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
verify return:1
---
Certificate chain
0 s:C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
i:C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
1 s:C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
i:C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGCTCCA/GgAwIBAgIUDJI3p1wI6/kNc3s9TOk3XJaYHB8wDQYJKoZIhvcNAQEN
BQAwcDELMAkGA1UEBhMCQ04xEDAOBgNVBAgMB0JlaWppbmcxEDAOBgNVBAcMB0Jl
aWppbmcxEDAOBgNVBAoMB2V4YW1wbGUxETAPBgNVBAsMCFBlcnNvbmFsMRgwFgYD
VQQDDA93d3cuc2h1aG9uZy5jb20wHhcNMjIxMTExMDYzNTA5WhcNMzIxMTA4MDYz
NTA5WjBwMQswCQYDVQQGEwJDTjEQMA4GA1UECAwHQmVpamluZzEQMA4GA1UEBwwH
QmVpamluZzEQMA4GA1UECgwHZXhhbXBsZTERMA8GA1UECwwIUGVyc29uYWwxGDAW
BgNVBAMMD3d3dy5zaHVob25nLmNvbTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCC
AgoCggIBAM5tgB8+6C3oXH3P/pPh1+FTfl7HL31NaV4/kuX4W014RubG1LU1Vahg
Q+sZtprxTJR9b2VVBo/CYiP4qGs148vt2ngK8gTrdVwNzHhKOX/m9G4lUQpTB897
sBVMqXfVgjxHCp9rHlXDg61FLKkaVUh149hep7W5wDRq2ObDFR69m77fEwoQ+YfO
x29B9ntfOfIic2Ag2RQIpBWNp+HIa7FPpICN/PnY1jX2BOOEJ/TOj1HILuMBWELj
cGtJrV/iHO/48hMk96k2JxEj/fqOxbk275xCNaLpDcVxKWG6dr7yG/VhXRtWn2Jy
ybGDbyO/3d274kNIJ/xl3DgQr5pwtf897tP6bkwiCBvrrpAW7xFtIAs1Jt4Z4lHN
kjO+/XDevb0i53dQsSOZu1bMFSPaWJOJMk87kJrO+yhP/PGOuhDSHDDvzalgU1ja
CieF+2jBH8PdfDRR1p1pVWfCegLSv5hdXv5qB2LFk+MQQUPQ7qsBGD6tKeCy8gjN
PD9udzppCEnPG/N8kXdOBUKdbIj4a+JIS/V8CtIaFowpwzfkK6VZzwhXmdsgj/YN
RRbBpaA7Df7z+u16yEkN8E6N6J7nVLqPRUHD9cy7SlIEaGg6iqqTRxLlRNq8QHpN
gJ7Of8JL/h4XLd8C4Lec0XtxtB0tbAk818Y36MYPTA7TOhRtXi9nAgMBAAGjgZow
gZcwHwYDVR0jBBgwFoAUphO2X680R7mJDFUAnYQP8LCLFDwwCQYDVR0TBAIwADAL
BgNVHQ8EBAMCBPAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwRwYDVR0RBEAwPoIPd3d3
LnNodWhvbmcuY29tghZoYXJib3Iud3d3LnNodWhvbmcuY29tghN3d3cud3d3LnNo
dWhvbmcuY29tMA0GCSqGSIb3DQEBDQUAA4ICAQDInC446Tu7F/3b09sOWCWUEf1Z
EW64mW0g+nJBFKGHNCakPITryvg+6MLgXRwZ+ZpGJistFUBBx6Y3fBn3okMm9S90
/aZp6ZZjm1HSfCBVfW2hLBGTKRCb8ca/vV8my/4MHcs22sAJcaAUYffxGI8qFT9g
n4eGeC3J1B/IkOH66rAoeAyM2tFcv0RajJGXeZdm5OR2vFbxo7HmPnPMkD4b13O+
nziUmpciYzyvRm9w+7UhYojMLsxcRXU5UgDRUvr3NgOtCiZMkEr+jrjlx3mA3to/
+NadPTnpUL9/k4XCc0JUSIzH+dyFqM4M0/daRTyrXmP4dxhch9ujxGSHlxYmtf1/
TjUtHiPIm0kvOlLnqAG/gKhFNw7nVCjLLPa3wTZ8JUY+CnZybw5akoJnegGn4h3j
t7Vh5GIxfWCIt1X0kjj6ZCS0g3DBTC+FKie5WDFt4pIBQS9CxfezTZpCv46BpTU1
7R8Z2M7Du/iHZfvMRN+e+Gh+g26YrJty1VZ+VyJ+Ft8jrLjGW+pA3KTd1jj8Cn1i
O9aimMAFH5Aa1eJFr/bDx3Wa7PMNIhiD4qXQtyCiNOBgd3FZ5GaceTa8vBYhhnKK
Go4DVKik30fBraOY4jgPChEXf9OWOyB9t9VetwAd1JzMsOIiyfe+J1cfoKeqZOoy
sCU8rwZUgM4LwCbZ4g==
-----END CERTIFICATE-----
subject=C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
issuer=C = CN, ST = Beijing, L = Beijing, O = example, OU = Personal, CN = www.shuhong.com
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 3926 bytes and written 376 bytes
Verification error: self signed certificate in certificate chain
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: 65C8A7BC079BB6946D246D71534F8F7AD21854F483CDC7279153AAAD133AC7AD
Session-ID-ctx:
Master-Key: 8DFB31B0AC527A96584990042CF90F6EAB70EED61F5A27BDC3668F8C590D290601F7E94A2D76FCCC46C9FA8B3BAE2145
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - d1 5e 0e 9f 72 f9 0e 4a-f5 2a 15 5d b7 3b 6d e0 .^..r..J.*.].;m.
0010 - 53 3a ba bb 8c 29 4c 81-92 67 12 f9 85 96 0c 36 S:...)L..g.....6
0020 - f9 d2 39 58 38 d5 86 b1-dd 06 eb 9c 18 c8 61 25 ..9X8.........a%
0030 - 9c 7b 54 77 ae 67 49 9c-f4 5b d8 ff f3 e7 e0 47 .{Tw.gI..[.....G
0040 - 2c a9 61 aa b8 b4 ca 54-65 7a 46 7c 77 6a 7f 79 ,.a....TezF|wj.y
0050 - e6 e0 f8 d5 f2 20 b0 79-37 cd 9d 51 0a 99 33 7c ..... .y7..Q..3|
0060 - c7 60 aa ed 95 2e d4 11-88 87 84 40 9a 79 79 14 .`.........@.yy.
0070 - 1d 7e d8 ef 53 15 20 35-5b 59 17 67 d8 ef 23 0d .~..S. 5[Y.g..#.
0080 - c6 cf 09 f3 13 21 2e e8-4b 59 9a d8 dd b9 d8 6c .....!..KY.....l
0090 - 2c 7e a4 8b 23 03 88 77-eb 29 71 98 b3 15 c6 10 ,~..#..w.)q.....
00a0 - 72 8b 80 67 14 7a 20 3c-82 58 11 04 b1 bd 4c c2 r..g.z <.X....L.
Start Time: 1668149915
Timeout : 7200 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
Extended master secret: yes
---
nginx + php-fpm wordpress + mysql
mysql
php-fpm wordpress
[root@k8s-Master-01 php-fpm-wordpress]#kubectl create svc clusterip phpwp -n lnmp --tcp 9000:9000 --dry-run=client -o yaml > 01-service-phpwp.yaml
[root@k8s-Master-01 php-fpm-wordpress]#vim 01-service-phpwp.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: phpwp
name: phpwp
namespace: lnmp
spec:
ports:
- name: 9000-9000
port: 9000
protocol: TCP
targetPort: 9000
selector:
app: phpwp
type: ClusterIP
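Only the phpwp Service is generated above; a minimal sketch of a matching php-fpm WordPress Pod, assuming the fpm variant of the official image (wordpress:6.1-fpm), the mysql-external Service defined later in these notes as the database host, and the lnmp pvc-wp claim that appears in the PV listing further down — all of these names are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: phpwp
  namespace: lnmp
  labels:
    app: phpwp                   # matches the phpwp Service selector
spec:
  containers:
  - name: phpwp
    image: wordpress:6.1-fpm     # assumed fpm image tag
    env:
    - name: WORDPRESS_DB_HOST
      value: mysql-external.default.svc.cluster.local   # assumed: the external-MySQL Service lives in the default namespace
    - name: WORDPRESS_DB_NAME
      value: wpdb
    - name: WORDPRESS_DB_USER
      value: wpuser
    - name: WORDPRESS_DB_PASSWORD
      value: "123456"
    ports:
    - containerPort: 9000        # php-fpm listens on 9000, the Service targetPort
    volumeMounts:
    - mountPath: /var/www/html
      name: wp-code
  volumes:
  - name: wp-code
    persistentVolumeClaim:
      claimName: pvc-wp          # lnmp PVC visible in the later kubectl get pv output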
Reinitialize the cluster and switch the Service proxy mode to ipvs
[root@k8s-Master-01 lnmp]#cat ~/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
kind: InitConfiguration
localAPIEndpoint:
# This is the IP address of the first control-plane node being initialized;
advertiseAddress: 10.0.0.201
bindPort: 6443
nodeRegistration:
criSocket: unix:///run/cri-dockerd.sock
imagePullPolicy: IfNotPresent
# Hostname of the first control-plane node;
name: k8s-Master-01
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
# Access endpoint for the control plane; here it is mapped to the domain name kubeapi.shuhong.com;
controlPlaneEndpoint: "kubeapi.shuhong.com:6443"
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
serviceSubnet: 10.96.0.0/12
podSubnet: 192.168.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Proxy mode kube-proxy uses for Services; the default is iptables;
mode: "ipvs"
[root@k8s-Master-01 ~]# kubeadm init --config kubeadm-config.yaml --upload-certs
[init] Using Kubernetes version: v1.25.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubeapi.shuhong.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.0.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.0.0.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.101091 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
88933b924db0b819a6789ab790fb225f049259f4eb20d0b878d9f6d97c8faf3e
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: hixc6t.kn8t1rvkgwp44ysn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join kubeapi.shuhong.com:6443 --token hixc6t.kn8t1rvkgwp44ysn \
--discovery-token-ca-cert-hash sha256:91b32c7e50dcf885d718837e8865b860375268231a89e462a006571002e75c69 \
--control-plane --certificate-key 88933b924db0b819a6789ab790fb225f049259f4eb20d0b878d9f6d97c8faf3e
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kubeapi.shuhong.com:6443 --token hixc6t.kn8t1rvkgwp44ysn \
--discovery-token-ca-cert-hash sha256:91b32c7e50dcf885d718837e8865b860375268231a89e462a006571002e75c69
[root@k8s-Master-01 ~]#
[root@k8s-Master-01 ~]#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp:是否覆盖'/root/.kube/config'? y
[root@k8s-Master-01 ~]#cd /data/
[root@k8s-Master-01 data]#kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
#Join the other control-plane nodes to the cluster
kubeadm join kubeapi.shuhong.com:6443 --token hixc6t.kn8t1rvkgwp44ysn --discovery-token-ca-cert-hash sha256:91b32c7e50dcf885d718837e8865b860375268231a89e462a006571002e75c69 --control-plane --certificate-key 88933b924db0b819a6789ab790fb225f049259f4eb20d0b878d9f6d97c8faf3e --cri-socket unix:///run/cri-dockerd.sock
#Join the worker nodes to the cluster
kubeadm join kubeapi.shuhong.com:6443 --token hixc6t.kn8t1rvkgwp44ysn --discovery-token-ca-cert-hash sha256:91b32c7e50dcf885d718837e8865b860375268231a89e462a006571002e75c69 --cri-socket unix:///run/cri-dockerd.sock
#Verify that the proxy mode is ipvs
[root@k8s-Master-01 lnmp]#kubectl get cm -n kube-system kube-proxy -o yaml
apiVersion: v1
data:
config.conf: |-
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
acceptContentTypes: ""
burst: 0
contentType: ""
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 0
clusterCIDR: 192.168.0.0/16
configSyncPeriod: 0s
conntrack:
maxPerCore: null
min: null
tcpCloseWaitTimeout: null
tcpEstablishedTimeout: null
detectLocal:
bridgeInterface: ""
interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: null
minSyncPeriod: 0s
syncPeriod: 0s
ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: ""
strictARP: false
syncPeriod: 0s
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ipvs
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
enableDSR: false
forwardHealthCheckVip: false
networkName: ""
rootHnsEndpointName: ""
sourceVip: ""
kubeconfig.conf: |-
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://kubeapi.shuhong.com:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: default
name: default
current-context: default
users:
- name: default
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
creationTimestamp: "2022-11-13T01:36:14Z"
labels:
app: kube-proxy
name: kube-proxy
namespace: kube-system
resourceVersion: "282"
uid: 3ea05e1d-402f-409a-a006-7caa61b63b05
[root@k8s-Master-01 lnmp]#ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 10.0.0.201:6443 Masq 1 7 0
-> 10.0.0.202:6443 Masq 1 0 0
-> 10.0.0.203:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 192.168.94.1:53 Masq 1 0 0
-> 192.168.94.2:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 192.168.94.1:9153 Masq 1 0 0
-> 192.168.94.2:9153 Masq 1 0 0
TCP 10.103.63.245:3306 rr
-> 192.168.204.1:3306 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 192.168.94.1:53 Masq 1 0 0
-> 192.168.94.2:53 Masq 1 0 0
Pod/Wordpress-APACHE
Deployed on Kubernetes with the Service exposed through an ExternalIP; MySQL runs outside the cluster, and WordPress reaches it through a Kubernetes Service name (a selector-less Service backed by a manually created Endpoints object).
[root@rocky8 ~]#mysql -p123456
mysql> create database wpdb ;
Query OK, 1 row affected (0.02 sec)
mysql> create user wpuser@'%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all on wpdb.* to wpuser@'%';
Query OK, 0 rows affected (0.02 sec)
[root@k8s-Master-01 test]#cat mysql-service.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: mysql-external
namespace: default
subsets:
- addresses:
- ip: 10.0.0.151
ports:
- name: mysql
port: 3306
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: mysql-external
namespace: default
spec:
type: ClusterIP
clusterIP: None
ports:
- name: mydb
port: 3306
targetPort: 3306
protocol: TCP
[root@k8s-Master-01 test]#kubectl apply -f mysql-service.yaml
[root@k8s-Master-01 test]#vim wordpress/01-service-wordpress.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: wordpress
name: wordpress
spec:
ports:
- name: 80-80
port: 80
protocol: TCP
targetPort: 80
externalIPs:
- 10.0.0.220
selector:
app: wordpress
type: NodePort
[root@k8s-Master-01 test]#cat wordpress/02-pod-wordpress.yaml
apiVersion: v1
kind: Pod
metadata:
name: wordpress
namespace: default
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:6.1-apache
env:
- name: WORDPRESS_DB_HOST
value: mysql-external
- name: WORDPRESS_DB_NAME
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.name
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.name
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.pass
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/www/html
name: nfs-pvc-wp
volumes:
- name: nfs-pvc-wp
persistentVolumeClaim:
claimName: pvc-nfs-sc
[root@k8s-Master-01 test]#kubectl apply -f wordpress/
#Verification
[root@k8s-Master-01 test]#kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress 1/1 Running 0 12m
[root@k8s-Master-01 test]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 125m
mysql-external ClusterIP None <none> 3306/TCP 17m
wordpress NodePort 10.111.250.69 10.0.0.220 80:31401/TCP 13m
Pod/Wordpress-FPM and nginx
Deployed on Kubernetes; the Nginx Service is exposed through an ExternalIP and reverse-proxies to the php-fpm WordPress; MySQL runs outside the cluster, and WordPress reaches it through a Kubernetes Service name (see the nginx sketch below).
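A minimal sketch of the nginx side, assuming the phpwp Service (port 9000) from the lnmp section fronts the php-fpm containers and that nginx mounts the same WordPress code volume (the pvc-wp claim that appears in the PV listing further down); the ConfigMap name, server_name and paths are illustrative assumptions:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-phpwp              # hypothetical name
  namespace: lnmp
data:
  wordpress.conf: |
    server {
        listen 80;
        server_name wp.shuhong.com;
        root /var/www/html;                      # same code volume as the php-fpm Pod
        index index.php;
        location / {
            try_files $uri $uri/ /index.php?$args;
        }
        location ~ \.php$ {
            fastcgi_pass phpwp:9000;             # ClusterIP Service in front of php-fpm
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
An nginx Pod would mount this ConfigMap at /etc/nginx/conf.d/ and the pvc-wp claim at /var/www/html, then be published through a NodePort Service with an externalIP, in the same way as the jpress example below.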
Pod/Tomcat-jpress and Nginx
Deployed on Kubernetes; the Nginx Service is exposed through an ExternalIP and reverse-proxies to Tomcat; MySQL runs outside the cluster, and jpress reaches it through a Kubernetes Service name.
[root@k8s-Master-01 test]#cat jpress/01-service-jpress.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: jpress
name: jpress
spec:
ports:
- name: 8080-8080
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: jpress
type: ClusterIP
[root@k8s-Master-01 test]#cat jpress/pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: jpress
namespace: default
labels:
app: jpress
spec:
containers:
- name: jpress
image: registry.cn-shenzhen.aliyuncs.com/shuzihan/warehouse:jpress5.0.2v1.0
volumeMounts:
- mountPath: /data/website/ROOT
name: nfs-pvc-jp
volumes:
- name: nfs-pvc-jp
persistentVolumeClaim:
claimName: pvc-nfs-sc-jp
[root@k8s-Master-01 test]#cat jpress/sc/01-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-csi-jpress
provisioner: nfs.csi.k8s.io
parameters:
#server: nfs-server.default.svc.cluster.local
server: nfs-server.nfs.svc.cluster.local
share: /
#reclaimPolicy: Delete
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
- hard
- nfsvers=4.1
[root@k8s-Master-01 test]#cat jpress/sc/02-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nfs-sc-jp
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
storageClassName: nfs-csi-jpress
[root@k8s-Master-01 test]#cat nginx-jpress/01-service-jpress.yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: nginx-jpress
name: nginx-jpress
spec:
ports:
- name: 8080-8080
port: 8080
protocol: TCP
targetPort: 8080
externalIPs:
- 10.0.0.220
selector:
app: nginx-jpress
type: NodePort
[root@k8s-Master-01 test]#cat nginx-jpress/02-nginx-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
name: nginx-jpress
namespace: default
labels:
app: nginx-jpress
spec:
containers:
- image: nginx:alpine
name: nginxserver
volumeMounts:
- name: nginxconf
mountPath: /etc/nginx/conf.d/
readOnly: true
- mountPath: /data/www/ROOT
name: jp
volumes:
- name: nginxconf
configMap:
name: nginx-jpress
optional: false
- name: jp
persistentVolumeClaim:
claimName: pvc-nfs-sc-jp
[root@k8s-Master-01 test]#cat nginx-jpress/03-configmap-nginx-jpress.yaml
apiVersion: v1
data:
myserver-status.cfg: |
location /nginx-status {
stub_status on;
access_log off;
}
myserver.conf: |
upstream tomcat {
server jpress:8080;
}
server {
listen 8080;
server_name jp.shuhong.com;
location / {
proxy_pass http://tomcat;
proxy_set_header Host $http_host;
}
}
kind: ConfigMap
metadata:
creationTimestamp: null
name: nginx-jpress
[root@k8s-Master-01 test]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jpress ClusterIP 10.105.254.92 <none> 8080/TCP 114m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h12m
mysql-external ClusterIP None <none> 3306/TCP 144m
nginx-jpress NodePort 10.109.235.64 10.0.0.220 8080:30674/TCP 90m
wordpress NodePort 10.111.250.69 10.0.0.220 80:31401/TCP 140m
[root@k8s-Master-01 test]#kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-17b3a376-521a-4e57-8449-f3393b1eaec9 10Gi RWX Retain Bound default/pvc-nfs-sc-jp nfs-csi-jpress 33m
pvc-1b60a273-4298-438e-a729-b63bebc74abe 10Gi RWX Retain Bound lnmp/pvc-wp nfs-csi-wp 3h36m
pvc-c4020aca-fc9e-45d9-8c3d-ef6dffc76ba0 10Gi RWX Retain Bound default/pvc-nfs-sc nfs-csi 147m
[root@k8s-Master-01 test]#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs-sc Bound pvc-c4020aca-fc9e-45d9-8c3d-ef6dffc76ba0 10Gi RWX nfs-csi 147m
pvc-nfs-sc-jp Bound pvc-17b3a376-521a-4e57-8449-f3393b1eaec9 10Gi RWX nfs-csi-jpress 34m
[root@k8s-Master-01 test]#kubectl get pods
NAME READY STATUS RESTARTS AGE
jpress 1/1 Running 0 2m47s
nginx-jpress 1/1 Running 0 2m42s
wordpress 1/1 Running 0 141m
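#A quick access check from outside the cluster (a sketch; the Host header must match the server_name in the ConfigMap, 10.0.0.220 is the externalIP configured above)
curl -I -H 'Host: jp.shuhong.com' http://10.0.0.220:8080/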
Deployment:wordpress+apache
[root@k8s-Master-01 wordpress]#cat 03-deployment-wordpress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:6.1-apache
env:
- name: WORDPRESS_DB_HOST
value: mysql-external
- name: WORDPRESS_DB_NAME
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.name
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.name
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.pass
resources:
requests:
memory: "128M"
cpu: "200m"
limits:
memory: "512M"
cpu: "400m"
# securityContext:
# runAsUser: 999
volumeMounts:
- mountPath: /var/www/html
name: nfs-pvc-wp
volumes:
- name: nfs-pvc-wp
persistentVolumeClaim:
claimName: pvc-nfs-sc
[root@k8s-Master-01 lnmp]#kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
wordpress 1/1 1 1 14m
[root@k8s-Master-01 lnmp]#kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress-97577cb54-ggc7x 1/1 Running 0 14m
Deployment: WordPress-FPM and Nginx
Deployment: Tomcat-jpress and Nginx
Requirement: run nginx or wordpress with multiple replicas, perform a rolling update, verify whether the service is interrupted while the update is in progress (one possible procedure is sketched below), and write up a verification report.
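One possible test procedure (a sketch using the wordpress Deployment and Service above; the target image tag is an assumption):
# scale out so the replicas can be updated one at a time
kubectl scale deployment wordpress --replicas=3
# in a second terminal, keep polling the Service through its externalIP while the update runs
while true; do curl -so /dev/null -w '%{http_code}\n' http://10.0.0.220/; sleep 0.5; done
# trigger the rolling update and watch it complete; the poll loop should keep returning HTTP codes with no refused connections
kubectl set image deployment/wordpress wordpress=wordpress:6.1.1-apache
kubectl rollout status deployment/wordpress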
MySQL, orchestrated with a StatefulSet
[root@k8s-Master-01 statefulset]#cat 01-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-csi-statefulset
provisioner: nfs.csi.k8s.io
parameters:
#server: nfs-server.default.svc.cluster.local
server: nfs-server.nfs.svc.cluster.local
share: /
#reclaimPolicy: Delete
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
- hard
- nfsvers=4.1
[root@k8s-Master-01 statefulset]#vim statefulset-mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: statefulset-mysql
namespace: default
spec:
clusterIP: None
ports:
- port: 3306
selector:
app: statefulset-mysql
controller: sts
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sts
spec:
serviceName: statefulset-mysql
replicas: 2
selector:
matchLabels:
app: statefulset-mysql
controller: sts
template:
metadata:
labels:
app: statefulset-mysql
controller: sts
spec:
containers:
- name: mysql-sts
image: mysql:8.0
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: root.pass
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.name
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.name
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: db.user.pass
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: appdata
mountPath: /var/lib/mysql
volumeClaimTemplates:
- metadata:
name: appdata
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: nfs-csi-statefulset
resources:
requests:
storage: 2Gi
[root@k8s-Master-01 statefulset]#kubectl get pods
NAME READY STATUS RESTARTS AGE
sts-0 1/1 Running 0 2m44s
sts-1 1/1 Running 0 27s
wordpress-97577cb54-ggc7x 1/1 Running 0 98m
[root@k8s-Master-01 statefulset]#kubectl get statefulsets
NAME READY AGE
sts 2/2 3m44s
[root@k8s-Master-01 statefulset]#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
appdata-sts-0 Bound pvc-ff0e971f-4e85-420b-a436-3c70ed8440e0 2Gi RWO nfs-csi-statefulset 7m33s
appdata-sts-1 Bound pvc-7476fdf4-c0b7-4faf-9908-09a588177241 2Gi RWO nfs-csi-statefulset 6m34s
pvc-nfs-sc Bound pvc-c4020aca-fc9e-45d9-8c3d-ef6dffc76ba0 10Gi RWX nfs-csi 47h
pvc-nfs-sc-jp Bound pvc-58c698dc-28a2-49e3-8271-4c4a8d62b8c8 10Gi RWX nfs-csi-jpress 38h
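With the governing headless Service in place (serviceName: statefulset-mysql above), each replica gets a stable DNS name; a quick in-cluster check (a sketch; the busybox image tag is an assumption):
kubectl run dns-test --rm -it --image=busybox:1.35 --restart=Never -- nslookup sts-0.statefulset-mysql.default.svc.cluster.local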
Deploy a Kafka cluster with the Strimzi Operator and test producing/consuming messages
Strimzi Operator
https://strimzi.io/quickstarts/
[root@k8s-Master-01 kafka]#kubectl create ns kafka
namespace/kafka created
[root@k8s-Master-01 kafka]#kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-leader-election created
customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-watched created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created
deployment.apps/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created
customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-leader-election created
clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-client-delegation created
customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created
clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-watched created
clusterrole.rbac.authorization.k8s.io/strimzi-kafka-client created
configmap/strimzi-cluster-operator created
clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created
rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created
serviceaccount/strimzi-cluster-operator created
customresourcedefinition.apiextensions.k8s.io/strimzipodsets.core.strimzi.io created
customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created
[root@k8s-Master-01 kafka]#kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
autoscaling/v2beta2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
core.strimzi.io/v1beta2
crd.projectcalico.org/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta1
flowcontrol.apiserver.k8s.io/v1beta2
kafka.strimzi.io/v1alpha1
kafka.strimzi.io/v1beta1
kafka.strimzi.io/v1beta2
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
[root@k8s-Master-01 kafka]#kubectl api-resources --api-group=kafka.strimzi.io
NAME SHORTNAMES APIVERSION NAMESPACED KIND
kafkabridges kb kafka.strimzi.io/v1beta2 true KafkaBridge
kafkaconnectors kctr kafka.strimzi.io/v1beta2 true KafkaConnector
kafkaconnects kc kafka.strimzi.io/v1beta2 true KafkaConnect
kafkamirrormaker2s kmm2 kafka.strimzi.io/v1beta2 true KafkaMirrorMaker2
kafkamirrormakers kmm kafka.strimzi.io/v1beta2 true KafkaMirrorMaker
kafkarebalances kr kafka.strimzi.io/v1beta2 true KafkaRebalance
kafkas k kafka.strimzi.io/v1beta2 true Kafka
kafkatopics kt kafka.strimzi.io/v1beta2 true KafkaTopic
kafkausers ku kafka.strimzi.io/v1beta2 true KafkaUser
[root@k8s-Master-01 kafka]#kubectl get pods -n kafka
NAME READY STATUS RESTARTS AGE
strimzi-cluster-operator-56d64c8584-j9rs7 1/1 Running 0 2m32s
#Download the cluster manifest from https://github.com/strimzi/strimzi-kafka-operator/blob/0.32.0/examples/kafka/kafka-ephemeral.yaml (persistent storage is not used here)
Deploy the example Kafka cluster:
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-ephemeral.yaml -n kafka
or
kubectl apply -f https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.32.0/examples/kafka/kafka-ephemeral.yaml -n kafka
[root@k8s-Master-01 kafka]#kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-ephemeral.yaml -n kafka
kafka.kafka.strimzi.io/my-cluster created
#Mind the nodes' memory, otherwise a node may crash
[root@k8s-Master-01 kafka]#kubectl get pods -n kafka -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-cluster-entity-operator-9d45b6b89-nk6cd 3/3 Running 1 (2m42s ago) 5m37s 192.168.204.23 k8s-node-03 <none> <none>
my-cluster-kafka-0 1/1 Running 0 4m33s 192.168.204.25 k8s-node-03 <none> <none>
my-cluster-kafka-1 1/1 Running 0 4m33s 192.168.204.30 k8s-node-03 <none> <none>
my-cluster-kafka-2 1/1 Running 0 4m33s 192.168.204.26 k8s-node-03 <none> <none>
my-cluster-zookeeper-0 1/1 Running 0 4m33s 192.168.204.29 k8s-node-03 <none> <none>
my-cluster-zookeeper-1 1/1 Running 0 4m33s 192.168.204.27 k8s-node-03 <none> <none>
my-cluster-zookeeper-2 1/1 Running 0 4m32s 192.168.204.28 k8s-node-03 <none> <none>
strimzi-cluster-operator-56d64c8584-spqcw 1/1 Running 0 5m39s 192.168.204.22 k8s-node-03 <none> <none>
#Test producing and consuming messages
[root@k8s-Master-01 chapter8]# kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.32.0-kafka-3.3.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
If you don't see a command prompt, try pressing enter.
>set kafka
>hello my k8s
[root@k8s-Master-01 ~]# kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.32.0-kafka-3.3.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
If you don't see a command prompt, try pressing enter.
[2022-11-15 06:33:45,100] WARN [Consumer clientId=console-consumer, groupId=console-consumer-98252] Error while fetching metadata with correlation id 2 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2022-11-15 06:33:45,312] WARN [Consumer clientId=console-consumer, groupId=console-consumer-98252] Error while fetching metadata with correlation id 6 : {my-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
set kafka
hello my k8s
[root@k8s-Master-01 kafka]#kubectl get pods -n kafka -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kafka-consumer 1/1 Running 0 58s 192.168.8.42 k8s-node-02 <none> <none>
kafka-producer 1/1 Running 0 70s 192.168.127.1 k8s-node-01 <none> <none>
my-cluster-entity-operator-9d45b6b89-nk6cd 3/3 Running 1 (8m5s ago) 11m 192.168.204.23 k8s-node-03 <none> <none>
my-cluster-kafka-0 1/1 Running 0 9m56s 192.168.204.25 k8s-node-03 <none> <none>
my-cluster-kafka-1 1/1 Running 0 9m56s 192.168.204.30 k8s-node-03 <none> <none>
my-cluster-kafka-2 1/1 Running 0 9m56s 192.168.204.26 k8s-node-03 <none> <none>
my-cluster-zookeeper-0 1/1 Running 0 9m56s 192.168.204.29 k8s-node-03 <none> <none>
my-cluster-zookeeper-1 1/1 Running 0 9m56s 192.168.204.27 k8s-node-03 <none> <none>
my-cluster-zookeeper-2 1/1 Running 0 9m55s 192.168.204.28 k8s-node-03 <none> <none>
strimzi-cluster-operator-56d64c8584-spqcw 1/1 Running 0 11m 192.168.204.22 k8s-node-03 <none> <none>
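The LEADER_NOT_AVAILABLE warnings above appear because the topic is auto-created on first use; declaring it up front with a KafkaTopic resource avoids them (a sketch for the my-cluster deployed above; the partition/replica counts are assumptions):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3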
Set up a MySQL InnoDB cluster with MySQL Operator
#https://github.com/mysql/mysql-operator
#First deploy the Custom Resource Definition (CRDs):
[root@k8s-Master-01 pki]#kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
customresourcedefinition.apiextensions.k8s.io/innodbclusters.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/mysqlbackups.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/clusterkopfpeerings.zalando.org created
customresourcedefinition.apiextensions.k8s.io/kopfpeerings.zalando.org created
#Then deploy MySQL Operator for Kubernetes:
[root@k8s-Master-01 pki]#kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
clusterrole.rbac.authorization.k8s.io/mysql-operator created
clusterrole.rbac.authorization.k8s.io/mysql-sidecar created
clusterrolebinding.rbac.authorization.k8s.io/mysql-operator-rolebinding created
clusterkopfpeering.zalando.org/mysql-operator created
namespace/mysql-operator created
serviceaccount/mysql-operator-sa created
deployment.apps/mysql-operator created
#Verify the operator is running by checking the deployment inside the mysql-operator namespace:
[root@k8s-Master-01 pki]#kubectl get pods -n mysql-operator
NAME READY STATUS RESTARTS AGE
mysql-operator-6b4b96dbb5-rb6km 1/1 Running 0 4m32s
[root@k8s-Master-01 pki]#kubectl get deployment -n mysql-operator mysql-operator
NAME READY UP-TO-DATE AVAILABLE AGE
mysql-operator 1/1 1 1 4m34s
kubectl create secret generic mypwds \
--from-literal=rootUser=root \
--from-literal=rootHost=% \
--from-literal=rootPassword="123456"
[root@k8s-Master-01 pki]#kubectl create secret generic mypwds \
> --from-literal=rootUser=root \
> --from-literal=rootHost=% \
> --from-literal=rootPassword="123456"
secret/mypwds created
[root@k8s-Master-01 pki]#kubectl get secret mypwds -o yaml
apiVersion: v1
data:
rootHost: JQ==
rootPassword: MTIzNDU2
rootUser: cm9vdA==
kind: Secret
metadata:
creationTimestamp: "2022-11-15T08:32:32Z"
name: mypwds
namespace: default
resourceVersion: "181771"
uid: 05639ccd-f304-4727-9df8-1f148b02815b
type: Opaque
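#The data fields are plain base64; to read a value back (a sketch):
kubectl get secret mypwds -o jsonpath='{.data.rootPassword}' | base64 -d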
[root@k8s-Master-01 ~]#cd /data/
[root@k8s-Master-01 data]#mkdir mysqlcluster
[root@k8s-Master-01 data]#cd mysqlcluster/
[root@k8s-Master-01 mysqlcluster]#vim mycluster.yaml
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
name: mycluster
spec:
secretName: mypwds
tlsUseSelfSigned: true
instances: 3
router:
instances: 1
[root@k8s-Master-01 mysqlcluster]#kubectl apply -f mycluster.yaml
innodbcluster.mysql.oracle.com/mycluster created
#Without persistent storage the pods cannot start here (their PVCs stay unbound)
[root@k8s-Master-01 mysqlcluster]#kubectl get pods
NAME READY STATUS RESTARTS AGE
mycluster-0 0/2 Pending 0 3s
mycluster-1 0/2 Pending 0 3s
mycluster-2 0/2 Pending 0 3s
[root@k8s-Master-01 auth]#kubectl describe pods mycluster-0
Name: mycluster-0
Namespace: default
Priority: 0
Service Account: mycluster-sidecar-sa
Node: <none>
Labels: app.kubernetes.io/component=database
app.kubernetes.io/created-by=mysql-operator
app.kubernetes.io/instance=mysql-innodbcluster-mycluster-mysql-server
app.kubernetes.io/managed-by=mysql-operator
app.kubernetes.io/name=mysql-innodbcluster-mysql-server
component=mysqld
controller-revision-hash=mycluster-59694c677d
mysql.oracle.com/cluster=mycluster
statefulset.kubernetes.io/pod-name=mycluster-0
tier=mysql
Annotations: kopf.zalando.org/on_pod_create:
{"started":"2022-11-15T09:00:14.087799","delayed":"2022-11-15T09:00:44.101263","purpose":"create","retries":1,"success":false,"failure":fa...
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/mycluster
Init Containers:
fixdatadir:
Image: mysql/mysql-operator:8.0.31-2.0.7
Port: <none>
Host Port: <none>
Command:
bash
-c
chown 27:27 /var/lib/mysql && chmod 0700 /var/lib/mysql
Environment: <none>
Mounts:
/var/lib/mysql from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh66b (ro)
initconf:
Image: mysql/mysql-operator:8.0.31-2.0.7
Port: <none>
Host Port: <none>
Command:
mysqlsh
--log-level=@INFO
--pym
mysqloperator
init
--pod-name
$(POD_NAME)
--pod-namespace
$(POD_NAMESPACE)
--datadir
/var/lib/mysql
Environment:
POD_NAME: mycluster-0 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
MYSQLSH_USER_CONFIG_HOME: /tmp
Mounts:
/mnt/initconf from initconfdir (ro)
/mnt/mycnfdata from mycnfdata (rw)
/var/lib/mysql from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh66b (ro)
initmysql:
Image: mysql/mysql-server:8.0.31
Port: <none>
Host Port: <none>
Args:
mysqld
--user=mysql
Environment:
MYSQL_INITIALIZE_ONLY: 1
MYSQL_ROOT_PASSWORD: <set to the key 'rootPassword' in secret 'mypwds'> Optional: false
MYSQLSH_USER_CONFIG_HOME: /tmp
Mounts:
/docker-entrypoint-initdb.d from mycnfdata (rw,path="docker-entrypoint-initdb.d")
/etc/my.cnf from mycnfdata (rw,path="my.cnf")
/etc/my.cnf.d from mycnfdata (rw,path="my.cnf.d")
/var/lib/mysql from datadir (rw)
/var/run/mysqld from rundir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh66b (ro)
Containers:
sidecar:
Image: mysql/mysql-operator:8.0.31-2.0.7
Port: <none>
Host Port: <none>
Command:
mysqlsh
--pym
mysqloperator
sidecar
--pod-name
$(POD_NAME)
--pod-namespace
$(POD_NAMESPACE)
--datadir
/var/lib/mysql
Environment:
POD_NAME: mycluster-0 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
MYSQL_UNIX_PORT: /var/run/mysqld/mysql.sock
MYSQLSH_USER_CONFIG_HOME: /mysqlsh
Mounts:
/etc/my.cnf from mycnfdata (rw,path="my.cnf")
/etc/my.cnf.d from mycnfdata (rw,path="my.cnf.d")
/mysqlsh from shellhome (rw)
/var/run/mysqld from rundir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh66b (ro)
mysql:
Image: mysql/mysql-server:8.0.31
Ports: 3306/TCP, 33060/TCP, 33061/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
mysqld
--user=mysql
Liveness: exec [/livenessprobe.sh] delay=15s timeout=1s period=15s #success=1 #failure=10
Readiness: exec [/readinessprobe.sh] delay=10s timeout=1s period=5s #success=1 #failure=10000
Startup: exec [/livenessprobe.sh 8] delay=5s timeout=1s period=3s #success=1 #failure=10000
Environment:
MYSQL_UNIX_PORT: /var/run/mysqld/mysql.sock
Mounts:
/etc/my.cnf from mycnfdata (rw,path="my.cnf")
/etc/my.cnf.d from mycnfdata (rw,path="my.cnf.d")
/livenessprobe.sh from initconfdir (rw,path="livenessprobe.sh")
/readinessprobe.sh from initconfdir (rw,path="readinessprobe.sh")
/var/lib/mysql from datadir (rw)
/var/run/mysqld from rundir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh66b (ro)
Readiness Gates:
Type Status
mysql.oracle.com/configured <none>
mysql.oracle.com/ready <none>
Conditions:
Type Status
PodScheduled False
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-mycluster-0
ReadOnly: false
mycnfdata:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
rundir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
initconfdir:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mycluster-initconf
Optional: false
shellhome:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-vh66b:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Error Logging 12s kopf Handler 'on_pod_create' failed temporarily: Sidecar of mycluster-0 is not yet configured
Normal Logging 13s kopf POD CREATED: pod=mycluster-0 ContainersReady=None Ready=None gate[configured]=None
Warning FailedScheduling 12s (x2 over 14s) default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
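#The FailedScheduling event above shows the datadir PVCs cannot bind because no StorageClass is selected for them. One way to unblock this (a sketch, reusing the nfs-csi StorageClass created earlier) is to mark that class as the cluster default so the operator-created PVCs bind to it:
kubectl patch storageclass nfs-csi -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl get pvc        # datadir-mycluster-0/1/2 should turn Bound, after which the pods can be scheduled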
Static token authentication: add three users and verify that they can authenticate successfully.
#Generate tokens
[root@k8s-Master-01 ~]#echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
3066b9.26f23d4eaaee67e2
[root@k8s-Master-01 ~]#echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
6aa266.649ddf3638ec55b8
[root@k8s-Master-01 ~]#echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"
4d6438.36f3734b41f04bc3
#Save them to a csv file, specifying user name, UID and group (do this on every control-plane node)
[root@k8s-Master-01 ~]#cd /etc/kubernetes/
[root@k8s-Master-01 kubernetes]#ls
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
[root@k8s-Master-01 kubernetes]#mkdir auth
[root@k8s-Master-01 kubernetes]#cd auth/
[root@k8s-Master-01 auth]#vim token.csv
3066b9.26f23d4eaaee67e2,xiaoshu,1001,kubeusers
6aa266.649ddf3638ec55b8,xiaohu,1002,kubeusers
4d6438.36f3734b41f04bc3,xiaohong,1003,kubeusers
#Edit the kube-apiserver manifest (on every control-plane node)
[root@k8s-Master-01 auth]#vim /etc/kubernetes/manifests/kube-apiserver.yaml
....
- --token-auth-file=/etc/kubernetes/auth/token.csv
....
- mountPath: /etc/kubernetes/auth
name: static-auth-token
readOnly: true
....
- hostPath:
path: /etc/kubernetes/auth
type: DirectoryOrCreate
name: static-auth-token
....
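#kubelet picks up the manifest change and restarts the kube-apiserver static pod on its own; a quick check (sketch):
kubectl -n kube-system get pods | grep kube-apiserver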
[root@k8s-Node-01 ~]#curl -k -H "Authorization: Bearer 3066b9.26f23d4eaaee67e2" https://kubeapi.shuhong.com:6443/api/v1/namespaces/default/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"xiaoshu\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
[root@k8s-Node-01 ~]#curl -k -H "Authorization: Bearer 4d6438.36f3734b41f04bc3" https://kubeapi.shuhong.com:6443/api/v1/namespaces/default/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"xiaohong\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
[root@k8s-Node-01 ~]#kubectl -s https://10.0.0.201:6443 --token="3066b9.26f23d4eaaee67e2" --insecure-skip-tls-verify=true get pods -n default
Error from server (Forbidden): pods is forbidden: User "xiaoshu" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Node-01 ~]#kubectl -s https://10.0.0.201:6443 --token="4d6438.36f3734b41f04bc3" --insecure-skip-tls-verify=true get pods -n default
Error from server (Forbidden): pods is forbidden: User "xiaohong" cannot list resource "pods" in API group "" in the namespace "default"
X.509 certificate authentication: add a user and verify that authentication succeeds.
[root@k8s-Master-01 ~]#cd /etc/kubernetes/pki/
[root@k8s-Master-01 pki]#(umask 077; openssl genrsa -out mason.key 4096)
Generating RSA private key, 4096 bit long modulus (2 primes)
.................................................++++
...........................................................................................................................................++++
e is 65537 (0x010001)
[root@k8s-Master-01 pki]#openssl req -new -key ./mason.key -out ./mason.csr -subj '/CN=mason/O=kubeadmin'
[root@k8s-Master-01 pki]#openssl x509 -req -days 3655 -CAkey ./ca.key -CA ./ca.crt -CAcreateserial -in ./mason.csr -out ./mason.crt
Signature ok
subject=CN = mason, O = kubeadmin
Getting CA Private Key
[root@k8s-Master-01 pki]#scp -p mason.key mason.crt k8s-Node-01:/etc/kubernetes/pki/
mason.key 100% 3247 4.0MB/s 00:00
mason.crt 100% 1363 1.0MB/s 00:00
[root@k8s-Master-01 pki]#scp -p mason.key mason.crt k8s-Node-02:/etc/kubernetes/pki/
mason.key 100% 3247 1.7MB/s 00:00
mason.crt 100% 1363 593.4KB/s 00:00
[root@k8s-Master-01 pki]#scp -p mason.key mason.crt k8s-Node-03:/etc/kubernetes/pki/.
mason.key 100% 3247 1.4MB/s 00:00
mason.crt 100% 1363 342.1KB/s 00:00
[root@k8s-Master-01 pki]#(umask 077; openssl genrsa -out joe.key 4096)
Generating RSA private key, 4096 bit long modulus (2 primes)
......................................++++
..............................................++++
e is 65537 (0x010001)
[root@k8s-Master-01 pki]#openssl req -new -key ./joe.key -out ./joe.csr -subj '/CN=joe/O=kubeadmin'
[root@k8s-Master-01 pki]#openssl x509 -req -days 3655 -CAkey ./ca.key -CA ./ca.crt -CAcreateserial -in ./joe.csr -out ./joe.crt
Signature ok
subject=CN = joe, O = kubeadmin
Getting CA Private Key
[root@k8s-Master-01 pki]#scp -p joe.key joe.crt k8s-Node-03:/etc/kubernetes/pki/
joe.key 100% 3243 1.6MB/s 00:00
joe.crt 100% 1359 987.5KB/s 00:00
[root@k8s-Master-01 pki]#scp -p joe.key joe.crt k8s-Node-02:/etc/kubernetes/pki/
joe.key 100% 3243 2.1MB/s 00:00
joe.crt 100% 1359 1.0MB/s 00:00
[root@k8s-Master-01 pki]#scp -p joe.key joe.crt k8s-Node-01:/etc/kubernetes/pki/
joe.key 100% 3243 2.0MB/s 00:00
joe.crt 100% 1359 1.0MB/s 00:00
#Verify with three different methods
[root@k8s-Node-01 pki]#kubectl -s https://10.0.0.201:6443 --client-certificate=./mason.crt --client-key=./mason.key --certificate-authority=./ca.crt get pods
Error from server (Forbidden): pods is forbidden: User "mason" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Node-01 pki]#kubectl -s https://10.0.0.201:6443 --client-certificate=./joe.crt --client-key=./joe.key --insecure-skip-tls-verify=true get pods
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Node-01 pki]#curl --cert ./mason.crt --key ./mason.key --cacert ./ca.crt https://10.0.0.201:6443/api/v1/namespaces/default/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"mason\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
Merge the created user accounts into a kubeconfig file.
Save the static-token credentials into a kubeconfig file:
[root@k8s-Master-01 ~]#kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://kubeapi.shuhong.com:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
[root@k8s-Master-01 pki]#kubectl config set-cluster mykube --embed-certs=true --certificate-authority=./ca.crt --server="https://10.0.0.201:6443" --kubeconfig=$HOME/.kube/mykube.conf
Cluster "mykube" set.
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
[root@k8s-Master-01 pki]#kubectl config set-credentials xiaoshu --token="3066b9.26f23d4eaaee67e2" --kubeconfig=$HOME/.kube/mykube.conf
User "xiaoshu" set.
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: xiaoshu
user:
token: REDACTED
[root@k8s-Master-01 pki]#kubectl config set-context xiaoshu@mykube --cluster=mykube --user=xiaoshu --kubeconfig=$HOME/.kube/mykube.conf
Context "xiaoshu@mykube" created.
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts:
- context:
cluster: mykube
user: xiaoshu
name: xiaoshu@mykube
current-context: ""
kind: Config
preferences: {}
users:
- name: xiaoshu
user:
token: REDACTED
[root@k8s-Master-01 pki]#kubectl config use-context xiaoshu@mykube --kubeconfig=$HOME/.kube/mykube.conf
Switched to context "xiaoshu@mykube".
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts:
- context:
cluster: mykube
user: xiaoshu
name: xiaoshu@mykube
current-context: xiaoshu@mykube
kind: Config
preferences: {}
users:
- name: xiaoshu
user:
token: REDACTED
[root@k8s-Master-01 pki]#kubectl get pods --context='xiaoshu@mykube' --kubeconfig=$HOME/.kube/mykube.conf
Error from server (Forbidden): pods is forbidden: User "xiaoshu" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Master-01 pki]#export KUBECONFIG="$HOME/.kube/mykube.conf"
[root@k8s-Master-01 pki]#kubectl get pods
Error from server (Forbidden): pods is forbidden: User "xiaoshu" cannot list resource "pods" in API group "" in the namespace "default"
Save the certificate credentials into the same kubeconfig file; the steps below add only a user and a context, without adding a new cluster:
[root@k8s-Master-01 pki]#kubectl config set-credentials joe --embed-certs=true --client-certificate=./joe.crt --client-key=./joe.key --kubeconfig=$HOME/.kube/mykube.conf
User "joe" set.
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts:
- context:
cluster: mykube
user: xiaoshu
name: xiaoshu@mykube
current-context: xiaoshu@mykube
kind: Config
preferences: {}
users:
- name: joe
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
- name: xiaoshu
user:
token: REDACTED
[root@k8s-Master-01 pki]#kubectl config set-context joe@mykube --cluster=mykube --user=joe --kubeconfig=$HOME/.kube/mykube.conf
Context "joe@mykube" created.
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts:
- context:
cluster: mykube
user: joe
name: joe@mykube
- context:
cluster: mykube
user: xiaoshu
name: xiaoshu@mykube
current-context: xiaoshu@mykube
kind: Config
preferences: {}
users:
- name: joe
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
- name: xiaoshu
user:
token: REDACTED
[root@k8s-Master-01 pki]#kubectl get pods
NAME READY STATUS RESTARTS AGE
sts-0 0/1 ContainerCreating 0 59m
wordpress-97577cb54-m9jbl 0/1 ContainerCreating 0 67m
[root@k8s-Master-01 pki]#kubectl --context='joe@mykube' get pods
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Master-01 pki]#kubectl config use-context joe@mykube --kubeconfig=$HOME/.kube/mykube.conf
Switched to context "joe@mykube".
[root@k8s-Master-01 pki]#kubectl config view --kubeconfig=$HOME/.kube/mykube.conf
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.201:6443
name: mykube
contexts:
- context:
cluster: mykube
user: joe
name: joe@mykube
- context:
cluster: mykube
user: xiaoshu
name: xiaoshu@mykube
current-context: joe@mykube
kind: Config
preferences: {}
users:
- name: joe
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
- name: xiaoshu
user:
token: REDACTED
[root@k8s-Master-01 pki]#kubectl get pods
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "default"
#Verify how the KUBECONFIG environment variable merges multiple kubeconfig files
[root@k8s-Master-01 pki]#echo $KUBECONFIG
/root/.kube/mykube.conf
[root@k8s-Master-01 pki]#export KUBECONFIG="/root/.kube/mykube.conf:/etc/kubernetes/admin.conf"
[root@k8s-Master-01 pki]#kubectl get pods
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Master-01 pki]#kubectl --context="joe@mykube" get pods
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-Master-01 pki]#kubectl --context="kubernetes-admin@kubernetes" get pods
NAME READY STATUS RESTARTS AGE
sts-0 0/1 ContainerCreating 0 64m
wordpress-97577cb54-m9jbl 0/1 ContainerCreating 0 72m
Create and use a ServiceAccount
[root@k8s-Master-01 ~]#kubectl create serviceaccount mysa -o yaml --dry-run=client
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: null
name: mysa
[root@k8s-Master-01 ~]#kubectl create serviceaccount mysa
serviceaccount/mysa created
[root@k8s-Master-01 ~]#kubectl get sa
NAME SECRETS AGE
default 0 3d23h
mysa 0 6s
[root@k8s-Master-01 ~]#kubectl get sa -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-11-13T01:36:25Z"
name: default
namespace: default
resourceVersion: "331"
uid: 331b44ff-5626-474b-8627-d9284b24c15c
- apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-11-17T01:08:28Z"
name: mysa
namespace: default
resourceVersion: "192601"
uid: 61f358aa-d64b-45d7-9103-16519f01b8d0
kind: List
metadata:
resourceVersion: ""
#Mount path of the ServiceAccount credentials inside a pod
/var/run/secrets/kubernetes.io/serviceaccount
[root@k8s-Master-01 ~]#kubectl get pods -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-11-17T01:17:29Z"
generateName: demoapp-55c5f88dcb-
labels:
app: demoapp
pod-template-hash: 55c5f88dcb
name: demoapp-55c5f88dcb-6drzm
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: demoapp-55c5f88dcb
uid: f399d167-0f6b-44c7-9f83-6fa4295221e2
resourceVersion: "194029"
uid: d7dd5a2b-91ed-4933-81d4-71966cc61bd0
spec:
containers:
- image: ikubernetes/demoapp:v1.0
imagePullPolicy: IfNotPresent
name: demoapp
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount #mount path of the ServiceAccount credentials
name: kube-api-access-42sg6
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-node-03
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-42sg6
projected: #special projected volume type
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap: #the cluster CA certificate is injected from a ConfigMap
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI: #the namespace is injected via the Downward API
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-11-17T01:17:30Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-11-17T01:17:30Z"
message: 'containers with unready status: [demoapp]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-11-17T01:17:30Z"
message: 'containers with unready status: [demoapp]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-11-17T01:17:30Z"
status: "True"
type: PodScheduled
containerStatuses:
- image: ikubernetes/demoapp:v1.0
imageID: ""
lastState: {}
name: demoapp
ready: false
restartCount: 0
started: false
state:
waiting:
reason: ContainerCreating
hostIP: 10.0.0.206
phase: Pending
qosClass: BestEffort
startTime: "2022-11-17T01:17:30Z"
kind: List
metadata:
resourceVersion: ""
[root@k8s-Master-01 ~]#kubectl exec -it demoapp-55c5f88dcb-b692x -- /bin/sh
[root@demoapp-55c5f88dcb-b692x /]# ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
[root@demoapp-55c5f88dcb-b692x /]# ls -l /var/run/secrets/kubernetes.io/serviceaccount/
total 0
lrwxrwxrwx 1 root root 13 Nov 17 01:22 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Nov 17 01:22 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Nov 17 01:22 token -> ..data/token
[root@demoapp-55c5f88dcb-b692x /]# cd /var/run/secrets/kubernetes.io/serviceaccount/
[root@demoapp-55c5f88dcb-b692x /run/secrets/kubernetes.io/serviceaccount]# cat ca.crt
-----BEGIN CERTIFICATE-----
MIIC/jCCAeagAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIyMTExMzAxMzUzOVoXDTMyMTExMDAxMzUzOVowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKTu
X2Ety2fgmztDDlM+f9fEtIjAoohVJn4wgqt+OtyaR5A8/QcxD9YV9RXPGbQvTiR6
BIc6qy97C6Vfv1C9MCl/8u0icaFf/FZt+UoC1yroKcV6uJTf3ASpjN045rU/r52B
ldDtIaEBZVyzrK4+6hmGpgkPUKCR4vKaVgWur3NZK3YMpKbEu6s10EjCuxPBbuBn
9r6qwuTzWflIX9UVtEftRc9N2KxUYtCGpRk32RlyLkii4qC6iSZAaSQZqPrFs2RE
+BrabMBTMFnLoePqQKwsHiHTim3z3se3iXT9apSW7jFGEG8klIYABfH8Qp/h+YAQ
8sdT77fOp64S1vjcUZcCAwEAAaNZMFcwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wHQYDVR0OBBYEFFOR+ntkBkFAKAQZVPqQQ70oHTpfMBUGA1UdEQQO
MAyCCmt1YmVybmV0ZXMwDQYJKoZIhvcNAQELBQADggEBAE33jQnD4dStmk/ksqr/
dOZ7lGNjaIV+aPY3T74Vl5wh2c63gRruCCMPiRjqluABIYxOZM4QzDbYZtAPIXQh
a5Q5mXVl6mG+5kH/up02plIjT1TociaI1ZEMjlnlONONo7u9an03bMHcjHbLRdVR
DPHkwLKxMiaYf/digXRrT8/tItiOuO83cosfxG8RdI15XgADx2/yp4+iZshBLio0
MQIN0Wr2Q4PweMi9bSY8bJlfBHxzSboSZRZKyG0f3T4VCumMCkfJSyNyhulwNjKa
5P9nLv+Co1jzLZ6EKcs70+iCmj0PljNnksGp6JKYb1eIcf9WkN69SHg+drKQLc4V
LQI=
-----END CERTIFICATE-----
[root@demoapp-55c5f88dcb-b692x /run/secrets/kubernetes.io/serviceaccount]# cat namespace
[root@demoapp-55c5f88dcb-b692x /run/secrets/kubernetes.io/serviceaccount]# cat token
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdvUGFKcmRheDBRcks4c2UtekFBLWpxN2t2ZzE2VDNLVXdqS3M4LXZ6eGMifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzAwMTg0MTIxLCJpYXQiOjE2Njg2NDgxMjEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJkZW1vYXBwLTU1YzVmODhkY2ItYjY5MngiLCJ1aWQiOiJmYjA1YjA4OC1lMTQ2LTRkMWEtOGJmZS0xY2E0YjE2ZGNmNzIifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiIzMzFiNDRmZi01NjI2LTQ3NGItODYyNy1kOTI4NGIyNGMxNWMifSwid2FybmFmdGVyIjoxNjY4NjUxNzI4fSwibmJmIjoxNjY4NjQ4MTIxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWZhdWx0In0.db2M8vGwSlGBRb-PHqh6UsOFUdlry2FL6jOZ7f0p_ck5JyepP3QQcmxmc_KtN7oMCkdTkcXqhWrcieXiASN6gE3UJwisCvWw6EGc6e-_xyfNNEmtSa4F7RGHq1J8_64X9EhnQHw4iwNd4J7aJMV9dAnPWO3HWLqht0IIrBT4pX-yWNy3OU3tIPe_y4f3WOsp8RK-626xot54aqlBTrpq84ciOSdE-Pracdg4zCzBA4UbJX6wZjzj0tlWL1TxArq98SOaS2C0JFf2mlUJYUimqUwqdlMtA_PmlYOpLSgoH3ekJ0nskO-sfKzv4GFUYPrqiPi-GcKjOFH1t8kXyK4irA[root@demoapp-55c5f88dcb-b692x /run/secrets/kubernetes.io/serviceaccount]#
[root@k8s-Node-01 ~]#kubectl --insecure-skip-tls-verify=true -s https://10.0.0.201:6443 --token=${TOKEN} get pods -n default
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
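For reference, the ${TOKEN} used above is the default ServiceAccount token shown in the pod earlier; it can be captured from outside the pod, or a fresh token can be issued for the new mysa account (a sketch; the pod name is the one from this session):
TOKEN=$(kubectl exec demoapp-55c5f88dcb-b692x -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
#or issue a short-lived token for mysa (available since Kubernetes v1.24)
kubectl create token mysa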
Role and ClusterRole, RoleBinding and ClusterRoleBinding
#Create a role named reader in the test namespace
[root@k8s-Master-01 ~]#kubectl create ns test
namespace/test created
[root@k8s-Master-01 ~]#kubectl create role reader --verb=get,list,watch --resource=pods,service -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: reader
rules:
- apiGroups:
- ""
resources:
- pods
- services
verbs:
- get
- list
- watch
[root@k8s-Master-01 ~]#mkdir role
[root@k8s-Master-01 ~]#kubectl create role reader --verb=get,list,watch --resource=pods,service -o yaml --dry-run=client >role/role-pod-service-reader.yaml
[root@k8s-Master-01 ~]#kubectl apply -f role/role-pod-service-reader.yaml -n test
role.rbac.authorization.k8s.io/reader created
[root@k8s-Master-01 ~]#kubectl get role
No resources found in default namespace.
[root@k8s-Master-01 ~]#kubectl get role -n test
NAME CREATED AT
reader 2022-11-17T01:48:35Z
#Create a ClusterRole
[root@k8s-Master-01 ~]#kubectl create clusterrole clusterreader --verb=get,list,watch --resource=storageclass,persistentvolumes,namespaces -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: clusterreader
rules:
- apiGroups:
- ""
resources:
- persistentvolumes
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- get
- list
- watch
[root@k8s-Master-01 ~]#kubectl create clusterrole clusterreader --verb=get,list,watch --resource=storageclass,persistentvolumes,namespaces -o yaml --dry-run=client >role/clusterrole-pod-service-reader.yaml
[root@k8s-Master-01 ~]#kubectl apply -f role/clusterrole-pod-service-reader.yaml
clusterrole.rbac.authorization.k8s.io/clusterreader created
[root@k8s-Master-01 ~]#kubectl get clusterrole
NAME CREATED AT
admin 2022-11-13T01:36:02Z
calico-kube-controllers 2022-11-13T01:39:07Z
calico-node 2022-11-13T01:39:07Z
cluster-admin 2022-11-13T01:36:01Z
clusterreader 2022-11-17T01:53:23Z
edit 2022-11-13T01:36:02Z
kubeadm:get-nodes 2022-11-13T01:36:12Z
mysql-operator 2022-11-15T08:26:21Z
mysql-sidecar 2022-11-15T08:26:21Z
nfs-external-provisioner-role 2022-11-13T02:10:32Z
strimzi-cluster-operator-global 2022-11-15T03:14:01Z
strimzi-cluster-operator-leader-election 2022-11-15T03:14:01Z
strimzi-cluster-operator-namespaced 2022-11-15T03:13:58Z
strimzi-cluster-operator-watched 2022-11-15T03:14:02Z
strimzi-entity-operator 2022-11-15T03:14:02Z
strimzi-kafka-broker 2022-11-15T03:13:58Z
strimzi-kafka-client 2022-11-15T03:14:02Z
system:aggregate-to-admin 2022-11-13T01:36:03Z
system:aggregate-to-edit 2022-11-13T01:36:03Z
system:aggregate-to-view 2022-11-13T01:36:03Z
system:auth-delegator 2022-11-13T01:36:03Z
system:basic-user 2022-11-13T01:36:01Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient 2022-11-13T01:36:04Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 2022-11-13T01:36:04Z
system:certificates.k8s.io:kube-apiserver-client-approver 2022-11-13T01:36:04Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 2022-11-13T01:36:04Z
system:certificates.k8s.io:kubelet-serving-approver 2022-11-13T01:36:04Z
system:certificates.k8s.io:legacy-unknown-approver 2022-11-13T01:36:04Z
system:controller:attachdetach-controller 2022-11-13T01:36:05Z
system:controller:certificate-controller 2022-11-13T01:36:07Z
system:controller:clusterrole-aggregation-controller 2022-11-13T01:36:05Z
system:controller:cronjob-controller 2022-11-13T01:36:05Z
system:controller:daemon-set-controller 2022-11-13T01:36:05Z
system:controller:deployment-controller 2022-11-13T01:36:05Z
system:controller:disruption-controller 2022-11-13T01:36:05Z
system:controller:endpoint-controller 2022-11-13T01:36:05Z
system:controller:endpointslice-controller 2022-11-13T01:36:06Z
system:controller:endpointslicemirroring-controller 2022-11-13T01:36:06Z
system:controller:ephemeral-volume-controller 2022-11-13T01:36:06Z
system:controller:expand-controller 2022-11-13T01:36:06Z
system:controller:generic-garbage-collector 2022-11-13T01:36:06Z
system:controller:horizontal-pod-autoscaler 2022-11-13T01:36:06Z
system:controller:job-controller 2022-11-13T01:36:07Z
system:controller:namespace-controller 2022-11-13T01:36:07Z
system:controller:node-controller 2022-11-13T01:36:07Z
system:controller:persistent-volume-binder 2022-11-13T01:36:07Z
system:controller:pod-garbage-collector 2022-11-13T01:36:07Z
system:controller:pv-protection-controller 2022-11-13T01:36:07Z
system:controller:pvc-protection-controller 2022-11-13T01:36:07Z
system:controller:replicaset-controller 2022-11-13T01:36:07Z
system:controller:replication-controller 2022-11-13T01:36:07Z
system:controller:resourcequota-controller 2022-11-13T01:36:07Z
system:controller:root-ca-cert-publisher 2022-11-13T01:36:07Z
system:controller:route-controller 2022-11-13T01:36:07Z
system:controller:service-account-controller 2022-11-13T01:36:07Z
system:controller:service-controller 2022-11-13T01:36:07Z
system:controller:statefulset-controller 2022-11-13T01:36:07Z
system:controller:ttl-after-finished-controller 2022-11-13T01:36:07Z
system:controller:ttl-controller 2022-11-13T01:36:07Z
system:coredns 2022-11-13T01:36:13Z
system:discovery 2022-11-13T01:36:01Z
system:heapster 2022-11-13T01:36:03Z
system:kube-aggregator 2022-11-13T01:36:03Z
system:kube-controller-manager 2022-11-13T01:36:04Z
system:kube-dns 2022-11-13T01:36:04Z
system:kube-scheduler 2022-11-13T01:36:05Z
system:kubelet-api-admin 2022-11-13T01:36:03Z
system:monitoring 2022-11-13T01:36:01Z
system:node 2022-11-13T01:36:03Z
system:node-bootstrapper 2022-11-13T01:36:03Z
system:node-problem-detector 2022-11-13T01:36:03Z
system:node-proxier 2022-11-13T01:36:04Z
system:persistent-volume-provisioner 2022-11-13T01:36:04Z
system:public-info-viewer 2022-11-13T01:36:02Z
system:service-account-issuer-discovery 2022-11-13T01:36:04Z
system:volume-scheduler 2022-11-13T01:36:04Z
view 2022-11-13T01:36:02Z
[root@k8s-Master-01 ~]#kubectl get clusterrole clusterreader -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":null,"name":"clusterreader"},"rules":[{"apiGroups":[""],"resources":["persistentvolumes","namespaces"],"verbs":["get","list","watch"]},{"apiGroups":["storage.k8s.io"],"resources":["storageclasses"],"verbs":["get","list","watch"]}]}
creationTimestamp: "2022-11-17T01:53:23Z"
name: clusterreader
resourceVersion: "200049"
uid: c2ce397f-9cc2-46f4-9dba-42c256b484ff
rules:
- apiGroups:
- ""
resources:
- persistentvolumes
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- get
- list
- watch
#Create a RoleBinding
[root@k8s-Node-01 ~]#ls
cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb mykube.conf nfs-csi.tar snap
[root@k8s-Node-01 ~]#export KUBECONFIG=/root/mykube.conf
#node01 has loaded the kubeconfig as user joe, who has no permission to list pods
[root@k8s-Node-01 ~]#kubectl get pods -n test
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "test"
[root@k8s-Master-01 ~]#kubectl create rolebinding joe-reader --role=reader --user=joe -n test -o yaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: joe-reader
namespace: test
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: reader
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: joe
[root@k8s-Master-01 ~]#kubectl create rolebinding joe-reader --role=reader --user=joe -n test -o yaml --dry-run=client > role/rolebinding-joe-reader.yaml
[root@k8s-Master-01 ~]#kubectl apply -f role/rolebinding-joe-reader.yaml
rolebinding.rbac.authorization.k8s.io/joe-reader created
[root@k8s-Master-01 ~]#kubectl get rolebinding -n test
NAME ROLE AGE
joe-reader Role/reader 35s
#Testing from node01: the permission error no longer appears
[root@k8s-Node-01 ~]#kubectl get pods -n test
No resources found in test namespace.
[root@k8s-Node-01 ~]#kubectl get service -n test
No resources found in test namespace.
#Resources and verbs that were not granted are still forbidden
[root@k8s-Node-01 ~]#kubectl get deployment -n test
Error from server (Forbidden): deployments.apps is forbidden: User "joe" cannot list resource "deployments" in API group "apps" in the namespace "test"
[root@k8s-Node-01 ~]#kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 -n test
error: failed to create deployment: deployments.apps is forbidden: User "joe" cannot create resource "deployments" in API group "apps" in the namespace "test"
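#The same checks can be made from the admin context with kubectl auth can-i and user impersonation (a sketch):
kubectl auth can-i list pods -n test --as joe               # yes
kubectl auth can-i create deployments -n test --as joe      # no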
#Use a RoleBinding to bind the cluster-admin ClusterRole; joe then gets full admin rights, but only inside the test namespace
[root@k8s-Master-01 ~]#kubectl create rolebinding joe-cluster-test --clusterrole=cluster-admin --user=joe -n test -o yaml --dry-run=client > role/rolebinding-joe-cluster-test.yaml
[root@k8s-Master-01 ~]#kubectl apply -f role/rolebinding-joe-cluster-test.yaml
rolebinding.rbac.authorization.k8s.io/joe-cluster-test created
[root@k8s-Master-01 ~]#kubectl get rolebinding -n test
NAME ROLE AGE
joe-cluster-test ClusterRole/cluster-admin 28s
#Verify the permissions (note: the difference from the admin role lies in rights on the namespace itself; cluster-admin can manage and even delete its own namespace, admin cannot)
[root@k8s-Node-01 ~]#kubectl create deployment demoapp2 --image=ikubernetes/demoapp:v1.1 -n test
deployment.apps/demoapp2 created
[root@k8s-Node-01 ~]#kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
demoapp-55c5f88dcb-8zdjz 1/1 Running 0 8m56s
demoapp2-5c8cc4bb55-sz6nk 0/1 ContainerCreating 0 5s
[root@k8s-Node-01 ~]#kubectl get ns test
NAME STATUS AGE
test Active 33m
#No permissions outside the test namespace
[root@k8s-Node-01 ~]#kubectl get pods
Error from server (Forbidden): pods is forbidden: User "joe" cannot list resource "pods" in API group "" in the namespace "default"
#Use a ClusterRoleBinding to bind a ClusterRole
[root@k8s-Master-01 ~]#kubectl create clusterrolebinding joe-cluster --clusterrole=cluster-admin --user=joe -o yaml --dry-run=client > role/rolebinding-joe-cluster.yaml
[root@k8s-Master-01 ~]#kubectl apply -f role/rolebinding-joe-cluster.yaml
clusterrolebinding.rbac.authorization.k8s.io/joe-cluster created
[root@k8s-Master-01 ~]#kubectl get clusterrolebinding
NAME ROLE AGE
calico-kube-controllers ClusterRole/calico-kube-controllers 4d
calico-node ClusterRole/calico-node 4d
cluster-admin ClusterRole/cluster-admin 4d
joe-cluster ClusterRole/cluster-admin 26s
...
[root@k8s-Node-01 ~]#kubectl get ns
NAME STATUS AGE
default Active 4d
kafka Active 47h
kube-node-lease Active 4d
kube-public Active 4d
kube-system Active 4d
lnmp Active 4d
mysql-operator Active 41h
nfs Active 4d
test Active 38m
[root@k8s-Node-01 ~]#kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default demoapp-55c5f88dcb-b692x 1/1 Running 0 63m
kafka my-cluster-entity-operator-9d45b6b89-t7ws6 3/3 Running 3 (43h ago) 43h
kafka my-cluster-kafka-0 1/1 Running 0 43h
kafka my-cluster-kafka-1 1/1 Running 0 43h
kafka my-cluster-kafka-2 1/1 Running 0 43h
kafka my-cluster-zookeeper-0 1/1 Running 0 43h
kafka my-cluster-zookeeper-1 1/1 Running 0 43h
kafka my-cluster-zookeeper-2 1/1 Running 0 43h
kafka strimzi-cluster-operator-56d64c8584-795gp 1/1 Running 1 (99m ago) 43h
kube-system calico-kube-controllers-f79f7749d-8gdsj 1/1 Running 0 60m
kube-system calico-node-7rqsb 1/1 Running 0 60m
kube-system calico-node-fm4jx 1/1 Running 0 60m
kube-system calico-node-h456t 1/1 Running 0 60m
kube-system calico-node-hntvk 1/1 Running 0 60m
kube-system calico-node-hqpmr 1/1 Running 0 60m
kube-system calico-node-w8mf7 1/1 Running 0 60m
kube-system coredns-c676cc86f-k9d27 1/1 Running 0 4d
kube-system coredns-c676cc86f-w88hv 1/1 Running 0 4d
kube-system csi-nfs-controller-65cf7d587-9j65x 3/3 Running 0 43h
kube-system csi-nfs-controller-65cf7d587-hcgcq 3/3 Running 0 43h
kube-system csi-nfs-node-bq6wv 3/3 Running 0 44h
kube-system csi-nfs-node-dnvsm 3/3 Running 0 43h
kube-system csi-nfs-node-fwghn 3/3 Running 0 4d
kube-system csi-nfs-node-pmcng 3/3 Running 0 44h
kube-system csi-nfs-node-vrxbg 3/3 Running 0 4d
kube-system csi-nfs-node-zt4g6 3/3 Running 0 4d
kube-system etcd-k8s-master-01 1/1 Running 0 4d
kube-system etcd-k8s-master-02 1/1 Running 0 4d
kube-system etcd-k8s-master-03 1/1 Running 0 4d
kube-system kube-apiserver-k8s-master-01 1/1 Running 0 42h
kube-system kube-apiserver-k8s-master-02 1/1 Running 1 (4d ago) 4d
kube-system kube-apiserver-k8s-master-03 1/1 Running 0 4d
kube-system kube-controller-manager-k8s-master-01 1/1 Running 2 (43h ago) 4d
kube-system kube-controller-manager-k8s-master-02 1/1 Running 0 4d
kube-system kube-controller-manager-k8s-master-03 1/1 Running 0 4d
kube-system kube-proxy-4r7zf 1/1 Running 0 4d
kube-system kube-proxy-652dh 1/1 Running 0 4d
kube-system kube-proxy-8n5ql 1/1 Running 0 44h
kube-system kube-proxy-blwrx 1/1 Running 0 43h
kube-system kube-proxy-x2szb 1/1 Running 0 44h
kube-system kube-proxy-x49d7 1/1 Running 0 4d
kube-system kube-scheduler-k8s-master-01 1/1 Running 2 (43h ago) 4d
kube-system kube-scheduler-k8s-master-02 1/1 Running 0 4d
kube-system kube-scheduler-k8s-master-03 1/1 Running 0 4d
mysql-operator mysql-operator-6b4b96dbb5-rb6km 1/1 Running 0 41h
nfs nfs-server-5847b99d99-ncscm 1/1 Running 0 43h
test demoapp-55c5f88dcb-8zdjz 1/1 Running 0 15m
test demoapp2-5c8cc4bb55-sz6nk 1/1 Running 0 6m20s
Deploy an Ingress Controller
#Project: https://github.com/kubernetes/ingress-nginx
[root@k8s-Node-01 ~]#kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
[root@k8s-Node-01 ~]#kubectl get ns
NAME STATUS AGE
default Active 4d1h
ingress-nginx Active 22s
kafka Active 47h
kube-node-lease Active 4d1h
kube-public Active 4d1h
kube-system Active 4d1h
lnmp Active 4d
mysql-operator Active 42h
nfs Active 4d
test Active 54m
#The images may need to be pulled manually (or network access configured) before these pods can start
[root@k8s-Master-01 ~]#kubectl get pods -n ingress-nginx -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-vlgmh 0/1 Completed 0 6m10s 192.168.204.51 k8s-node-03 <none> <none>
ingress-nginx-admission-patch-bmchf 0/1 Completed 0 6m10s 192.168.8.54 k8s-node-02 <none> <none>
ingress-nginx-controller-575f7cf88b-vhgnv 1/1 Running 0 6m10s 192.168.8.55 k8s-node-02 <none> <none>
[root@k8s-Master-01 ~]#kubectl get svc -n ingress-nginx -owide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx-controller LoadBalancer 10.105.164.230 <pending> 80:31204/TCP,443:31172/TCP 132m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx-controller-admission ClusterIP 10.102.123.136 <none> 443/TCP 132m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
[root@k8s-Master-01 ~]#kubectl get pods -n ingress-nginx -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-vlgmh 0/1 Completed 0 133m 192.168.204.51 k8s-node-03 <none> <none>
ingress-nginx-admission-patch-bmchf 0/1 Completed 0 133m 192.168.8.54 k8s-node-02 <none> <none>
ingress-nginx-controller-575f7cf88b-vhgnv 1/1 Running 0 133m 192.168.8.55 k8s-node-02 <none> <none>
#The Service listing shows ingress-nginx-controller is of type LoadBalancer; without an external LB component its EXTERNAL-IP stays pending, so traffic can only enter through the controller's NodePort. In production pair it with a load-balancer component; in the lab it can be adjusted to use an externalIP with externalTrafficPolicy: Cluster
[root@k8s-Master-01 ~]#kubectl edit svc ingress-nginx-controller -n ingress-nginx
...
externalTrafficPolicy: Cluster
externalIPs:
- 10.0.0.220
...
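#The same change can be applied non-interactively (a sketch equivalent to the edit above):
kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Cluster","externalIPs":["10.0.0.220"]}}'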
[root@k8s-Master-01 ~]#kubectl get svc -n ingress-nginx -owide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx-controller LoadBalancer 10.105.164.230 10.0.0.220 80:31204/TCP,443:31172/TCP 141m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx-controller-admission ClusterIP 10.102.123.136 <none> 443/TCP 141m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
[root@k8s-Master-01 ~]#kubectl logs ingress-nginx-controller-575f7cf88b-vhgnv -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.5.1
Build: d003aae913cc25f375deb74f898c7f3c65c06f05
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
-------------------------------------------------------------------------------
W1117 04:00:55.834749 7 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1117 04:00:55.834895 7 main.go:209] "Creating API client" host="https://10.96.0.1:443"
I1117 04:00:55.847682 7 main.go:253] "Running in Kubernetes cluster" major="1" minor="25" git="v1.25.3" state="clean" commit="434bfd82814af038ad94d62ebe59b133fcb50506" platform="linux/amd64"
I1117 04:00:56.401505 7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1117 04:00:56.507855 7 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1117 04:00:56.576369 7 nginx.go:260] "Starting NGINX Ingress controller"
I1117 04:00:56.628537 7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6d192253-5cb2-4120-8508-3cdebe572347", APIVersion:"v1", ResourceVersion:"221137", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1117 04:00:57.778898 7 nginx.go:303] "Starting NGINX process"
I1117 04:00:57.778940 7 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
I1117 04:00:57.780098 7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1117 04:00:57.780520 7 controller.go:168] "Configuration changes detected, backend reload required"
I1117 04:00:57.836211 7 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-nginx-leader
I1117 04:00:57.836342 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-575f7cf88b-vhgnv"
I1117 04:00:58.037936 7 controller.go:185] "Backend successfully reloaded"
I1117 04:00:58.038038 7 controller.go:196] "Initial sync, sleeping for 1 second"
I1117 04:00:58.038483 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-575f7cf88b-vhgnv", UID:"5eb56d53-82e5-432f-a1d3-4e7e1f768a96", APIVersion:"v1", ResourceVersion:"221332", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
10.0.0.1 - - [17/Nov/2022:06:13:40 +0000] "GET / HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" 470 0.000 [] [] - - - - 6e200b8dfb9d796ec0539ecca433d1c0
10.0.0.1 - - [17/Nov/2022:06:13:40 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "http://10.0.0.205:31172/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" 415 0.000 [] [] - - - - 3ed4e8af50b94f526380a2c1d18ae5cb
Verifying different traffic-publishing patterns with Ingress
Simple fanout
[root@k8s-Master-01 ~]#kubectl create deployment demoapp10 --image=ikubernetes/demoapp:v1.0 --replicas=2
deployment.apps/demoapp10 created
[root@k8s-Master-01 ~]#kubectl create deployment demoapp11 --image=ikubernetes/demoapp:v1.1 --replicas=2
deployment.apps/demoapp11 created
[root@k8s-Master-01 ~]#kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demoapp10-845686d545-f7wf8 1/1 Running 0 31s 192.168.204.60 k8s-node-03 <none> <none>
demoapp10-845686d545-z5w88 1/1 Running 0 31s 192.168.8.58 k8s-node-02 <none> <none>
demoapp11-5457978bc9-5lfrg 1/1 Running 0 27s 192.168.8.59 k8s-node-02 <none> <none>
demoapp11-5457978bc9-rhn58 1/1 Running 0 27s 192.168.127.11 k8s-node-01 <none> <none>
[root@k8s-Master-01 ~]#kubectl create service clusterip demoapp10 --tcp=80:80
service/demoapp10 created
[root@k8s-Master-01 ~]#kubectl create service clusterip demoapp11 --tcp=80:80
service/demoapp11 created
[root@k8s-Master-01 ~]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-sts ClusterIP None <none> 80/TCP 2d3h
demoapp10 ClusterIP 10.105.201.223 <none> 80/TCP 9s
demoapp11 ClusterIP 10.107.134.32 <none> 80/TCP 6s
jpress ClusterIP 10.105.254.92 <none> 8080/TCP 4d2h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d5h
mysql-external ClusterIP None <none> 3306/TCP 4d3h
nginx-jpress NodePort 10.109.235.64 10.0.0.220 8080:30674/TCP 4d2h
statefulset-mysql ClusterIP None <none> 3306/TCP 2d4h
wordpress NodePort 10.111.250.69 10.0.0.220 80:31401/TCP 4d3h
[root@k8s-Master-01 ~]#kubectl get endpoints
NAME ENDPOINTS AGE
demoapp-sts <none> 2d3h
demoapp10 192.168.204.60:80,192.168.8.58:80 24s
demoapp11 192.168.127.11:80,192.168.8.59:80 20s
jpress <none> 4d2h
kubernetes 10.0.0.201:6443,10.0.0.202:6443,10.0.0.203:6443 4d5h
mysql-external 10.0.0.151:3306 4d3h
nginx-jpress <none> 4d2h
statefulset-mysql <none> 2d4h
wordpress <none> 4d3h
[root@k8s-Master-01 ~]#kubectl create ingress demoapp --rule="demoapp.shuhong.com/v10=demoapp10:80" --rule="demoapp.shuhong.com/v11=demoapp11:80" --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target="/"
ingress.networking.k8s.io/demoapp created
[root@k8s-Master-01 ~]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp nginx demoapp.shuhong.com 10.0.0.220 80 6m14s
[root@k8s-Master-01 ~]#kubectl get ingress -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
    creationTimestamp: "2022-11-17T06:44:12Z"
    generation: 1
    name: demoapp
    namespace: default
    resourceVersion: "248717"
    uid: 12b34d5b-b319-4ec6-8cda-e2b18ec871a4
  spec:
    ingressClassName: nginx
    rules:
    - host: demoapp.shuhong.com
      http:
        paths:
        - backend:
            service:
              name: demoapp10
              port:
                number: 80
          path: /v10
          pathType: Exact
        - backend:
            service:
              name: demoapp11
              port:
                number: 80
          path: /v11
          pathType: Exact
  status:
    loadBalancer:
      ingress:
      - ip: 10.0.0.220
kind: List
metadata:
  resourceVersion: ""
[root@k8s-Master-01 ~]#kubectl exec -it ingress-nginx-controller-575f7cf88b-vhgnv -n ingress-nginx -- /bin/sh
/etc/nginx $ nginx -T |less
...
	server {
		server_name demoapp.shuhong.com ;
		listen 80 ;
		listen 443 ssl http2 ;
		set $proxy_upstream_name "-";
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		location ~* "^/v11" {
			set $namespace      "default";
			set $ingress_name   "demoapp";
			set $service_name   "demoapp11";
			set $service_port   "80";
....
#With regex paths and a capture-group rewrite target
[root@k8s-Master-01 ~]#kubectl create ingress demoapp --rule="demoapp.shuhong.com/v10(/|$)(.*)=demoapp10:80" --rule="demoapp.shuhong.com/v11(/|$)(.*)=demoapp11:80" --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target="/$2"
ingress.networking.k8s.io/demoapp created
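With the capture groups in place, the /v10 or /v11 prefix is stripped before the request reaches the backend. A quick check, assuming demoapp.shuhong.com resolves to 10.0.0.220 (e.g. via /etc/hosts) and that the demoapp image serves a /hostname endpoint:
curl demoapp.shuhong.com/v10            #rewritten to / on demoapp10
curl demoapp.shuhong.com/v11/hostname   #rewritten to /hostname on demoapp11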
Name-based virtual hosting
[root@k8s-Master-01 ~]#kubectl create ingress demoapp --rule="demoapp10.shuhong.com/*=demoapp10:80" --rule="demoapp11.shuhong.com/*=demoapp11:80" --class=nginx
ingress.networking.k8s.io/demoapp created
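Without DNS records for the two hosts, the rules can be checked by setting the Host header explicitly; a sketch, assuming 10.0.0.220 is the ingress entry address configured earlier:
curl -H "Host: demoapp10.shuhong.com" http://10.0.0.220/   #expect demoapp v1.0
curl -H "Host: demoapp11.shuhong.com" http://10.0.0.220/   #expect demoapp v1.1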
TLS
[root@k8s-Master-01 certs.d]#(umask 077; openssl genrsa -out shuhong.key 2048)
Generating RSA private key, 2048 bit long modulus (2 primes)
.+++++
.+++++
e is 65537 (0x010001)
[root@k8s-Master-01 certs.d]#openssl req -new -x509 -key shuhong.key -out shuhong.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=services.shuhong.com
[root@k8s-Master-01 certs.d]#ls
magedu.key shuhong.crt shuhong.key
[root@k8s-Master-01 certs.d]#kubectl create secret tls tls-shuhong --cert=./shuhong.crt --key=./shuhong.key
secret/tls-shuhong created
[root@k8s-Master-01 certs.d]#kubectl create ingress tls-demo --rule='demoapp.shuhong.com/*=demoapp10:80,tls=tls-shuhong' --class=nginx
ingress.networking.k8s.io/tls-demo created
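The TLS rule can be verified with curl's --resolve option so the hostname maps to the ingress address without DNS; -k is needed because the certificate is self-signed (a sketch):
curl -k --resolve demoapp.shuhong.com:443:10.0.0.220 https://demoapp.shuhong.com/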
Deploy WordPress and publish it outside the cluster through Ingress Nginx
[root@k8s-Master-01 wordpress]#kubectl apply -f 03-deployment-wordpress.yaml
[root@k8s-Master-01 wordpress]#kubectl create ingress wordpress --rule="www.shuhong.com/*=wordpress:80" --class=nginx
ingress.networking.k8s.io/wordpress created
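A quick check with an explicit Host header (a sketch, assuming the wordpress Service from earlier sits in the default namespace and 10.0.0.220 is the ingress address):
curl -I -H "Host: www.shuhong.com" http://10.0.0.220/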
Release strategies
nginx.ingress.kubernetes.io/canary-by-header:
Splits traffic based on the request header named by this annotation; suitable for canary (grey) releases and A/B testing
[root@k8s-Master-01 ingress]#cat demoapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-Master-01 ingress]#cat demoapp-canary-by-header.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
  name: demoapp-canary-by-header
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix
[root@rocky8 ~]#curl -H "X-Canary: always" demoapp.shuhong.com
iKubernetes demoapp v1.1 !! ClientIP: 192.168.8.55, ServerName: demoapp11-5457978bc9-rhn58, ServerIP: 192.168.127.11!
[root@rocky8 ~]#curl -H "X-Canary: Never" demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
[root@rocky8 ~]#curl -H "X-Canary: " demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
[root@rocky8 ~]#curl demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
nginx.ingress.kubernetes.io/canary-by-header-value:
Splits traffic based on the value of that request header; the header name itself is given by the previous annotation (nginx.ingress.kubernetes.io/canary-by-header)
[root@k8s-Master-01 ingress]#cat demoapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-Master-01 ingress]#cat canary-by-header-value.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "IsVIP"
    nginx.ingress.kubernetes.io/canary-by-header-value: "false"
  name: demoapp-canary-by-header-value
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix
[root@rocky8 ~]#curl -H "IsVIP: false" demoapp.shuhong.com
iKubernetes demoapp v1.1 !! ClientIP: 192.168.8.55, ServerName: demoapp11-5457978bc9-rhn58, ServerIP: 192.168.127.11!
[root@rocky8 ~]#curl -H "IsVIP: true" demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
[root@rocky8 ~]#curl demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
nginx.ingress.kubernetes.io/canary-by-header-pattern:
Splits traffic by matching the value of the request header (named by canary-by-header) against a regular expression
[root@k8s-Master-01 ingress]#cat demoapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-Master-01 ingress]#cat demoapp-canary-by-header-pattern.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "Username"
    nginx.ingress.kubernetes.io/canary-by-header-pattern: "(vip|VIP)_.*"
  name: demoapp-canary-by-header-pattern
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix
[root@rocky8 ~]#curl demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
[root@rocky8 ~]#curl -H "Username: vip_ss" demoapp.shuhong.com
iKubernetes demoapp v1.1 !! ClientIP: 192.168.8.55, ServerName: demoapp11-5457978bc9-rhn58, ServerIP: 192.168.127.11!
nginx.ingress.kubernetes.io/canary-weight:
Splits traffic by service weight (0 – 100), routing that percentage of requests to the service named in the canary Ingress; suitable for blue-green style releases
[root@k8s-Master-01 ingress]#cat demoapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-Master-01 ingress]#cat demoapp-canary-by-weight.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  name: demoapp-canary-by-weight
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix
[root@rocky8 ~]#while true ;do curl demoapp.shuhong.com;sleep 1;done
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.8.55, ServerName: demoapp11-5457978bc9-5lfrg, ServerIP: 192.168.8.59!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.8.55, ServerName: demoapp11-5457978bc9-rhn58, ServerIP: 192.168.127.11!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-z5w88, ServerIP: 192.168.8.58!
nginx.ingress.kubernetes.io/canary-by-cookie:
Splits traffic based on a cookie; suitable for canary (grey) releases and A/B testing
[root@k8s-Master-01 ingress]#cat demoapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
[root@k8s-Master-01 ingress]#cat demoapp-canary-by-cookie.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "vip_user"
  name: demoapp-canary-by-cookie
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix
[root@rocky8 ~]#curl -b "vip_user=always" demoapp.shuhong.com
iKubernetes demoapp v1.1 !! ClientIP: 192.168.8.55, ServerName: demoapp11-5457978bc9-rhn58, ServerIP: 192.168.127.11!
[root@rocky8 ~]#curl demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
[root@rocky8 ~]#curl -b "vip_user=a" demoapp.shuhong.com
iKubernetes demoapp v1.0 !! ClientIP: 192.168.8.55, ServerName: demoapp10-845686d545-f7wf8, ServerIP: 192.168.204.60!
Note:
Canary rules are evaluated in a fixed order of precedence:
canary-by-header -> canary-by-cookie -> canary-weight
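As an illustration of that precedence, a single canary Ingress can carry both a header rule and a weight: requests matching the header always go to the canary service, and only the remaining traffic is split by percentage. A sketch, not applied above:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"   #evaluated before the weight
    nginx.ingress.kubernetes.io/canary-weight: "10"            #applies when the header does not decide
  name: demoapp-canary-combined
spec:
  rules:
  - host: demoapp.shuhong.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix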
Deploying WordPress with Helm and publishing it through Ingress
[root@k8s-Master-01 ~]#kubectl create ns wordpress
namespace/wordpress created
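The bitnami chart repository has to be registered before the install (a sketch, assuming it has not been added yet):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update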
[root@k8s-Master-01 ~]#helm install mysql \
> --set auth.rootPassword=shuhong \
> --set global.storageClass=nfs-csi \
> --set architecture=replication \
> --set auth.database=wpdb \
> --set auth.username=wpuser \
> --set auth.password='shuhong' \
> --set secondary.replicaCount=2 \
> --set auth.replicationPassword='replpass' \
> bitnami/mysql \
> -n wordpress
NAME: mysql
LAST DEPLOYED: Sat Nov 19 14:24:37 2022
NAMESPACE: wordpress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.4.4
APP VERSION: 8.0.31
** Please be patient while the chart is being deployed **
Tip:
Watch the deployment status using the command: kubectl get pods -w --namespace wordpress
Services:
echo Primary: mysql-primary.wordpress.svc.cluster.local:3306
echo Secondary: mysql-secondary.wordpress.svc.cluster.local:3306
Execute the following to get the administrator credentials:
echo Username: root
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace wordpress mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
To connect to your database:
1. Run a pod that you can use as a client:
kubectl run mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.31-debian-11-r10 --namespace wordpress --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
2. To connect to primary service (read/write):
mysql -h mysql-primary.wordpress.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
3. To connect to secondary service (read-only):
mysql -h mysql-secondary.wordpress.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"