In this article, we walk through how to access an App in a K8s cluster via kube-apiserver and discuss the related K8s access flow. We also cover 009.Kubernetes: deploying kube-apiserver from binaries, the difference between docker amd64 images and those without the amd64 suffix (e.g. kube-apiserver-amd64 vs kube-apiserver), the error "Error getting ConfigMap kube-system:kube-dns err: configmaps "kube-dns" not found", and K8S performance tuning for the APIServer, to give you a fuller picture of the topic.
Contents:
- How to access an App in a K8s cluster via kube-apiserver (the K8s access flow)
- 009.Kubernetes: deploying kube-apiserver from binaries
- What is the difference between a docker amd64 image and one without amd64? E.g. kube-apiserver-amd64 vs kube-apiserver
- Error getting ConfigMap kube-system:kube-dns err: configmaps "kube-dns" not found
- K8S performance tuning: tuning the K8S APIServer
How to access an App in a K8s cluster via kube-apiserver (the K8s access flow)
This article is shared from the Huawei Cloud community post 《通过 kube-apiserver 访问 K8s 集群中的 App》, by tsjsdbd.
Apps (or Services) in a K8s cluster are usually reached through ClusterIP, NodePort, or LoadBalancer, but you can also reach an App through kube-apiserver, i.e. the management plane.
In 《跟唐老师学习云网络 - Kubernetes 网络实现》 (Learning Cloud Networking with Teacher Tang: Kubernetes networking), several ways to reach containers in a K8s cluster are described:
- LoadBalancer
- Ingress
- ClusterIP
- NodePort
We won't analyze those again here; instead, let's look directly at how to reach the App inside a container through kube-apiserver.
1. Start the App
Create a file ng-dp.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable-perl
        ports:
        - containerPort: 80
Run:
kubectl apply -f ng-dp.yaml
This starts an nginx container that listens on port 80 inside the container.
2. Expose the App through a Service
Create a file ng-svc.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
Run:
kubectl apply -f ng-svc.yaml
This opens a cluster-internal Service (svc) channel to the App.
kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.247.0.1       <none>        443/TCP   7d
my-nginx     ClusterIP   10.247.124.234   <none>        80/TCP    4h4m
With the Service in place, we can access the App through kube-apiserver.
3. Access the App
Look up the kube-apiserver address:
kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.116:5443
CoreDNS is running at https://192.168.0.116:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
1) Access with a token:
Query the service accounts (sa):
kubectl get sa
NAME      SECRETS   AGE
default   1         7d
Then inspect the sa details:
kubectl describe sa default
Name:                default
Namespace:           default
Mountable secrets:   default-token-vztbc
Tokens:              default-token-vztbc
Next, inspect the secret to get the token value:
kubectl describe secret default-token-vztbc
Name:         default-token-vztbc
Namespace:    default
Type:         kubernetes.io/service-account-token
====
token:        eyJhbGciOiJSUzI1NiIsImtpZCI6InJlRWUxSFpvektO    <== take this value
After exporting it as an environment variable, you can access the App:
export TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6InJlRWUxSFpvektO
curl --noproxy '*' -kv -H "Authorization: Bearer $TOKEN" \
  https://192.168.0.116:5443/api/v1/namespaces/default/services/http:my-nginx:80/proxy/
If the request is rejected for insufficient permissions, grant the sa more privileges, e.g.:
kubectl create clusterrolebinding sa-tsj --clusterrole=cluster-admin --serviceaccount=default:default
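If you want to check which service account a token belongs to, or inspect its other claims, you can decode it locally: a service-account token is a JWT, i.e. three base64url segments joined by dots. A minimal sketch (the helper name `decode_jwt_payload` is ours; the token used below is synthetic, not a real cluster token):

```shell
# Decode the payload (second segment) of a JWT such as a service-account token.
# base64url uses '-' and '_' instead of '+' and '/' and strips '=' padding,
# so we translate the alphabet and restore the padding before decoding.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}
```

For a real token you would run `decode_jwt_payload "$TOKEN"` and look at claims such as `sub` (e.g. `system:serviceaccount:default:default`).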
2) Access with certificates:
Instead of fetching a token, you can configure certificate-based access. Extract the corresponding certificates from the kubeconfig file:
grep -A1 'client-certificate-data: ' /root/.kube/config | tail -n 1 | sed 's/ *//' | base64 -d > cert.pem
grep -A1 'client-key-data: ' /root/.kube/config | tail -n 1 | sed 's/ *//' | base64 -d > key.pem
grep -A1 'certificate-authority-data: ' /root/.kube/config | tail -n 1 | sed 's/ *//' | base64 -d > ca.pem
Then access with the certificates configured:
curl --noproxy '*' -kv --cacert ./ca.pem --key ./key.pem --cert ./cert.pem \
  https://192.168.0.116:5443/api/v1/namespaces/default/services/http:my-nginx:80/proxy/
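Note that the `grep -A1 … | tail -n 1` one-liners assume the base64 blob sits on the line after the `*-data:` key; kubectl normally emits key and value on the same line. A variant for the same-line layout, demonstrated here against a synthetic kubeconfig (the helper name `kubeconfig_field` is ours, a sketch rather than a robust parser):

```shell
# Print the decoded value of a base64 "*-data" field from a kubeconfig,
# assuming key and value share one line ("client-certificate-data: <base64>").
kubeconfig_field() {  # usage: kubeconfig_field <field-name> <kubeconfig-path>
  grep "${1}:" "$2" | head -n 1 | awk '{print $2}' | base64 -d
}
```

With a real kubeconfig, `kubeconfig_field client-certificate-data /root/.kube/config > cert.pem` would be the equivalent of the first one-liner above.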
4. URL format
kube-apiserver exposes proxy URLs in the following format:
http://api_addr/api/v1/namespaces/namespace_name/services/service_name/proxy
You can append the App's URL path suffix, query parameters, and so on to the end. For example, with a port name:
http://api_addr/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
If no port name is defined, you can use the port number instead, e.g.:
http://api_addr/api/v1/namespaces/namespace_name/services/service_name[:port_num]/proxy
Either way, whether or not the port has a name, using the port number always works.
By default, kube-apiserver uses http to reach your App; to use https you must say so explicitly, as follows:
http://api_addr/api/v1/namespaces/namespace_name/services/https:service_name:[port_name]/proxy
All supported proxy URL formats are summarized below:
<service_name> - access the default port over http
<service_name>:<port_name> - access the named port over http
<service_name>:<port_number> - access the given port over http
https:<service_name>: - access the default port over https (note the trailing colon)
https:<service_name>:<port_name> - access the named port over https
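The variants above can be captured in a small helper that assembles a proxy URL from its parts (the function name `svc_proxy_url` is ours, a sketch for illustration):

```shell
# Assemble a kube-apiserver service proxy URL.
# scheme: http or https; port: a port name, a port number, or "" for the default.
svc_proxy_url() {  # usage: svc_proxy_url <api_addr> <namespace> <scheme> <service> <port> [path]
  api=$1; ns=$2; scheme=$3; svc=$4; port=$5; path=${6:-/}
  echo "${api}/api/v1/namespaces/${ns}/services/${scheme}:${svc}:${port}/proxy${path}"
}
```

For example, `svc_proxy_url https://192.168.0.116:5443 default http my-nginx 80` reproduces the curl target used earlier.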
5. What is this good for?
Much of the time, Apps in a K8s cluster can only be reached through the data plane (whether via ClusterIP, NodePort, Ingress, etc.); to reach them from the Internet, for example, you must bind an EIP. But if the management plane can also reach the App, we can design a "proxy mode" that reuses the management-plane channel to provide default access to the App. Your users can then reach their App without binding an extra EIP.
009.Kubernetes: deploying kube-apiserver from binaries
1 Deploying the master node
1.1 Master node services
- kube-apiserver
- kube-scheduler
- kube-controller-manager
- kube-nginx
1.2 Install Kubernetes
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# wget https://dl.k8s.io/v1.14.2/kubernetes-server-linux-amd64.tar.gz
[root@k8smaster01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8smaster01 work]# cd kubernetes
[root@k8smaster01 kubernetes]# tar -xzvf kubernetes-src.tar.gz
1.3 Distribute Kubernetes
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
  done
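The same fan-out pattern (loop over `${MASTER_IPS[@]}`, then scp/ssh to each node) recurs throughout this deployment. A dry-run sketch that echoes the commands instead of executing them, useful for checking the loop before touching real hosts (the IP list matches the master hosts used in this guide; substitute your own):

```shell
# Dry-run of the distribution loop: print what would run on each master.
MASTER_IPS="172.24.8.71 172.24.8.72 172.24.8.73"
for master_ip in $MASTER_IPS; do
  echo ">>> ${master_ip}"
  echo "scp kubernetes/server/bin/kube-apiserver root@${master_ip}:/opt/k8s/bin/"
  echo "ssh root@${master_ip} chmod +x /opt/k8s/bin/*"
done
```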
2 Deploying a highly available kube-apiserver
2.1 About apiserver high availability
2.2 Create the Kubernetes certificate and private key
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.24.8.71",
    "172.24.8.72",
    "172.24.8.73",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local."
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Create the Kubernetes certificate signing request file
# kubectl get svc kubernetes
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
# Generate the kubernetes private key (kubernetes-key.pem) and certificate (kubernetes.pem)
2.3 Distribute the certificate and private key
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /etc/kubernetes/cert"
    scp kubernetes*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
2.4 Create the encryption config file
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
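The `${ENCRYPTION_KEY}` placeholder (set in environment.sh) must be a base64-encoded 32-byte key for the aescbc provider. It can be generated like this (a sketch; your environment.sh may already do the equivalent):

```shell
# Generate a random 32-byte AES key, base64-encoded, for the aescbc provider.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo "ENCRYPTION_KEY=${ENCRYPTION_KEY}"
```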
2.5 Distribute the encryption config file
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp encryption-config.yaml root@${master_ip}:/etc/kubernetes/
  done
2.6 Create the audit policy file
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch

  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get

  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update

  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get

  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'

  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events

  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch

  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch

  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection

  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch

  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io

  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF
2.7 Distribute the policy file
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp audit-policy.yaml root@${master_ip}:/etc/kubernetes/audit-policy.yaml
  done
2.8 Create the certificate and key for accessing metrics-server
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Create the metrics-server (aggregator) certificate signing request file
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client
# Generate the proxy-client private key (proxy-client-key.pem) and certificate (proxy-client.pem)
2.9 Distribute the certificate and private key
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp proxy-client*.pem root@${master_ip}:/etc/kubernetes/cert/
  done
2.10 Create the kube-apiserver systemd unit
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# cat > kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=${K8S_DIR}/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --advertise-address=##MASTER_IP## \\
  --default-not-ready-toleration-seconds=360 \\
  --default-unreachable-toleration-seconds=360 \\
  --feature-gates=DynamicAuditing=true \\
  --max-mutating-requests-inflight=2000 \\
  --max-requests-inflight=4000 \\
  --default-watch-cache-size=200 \\
  --delete-collection-workers=2 \\
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \\
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --bind-address=##MASTER_IP## \\
  --secure-port=6443 \\
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \\
  --insecure-port=0 \\
  --audit-dynamic-configuration \\
  --audit-log-maxage=15 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-mode=batch \\
  --audit-log-truncate-enabled \\
  --audit-log-batch-buffer-size=20000 \\
  --audit-log-batch-max-size=2 \\
  --audit-log-path=${K8S_DIR}/kube-apiserver/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --profiling \\
  --anonymous-auth=false \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --enable-bootstrap-token-auth \\
  --requestheader-allowed-names="aggregator" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-admission-plugins=NodeRestriction \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --event-ttl=168h \\
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \\
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \\
  --kubelet-https=true \\
  --kubelet-timeout=10s \\
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
- --advertise-address: the IP the apiserver advertises (the backend node IP of the kubernetes service);
- --default-*-toleration-seconds: thresholds for node-abnormality tolerations;
- --max-*-requests-inflight: maximum in-flight request limits;
- --etcd-*: the certificates for accessing etcd and the etcd server addresses;
- --experimental-encryption-provider-config: the config used to encrypt secrets stored in etcd;
- --bind-address: the IP the https endpoint listens on; must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
- --secure-port: the https listening port;
- --insecure-port=0: disables the insecure http port (8080);
- --tls-*-file: the certificate, private key, and CA file used by the apiserver;
- --audit-*: audit policy and audit log parameters;
- --client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
- --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
- --requestheader-*: parameters for kube-apiserver's aggregation layer, needed by proxy-client & HPA;
- --requestheader-client-ca-file: the CA that signs the certificates given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
- --requestheader-allowed-names: must not be empty; a comma-separated list of the CN names of the --proxy-client-cert-file certificate, here set to "aggregator";
- --service-account-key-file: the public key file used to verify ServiceAccount tokens; pairs with the private key given by kube-controller-manager's --service-account-private-key-file;
- --runtime-config=api/all=true: enables APIs of all versions, such as autoscaling/v2alpha1;
- --authorization-mode=Node,RBAC and --anonymous-auth=false: enable the Node and RBAC authorization modes and reject unauthorized requests;
- --enable-admission-plugins: enables plugins that are off by default;
- --allow-privileged: allows running privileged containers;
- --apiserver-count=3: the number of apiserver instances;
- --event-ttl: the retention time for events;
- --kubelet-*: if set, the kubelet APIs are accessed over https; RBAC rules must be defined for the certificate's user (the kubernetes*.pem certificates above use the user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
- --proxy-client-*: the certificate the apiserver uses to access metrics-server;
- --service-cluster-ip-range: the Service cluster IP range;
- --service-node-port-range: the NodePort port range.
Note: if --requestheader-allowed-names is non-empty and the CN of the --proxy-client-cert-file certificate is not in the list, later queries for node or pod metrics fail with:
[root@zhangjun-k8s01 1.8+]# kubectl top nodes
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
2.11 Distribute the systemd unit
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for (( i=0; i < 3; i++ ))
  do
    sed -e "s/##MASTER_NAME##/${MASTER_NAMES[i]}/" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/" kube-apiserver.service.template > kube-apiserver-${MASTER_IPS[i]}.service
  done
[root@k8smaster01 work]# ls kube-apiserver*.service    # substitute the per-master IPs
[root@k8smaster01 ~]# cd /opt/k8s/work
[root@k8smaster01 work]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 work]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-apiserver-${master_ip}.service root@${master_ip}:/etc/systemd/system/kube-apiserver.service
  done    # distribute the systemd units
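The rendering step above is plain `sed` placeholder substitution. Isolated as a tiny helper for clarity (the function name `render_unit` is ours, a sketch of the same idea):

```shell
# Substitute the ##MASTER_IP## placeholder in a unit template read from stdin.
render_unit() {  # usage: render_unit <master_ip> < template
  sed -e "s/##MASTER_IP##/$1/g"
}
```

For example, `render_unit 172.24.8.71 < kube-apiserver.service.template > kube-apiserver-172.24.8.71.service`.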
3 Start and verify
3.1 Start the kube-apiserver service
[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
  done
3.2 Check the kube-apiserver service
[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status kube-apiserver | grep 'Active:'"
  done
3.3 Inspect the data kube-apiserver writes to etcd
[root@k8smaster01 ~]# source /opt/k8s/bin/environment.sh
[root@k8smaster01 ~]# ETCDCTL_API=3 etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --cacert=/opt/k8s/work/ca.pem \
  --cert=/opt/k8s/work/etcd.pem \
  --key=/opt/k8s/work/etcd-key.pem \
  get /registry/ --prefix --keys-only
3.4 Check cluster info
[root@k8smaster01 ~]# kubectl cluster-info
[root@k8smaster01 ~]# kubectl get all --all-namespaces
[root@k8smaster01 ~]# kubectl get componentstatuses
[root@k8smaster01 ~]# sudo netstat -lnpt | grep kube    # check the ports kube-apiserver listens on
3.5 Authorization
[root@k8smaster01 ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Original source: https://www.cnblogs.com/itzgr/p/11873920.html
What is the difference between a docker amd64 image and one without amd64? E.g. kube-apiserver-amd64 vs kube-apiserver
Error getting ConfigMap kube-system:kube-dns err: configmaps "kube-dns" not found
Problem:
DNS resolution does not work inside the cluster.
[root@k8s-master ~]# kubectl exec -it busybox sh
/ # nslookup nginx
Server:    10.254.230.254
Address 1: 10.254.230.254
nslookup: can't resolve 'nginx'
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.254.230.254
nameserver 8.8.8.8
options ndots:5
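The `options ndots:5` line explains the resolution behavior seen here: a name containing fewer than five dots is first tried with each `search` suffix appended, which is how a bare name like `nginx` can resolve to `nginx.default.svc.cluster.local`. A sketch of the candidate order the resolver tries (the helper `expand_queries` is ours, a simplification of the resolver's actual logic):

```shell
# List the FQDN candidates a resolver with ndots:5 would try, in order.
expand_queries() {  # usage: expand_queries <name> <search-domain>...
  name=$1; shift
  dots=$(printf '%s' "$name" | tr -cd '.' | wc -c)
  if [ "$dots" -lt 5 ]; then
    for d in "$@"; do echo "${name}.${d}"; done
  fi
  echo "$name"
}
```

For example, `expand_queries nginx default.svc.cluster.local svc.cluster.local cluster.local` lists the in-cluster service FQDN first.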
Check the pod logs:
[root@k8s-master ~]# kubectl logs -f kube-dns-3204099596-x7vdj -c kubedns -n kube-system
I0613 20:15:12.651122 1 dns.go:42] version: v1.6.0-alpha.0.680+3872cb93abf948-dirty
I0613 20:15:12.651262 1 server.go:107] Using http://192.168.150.61:8080 for kubernetes master, kubernetes API: v1
I0613 20:15:12.651550 1 server.go:68] Using configuration read from ConfigMap: kube-system:kube-dns
I0613 20:15:12.651599 1 server.go:113] FLAG: --alsologtostderr="false"
I0613 20:15:12.651613 1 server.go:113] FLAG: --config-map="kube-dns"
I0613 20:15:12.651620 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0613 20:15:12.651626 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0613 20:15:12.651630 1 server.go:113] FLAG: --dns-port="10053"
I0613 20:15:12.651637 1 server.go:113] FLAG: --domain="cluster.local."
I0613 20:15:12.651645 1 server.go:113] FLAG: --federations=""
I0613 20:15:12.651651 1 server.go:113] FLAG: --healthz-port="8081"
I0613 20:15:12.651656 1 server.go:113] FLAG: --kube-master-url="http://192.168.150.61:8080"
I0613 20:15:12.651662 1 server.go:113] FLAG: --kubecfg-file=""
I0613 20:15:12.651666 1 server.go:113] FLAG: --log-backtrace-at=":0"
I0613 20:15:12.651673 1 server.go:113] FLAG: --log-dir=""
I0613 20:15:12.651679 1 server.go:113] FLAG: --log-flush-frequency="5s"
I0613 20:15:12.651685 1 server.go:113] FLAG: --logtostderr="true"
I0613 20:15:12.651691 1 server.go:113] FLAG: --stderrthreshold="2"
I0613 20:15:12.651695 1 server.go:113] FLAG: --v="0"
I0613 20:15:12.651700 1 server.go:113] FLAG: --version="false"
I0613 20:15:12.651707 1 server.go:113] FLAG: --vmodule=""
I0613 20:15:12.651756 1 server.go:155] Starting SkyDNS server (0.0.0.0:10053)
I0613 20:15:12.666762 1 server.go:165] Skydns metrics enabled (/metrics:10055)
I0613 20:15:12.669465 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0613 20:15:12.669518 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
E0613 20:15:12.679738 1 sync.go:105] Error getting ConfigMap kube-system:kube-dns err: configmaps "kube-dns" not found
E0613 20:15:12.679763 1 dns.go:190] Error getting initial ConfigMap: configmaps "kube-dns" not found, starting with default values
I0613 20:15:12.684971 1 server.go:126] Setting up Healthz Handler (/readiness)
I0613 20:15:12.685000 1 server.go:131] Setting up cache handler (/cache)
I0613 20:15:12.685007 1 server.go:120] Status HTTP port 8081
Fix: create the missing kube-dns ConfigMap.
[root@k8s-master ~]# cat kube-dns-cm.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["1.2.3.4"]
kubectl create -f kube-dns-cm.yml
Restart the kube-dns pods.
[root@k8s-master ~]# kubectl get deploy,po,svc,cm,ep -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/kube-dns 1 1 1 1 33m
NAME READY STATUS RESTARTS AGE
po/kube-dns-3204099596-n2w84 4/4 Unknown 0 33m
po/kube-dns-3204099596-x7vdj 4/4 Running 0 23m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kube-dns 10.254.230.254 <none> 53/UDP,53/TCP 31m
NAME DATA AGE
cm/kube-dns 1 13m
NAME ENDPOINTS AGE
ep/kube-controller-manager <none> 25m
ep/kube-dns 10.0.55.7:53,10.0.55.7:53 31m
ep/kube-scheduler <none> 25m
Run nslookup again:
[root@k8s-master ~]# kubectl exec -it busybox sh
/ # nslookup kubernetes
Server: 10.254.230.254
Address 1: 10.254.230.254 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx
Server:    10.254.230.254
Address 1: 10.254.230.254 kube-dns.kube-system.svc.cluster.local
Name:      nginx
Address 1: 10.254.213.70 nginx.default.svc.cluster.local
References:
https://cloud.tencent.com/developer/article/1649590
https://www.thinbug.com/q/43240135
K8S performance tuning: tuning the K8S APIServer
Foreword
This is the second article in the K8S performance tuning series: best-practice parameters for Kubernetes API Server performance.
Series:
- "K8S performance tuning: OS sysctl tuning"
Parameters at a glance
Recommended kube-apiserver tuning parameters:
- --default-watch-cache-size: default 100; the cache pool used for List-Watch; 1000 or more is recommended;
- --delete-collection-workers: default 1; speeds up namespace cleanup, useful for multi-tenant scenarios; 10 is recommended;
- --event-ttl: default 1h0m0s; controls how long events are retained; for clusters with many events, 30m is recommended to keep etcd from growing too fast;
- --max-mutating-requests-inflight: default 200; the rate limit for write requests; 800 or higher is recommended;
- --max-requests-inflight: default 400; the rate limit for read requests; 1600 or higher is recommended;
- --watch-cache-sizes: set heuristically by the system based on the environment; applies to core resources such as pods/nodes/endpoints, while other resources follow default-watch-cache-size; since K8s v1.19 this parameter is sized dynamically, and that version is recommended.
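Before changing any of these, it helps to read the current values off the running process. A helper that extracts a flag's value from a kube-apiserver command-line string, e.g. as captured via `ps -ef | grep '[k]ube-apiserver'` (the function name `flag_value` is ours, a sketch):

```shell
# Print the value of --<flag>= from a command-line string; prints nothing if unset.
flag_value() {  # usage: flag_value <flag-name> <command-line>
  printf '%s\n' "$2" | tr ' ' '\n' | sed -n "s/^--${1}=//p"
}
```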
Where three walk together, one can be my teacher; knowledge shared is knowledge for all. Written by the 东风微鸣 tech blog, EWhisper.cn.