本文将介绍CentOS 7 ETCD集群配置大全的详细情况,特别是关于centos7 /etc/sysconfig/network的相关信息。我们将通过案例分析、数据研究等多种方式,帮助您更全面地了解这个主题,同时也将涉及一些关于CentOS 6 升级到 CentOS 7、CentOS 6 和 CentOS 7 防火墙的关闭、CentOS 6, CentOS 7 安装mysql数据库、CentOS 6.6 系统升级到 CentOS 6.7的知识。
本文目录一览:
- CentOS 7 ETCD集群配置大全(centos7 /etc/sysconfig/network)
- CentOS 6 升级到 CentOS 7
- CentOS 6 和 CentOS 7 防火墙的关闭
- CentOS 6, CentOS 7 安装mysql数据库
- CentOS 6.6 系统升级到 CentOS 6.7
CentOS 7 ETCD集群配置大全(centos7 /etc/sysconfig/network)
目录
- 前言
- 环境准备
- 安装
- 静态集群
  - 配置
    - node01 配置文件
    - node02 配置文件
    - node03 配置文件
  - 启动测试
  - 查看集群状态
- 生成TLS证书
  - etcd证书创建
    - 安装cfssl工具集
    - 生成证书
    - 分发证书到各节点上
- 静态TLS集群
  - etcd 配置
    - node01 配置文件
    - node02 配置文件
    - node03 配置文件
  - 启动测试
  - 检查TLS集群状态
- ETCD 动态集群基于DNS的SRV解析自动发现
  - 添加SRV解析
    - 方法一: 使用 bind 配置SRV解析
    - 方法二: 使用 dnsmasq 配置SRV解析
    - 验证SRV解析是否正常
  - 配置ETCD
    - node01 配置文件
    - node02 配置文件
    - node03 配置文件
  - 启动并测试
- ETCD TLS动态集群基于DNS的SRV解析自动发现
  - 添加SRV解析
    - 方法一: 使用 bind 配置SRV解析
    - 方法二: 使用 dnsmasq 配置SRV解析
    - 验证SRV解析是否正常
  - ETCD 配置
    - node01 配置文件
    - node02 配置文件
    - node03 配置文件
  - 启动测试
前言
Etcd 是 CoreOS 基于 Raft 开发的分布式 key-value 存储,可用于服务发现、共享配置以及一致性保障(如数据库选主、分布式锁等)。
本次环境用于 k8s 集群。由于二进制部署 k8s 时,Etcd 集群曾引发各种各样的问题,特意抽出时间来研究 Etcd 集群。
Etcd 集群配置分为三种:
- 静态发现
- Etcd 动态发现
- DNS 动态发现:通过 DNS 的 SRV 解析动态发现集群
本次主要基于 静态发现 和 DNS动态发现 两种,并结合自签的TLS证书来创建集群。
环境准备
此环境实际用于 k8s 中的 ETCD 集群,本文档即基于该环境整理。
| 主机名 | 角色 | IP | 系统版本 | 内核版本 |
| --- | --- | --- | --- | --- |
| node01.k8s.com | node01 | 192.168.1.91 | CentOS 7.7 | 5.1.4-1.el7.elrepo.x86_64 |
| node02.k8s.com | node02 | 192.168.1.92 | CentOS 7.7 | 5.1.4-1.el7.elrepo.x86_64 |
| node03.k8s.com | node03 | 192.168.1.93 | CentOS 7.7 | 5.1.4-1.el7.elrepo.x86_64 |
安装
在三台机器上均执行
[root@node01 ~]# yum install etcd -y
[root@node01 ~]# rpm -qa etcd
etcd-3.3.11-2.el7.centos.x86_64
创建Etcd所需目录,在三台机器上均执行
mkdir /data/k8s/etcd/{data,wal} -p
mkdir -p /etc/kubernetes/cert
chown -R etcd.etcd /data/k8s/etcd
静态集群
配置
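下面各节点的配置均写入 /etc/etcd/etcd.conf(yum 安装的 etcd,其 systemd unit 一般通过 EnvironmentFile 引用该文件)。修改前建议先备份默认配置,下面是一个简单示例(路径以实际安装为准):
# 备份默认配置
cp /etc/etcd/etcd.conf{,.bak}
# 确认 unit 文件引用的环境变量文件路径
grep EnvironmentFile /usr/lib/systemd/system/etcd.service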
node01 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.91:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.91:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.91:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.91:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.91:2380,etcd2=http://192.168.1.92:2380,etcd3=http://192.168.1.93:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node02 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.92:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.92:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.92:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.92:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.91:2380,etcd2=http://192.168.1.92:2380,etcd3=http://192.168.1.93:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node03 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.93:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.93:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.93:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.93:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.91:2380,etcd2=http://192.168.1.92:2380,etcd3=http://192.168.1.93:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
启动测试
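三个节点的配置完成后需要分别启动 etcd。初次引导时,先启动的成员会等待其他成员加入才能完成集群建立,建议三台尽快先后执行。除了像下面这样逐台启动外,也可以从 node01 批量远程启动(示例,假设各节点间已配置 root 免密 ssh):
for ip in 192.168.1.91 192.168.1.92 192.168.1.93; do
  ssh root@${ip} "systemctl enable etcd && systemctl start etcd"
done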
[root@node01 etcd]# systemctl start etcd
[root@node01 etcd]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-11-07 09:28:54 CST; 5s ago
Main PID: 1546 (etcd)
Tasks: 8
Memory: 41.3M
CGroup: /system.slice/etcd.service
└─1546 /usr/bin/etcd --name=etcd1 --data-dir=/data/k8s/etcd/data --listen-client-urls=http://192.168.1.91:2379
Nov 07 09:28:54 node01.k8s.com etcd[1546]: 3b8b38de05e2c497 [term: 1] received a MsgVote message with higher term from 9c64fba479c5e94 [term: 2]
Nov 07 09:28:54 node01.k8s.com etcd[1546]: 3b8b38de05e2c497 became follower at term 2
Nov 07 09:28:54 node01.k8s.com etcd[1546]: 3b8b38de05e2c497 [logterm: 1, index: 3, Vote: 0] cast MsgVote for 9c64fba479c5e94 [logterm: 1, index: 3] at term 2
Nov 07 09:28:54 node01.k8s.com etcd[1546]: raft.node: 3b8b38de05e2c497 elected leader 9c64fba479c5e94 at term 2
Nov 07 09:28:54 node01.k8s.com etcd[1546]: published {Name:etcd1 ClientURLs:[http://192.168.1.91:2379]} to cluster 19456f0bfd57284e
Nov 07 09:28:54 node01.k8s.com etcd[1546]: ready to serve client requests
Nov 07 09:28:54 node01.k8s.com etcd[1546]: serving insecure client requests on 192.168.1.91:2379, this is strongly discouraged!
Nov 07 09:28:54 node01.k8s.com systemd[1]: Started Etcd Server.
Nov 07 09:28:54 node01.k8s.com etcd[1546]: set the initial cluster version to 3.3
Nov 07 09:28:54 node01.k8s.com etcd[1546]: enabled capabilities for version 3.3
查看 /var/log/messages 日志,会有如下体现:
Nov 7 09:28:53 node02 etcd: added member 9c64fba479c5e94 [http://192.168.1.92:2380] to cluster 19456f0bfd57284e
Nov 7 09:28:53 node02 etcd: added member 3b8b38de05e2c497 [http://192.168.1.91:2380] to cluster 19456f0bfd57284e
Nov 7 09:28:53 node02 etcd: added member 76ea8679db7365b3 [http://192.168.1.93:2380] to cluster 19456f0bfd57284e
查看集群状态
[root@node01 etcd]# ETCDCTL_API=3 etcdctl --endpoints=http://192.168.1.91:2379,http://192.168.1.92:2379,http://192.168.1.93:2379 endpoint health
http://192.168.1.92:2379 is healthy: successfully committed proposal: took = 1.103545ms
http://192.168.1.93:2379 is healthy: successfully committed proposal: took = 2.122478ms
http://192.168.1.91:2379 is healthy: successfully committed proposal: took = 2.690215ms
[root@node01 etcd]# etcdctl --endpoints=http://192.168.1.91:2379,http://192.168.1.92:2379,http://192.168.1.93:2379 cluster-health
member 9c64fba479c5e94 is healthy: got healthy result from http://192.168.1.92:2379
member 3b8b38de05e2c497 is healthy: got healthy result from http://192.168.1.91:2379
member 76ea8679db7365b3 is healthy: got healthy result from http://192.168.1.93:2379
cluster is healthy
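集群健康后,可以再做一次简单的读写测试,确认数据在集群内同步(示例,key 和 value 均为演示用的假设值):
# 在 node01 写入
ETCDCTL_API=3 etcdctl --endpoints=http://192.168.1.91:2379 put /test/hello world
# 在 node02 读取,能取到值即说明集群数据同步正常
ETCDCTL_API=3 etcdctl --endpoints=http://192.168.1.92:2379 get /test/hello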
生成TLS证书
使用自签证书
CA(Certificate Authority)是自签名的根证书,用来签名后续创建的其他证书。本文章使用CloudFlare的PKI工具cfssl创建所有证书。
etcd证书创建
整个证书的创建过程均在 node01 上操作。
安装cfssl工具集
mkdir -p /opt/k8s/{bin,cert,work} && cd /opt/k8s
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
chmod +x /opt/k8s/bin/*
echo 'export PATH=/opt/k8s/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile
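安装完成后可先确认 cfssl 是否可用(示例,版本号以实际下载为准):
cfssl version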
生成证书
创建根证书 (CA)
CA证书是集群所有节点共享的,只需要创建一个CA证书,后续创建的所有证书都是由它签名
创建配置文件
CA 配置文件用于配置根证书的使用场景(profile)和具体参数(usage、过期时间、服务端认证、客户端认证、加密等)。
cd /opt/k8s/work
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
######################
signing 表示该证书可用于签名其它证书,生成的 ca.pem 证书中 CA=TRUE
server auth 表示client可以用该证书对server提供的证书进行验证
client auth 表示server可以用该证书对client提供的证书进行验证
创建证书签名请求文件
cd /opt/k8s/work
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
],
"ca": {
"expiry": "876000h"
}
}
EOF
#######################
CN CommonName,kube-apiserver从证书中提取该字段作为请求的用户名(User Name),浏览器使用该字段验证网站是否合法
O Organization,kube-apiserver 从证书中提取该字段作为请求用户所属的组(Group)
kube-apiserver将提取的User、Group作为RBAC授权的用户和标识
生成CA证书和私钥
cd /opt/k8s/work
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
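生成后可以用 cfssl-certinfo 查看 CA 证书内容,确认 CN、有效期等字段符合预期(示例):
cfssl-certinfo -cert ca.pem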
创建etcd证书和私钥
cd /opt/k8s/work
cat > etcd-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.1.91",
"192.168.1.92",
"192.168.1.93",
"k8s.com",
"etcd1.k8s.com",
"etcd2.k8s.com",
"etcd3.k8s.com"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "4Paradigm"
}
]
}
EOF
#hosts 字段指定授权使用该证书的 etcd 节点 IP 或域名列表,需要将 etcd 集群的 3 个节点都添加其中
在这一步需要把域名都加进去,否则会在日志中报错:
Nov 7 12:37:03 node01 etcd: rejected connection from "192.168.1.93:46294" (error "remote error: tls: bad certificate", ServerName "k8s.com")
生成证书和私钥
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
-ca-key=/opt/k8s/work/ca-key.pem \
-config=/opt/k8s/work/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem -l
-rw------- 1 root root 1675 Nov 7 09:52 etcd-key.pem
-rw-r--r-- 1 root root 1444 Nov 7 09:52 etcd.pem
etcd 使用的TLS证书创建完成
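签发完成后,建议核对证书里的 SAN 列表是否包含全部节点 IP 和域名,避免出现上文提到的 bad certificate 报错(示例,使用 openssl 查看):
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'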
分发证书到各节点上
要在所有节点上创建对应的目录(证书目录 /etc/etcd/cert 也要提前建好,否则下面的 scp 会失败):
mkdir /data/k8s/etcd/{data,wal} -p
mkdir -p /etc/kubernetes/cert /etc/etcd/cert
chown -R etcd.etcd /data/k8s/etcd
分发证书
cd /opt/k8s/work
scp ca*.pem ca-config.json 192.168.1.91:/etc/kubernetes/cert
scp ca*.pem ca-config.json 192.168.1.92:/etc/kubernetes/cert
scp ca*.pem ca-config.json 192.168.1.93:/etc/kubernetes/cert
scp etcd*pem 192.168.1.91:/etc/etcd/cert/
scp etcd*pem 192.168.1.92:/etc/etcd/cert/
scp etcd*pem 192.168.1.93:/etc/etcd/cert/
在所有节点上执行:
chown -R etcd.etcd /etc/etcd/cert
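也可以用一个循环把建目录、分发证书、修改属主合并执行(示例,假设已配置 root 免密 ssh):
cd /opt/k8s/work
for ip in 192.168.1.91 192.168.1.92 192.168.1.93; do
  ssh root@${ip} "mkdir -p /etc/kubernetes/cert /etc/etcd/cert"
  scp ca*.pem ca-config.json root@${ip}:/etc/kubernetes/cert/
  scp etcd*.pem root@${ip}:/etc/etcd/cert/
  ssh root@${ip} "chown -R etcd.etcd /etc/etcd/cert"
done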
静态TLS集群
etcd 配置
node01 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.1.91:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.91:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.91:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.91:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.91:2380,etcd2=https://192.168.1.92:2380,etcd3=https://192.168.1.93:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_PEER_AUTO_TLS="true"
node02 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.1.92:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.92:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.92:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.92:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.91:2380,etcd2=https://192.168.1.92:2380,etcd3=https://192.168.1.93:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_PEER_AUTO_TLS="true"
node03 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.1.93:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.93:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.93:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.93:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.91:2380,etcd2=https://192.168.1.92:2380,etcd3=https://192.168.1.93:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_PEER_AUTO_TLS="true"
启动测试
[root@node01 work]# systemctl start etcd
[root@node01 work]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-11-07 10:15:58 CST; 5s ago
Main PID: 2078 (etcd)
Tasks: 8
Memory: 28.9M
CGroup: /system.slice/etcd.service
└─2078 /usr/bin/etcd --name=etcd1 --data-dir=/data/k8s/etcd/data --listen-client-urls=https://192.168.1.91:2379
Nov 07 10:15:58 node01.k8s.com etcd[2078]: 2a40d8ba966d12fe [term: 1] received a MsgVote message with higher term from af05139f75a68867 [term: 2]
Nov 07 10:15:58 node01.k8s.com etcd[2078]: 2a40d8ba966d12fe became follower at term 2
Nov 07 10:15:58 node01.k8s.com etcd[2078]: 2a40d8ba966d12fe [logterm: 1, index: 3, Vote: 0] cast MsgVote for af05139f75a68867 [logterm: 1, index: 3] at term 2
Nov 07 10:15:58 node01.k8s.com etcd[2078]: raft.node: 2a40d8ba966d12fe elected leader af05139f75a68867 at term 2
Nov 07 10:15:58 node01.k8s.com etcd[2078]: published {Name:etcd1 ClientURLs:[https://192.168.1.91:2379]} to cluster f3e9c54e1aafb3c1
Nov 07 10:15:58 node01.k8s.com etcd[2078]: ready to serve client requests
Nov 07 10:15:58 node01.k8s.com etcd[2078]: serving client requests on 192.168.1.91:2379
Nov 07 10:15:58 node01.k8s.com systemd[1]: Started Etcd Server.
Nov 07 10:15:58 node01.k8s.com etcd[2078]: set the initial cluster version to 3.3
Nov 07 10:15:58 node01.k8s.com etcd[2078]: enabled capabilities for version 3.3
查看 /var/log/messages 日志,会有如下体现:
Nov 7 10:15:57 node01 etcd: added member 2a40d8ba966d12fe [https://192.168.1.91:2380] to cluster f3e9c54e1aafb3c1
Nov 7 10:15:57 node01 etcd: added member af05139f75a68867 [https://192.168.1.92:2380] to cluster f3e9c54e1aafb3c1
Nov 7 10:15:57 node01 etcd: added member c3bab7c20fba3f60 [https://192.168.1.93:2380] to cluster f3e9c54e1aafb3c1
检查TLS集群状态
ETCDCTL_API=3 etcdctl \
--endpoints=https://etcd1.k8s.com:2379,https://etcd2.k8s.com:2379,https://etcd3.k8s.com:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint health
# 输出
https://192.168.1.92:2379 is healthy: successfully committed proposal: took = 1.317022ms
https://192.168.1.91:2379 is healthy: successfully committed proposal: took = 1.59958ms
https://192.168.1.93:2379 is healthy: successfully committed proposal: took = 1.453049ms
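也可以带同样的证书参数查看成员列表,确认三个成员的 peer/client URL 均为 https(示例):
ETCDCTL_API=3 etcdctl \
--endpoints=https://etcd1.k8s.com:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem member list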
ETCD 动态集群基于DNS的SRV解析自动发现
需要局域网内部有DNS服务器
添加SRV解析
目前常用的内部 DNS 服务有两种:bind 和 dnsmasq。下面都会列出具体的配置,但只需要配置其中之一即可。
DNS 如果配置有问题,会有如下报错:
etcd: error setting up initial cluster: cannot find local etcd member "etcd1" in SRV records
方法一: 使用 bind 配置SRV解析
如果内部没有 bind 服务,可以参考部署文档: https://www.cnblogs.com/winstom/p/11806962.html
使用的域名为 k8s.com,在 bind 的 zone 文件中添加如下解析:
etcd1 IN A 192.168.1.91
etcd2 IN A 192.168.1.92
etcd3 IN A 192.168.1.93
; SRV 记录格式: priority weight port target
_etcd-server._tcp.k8s.com. 1H IN SRV 0 100 2380 etcd1
_etcd-server._tcp.k8s.com. 1H IN SRV 0 100 2380 etcd2
_etcd-server._tcp.k8s.com. 1H IN SRV 0 100 2380 etcd3
_etcd-client._tcp.k8s.com. 1H IN SRV 0 100 2379 etcd1
_etcd-client._tcp.k8s.com. 1H IN SRV 0 100 2379 etcd2
_etcd-client._tcp.k8s.com. 1H IN SRV 0 100 2379 etcd3
修改之后重新加载配置文件:
[root@jenkins named]# named-checkzone k8s.com k8s.com.zone
zone k8s.com/IN: loaded serial 0
OK
[root@jenkins named]# rndc reload
server reload successful
方法二: 使用 dnsmasq 配置SRV解析
如果内部没有 dnsmasq 服务,可以参考部署文档: https://www.cnblogs.com/winstom/p/11809066.html
使用的域名为 k8s.com,具体修改如下:
在 /etc/dnsmasq_hosts 中新增下面内容:
192.168.1.91 etcd1 etcd1.k8s.com
192.168.1.92 etcd2 etcd2.k8s.com
192.168.1.93 etcd3 etcd3.k8s.com
在 /etc/dnsmasq.conf 文件中增加下面SRV解析内容:
srv-host=_etcd-server._tcp.k8s.com,etcd1.k8s.com,2380,0,100
srv-host=_etcd-server._tcp.k8s.com,etcd2.k8s.com,2380,0,100
srv-host=_etcd-server._tcp.k8s.com,etcd3.k8s.com,2380,0,100
srv-host=_etcd-client._tcp.k8s.com,etcd1.k8s.com,2379,0,100
srv-host=_etcd-client._tcp.k8s.com,etcd2.k8s.com,2379,0,100
srv-host=_etcd-client._tcp.k8s.com,etcd3.k8s.com,2379,0,100
修改之后重启服务 systemctl restart dnsmasq
验证SRV解析是否正常
查询SRV记录
[root@node01 ~]# dig @192.168.1.122 +noall +answer SRV _etcd-server._tcp.k8s.com
_etcd-server._tcp.k8s.com. 3600 IN SRV 0 100 2380 etcd2.k8s.com.
_etcd-server._tcp.k8s.com. 3600 IN SRV 0 100 2380 etcd1.k8s.com.
_etcd-server._tcp.k8s.com. 3600 IN SRV 0 100 2380 etcd3.k8s.com.
查询域名解析结果
[root@node01 ~]# dig @192.168.1.122 +noall +answer etcd1.k8s.com etcd2.k8s.com etcd3.k8s.com
etcd1.k8s.com. 86400 IN A 192.168.1.91
etcd2.k8s.com. 86400 IN A 192.168.1.92
etcd3.k8s.com. 86400 IN A 192.168.1.93
如上述显示,则表示SRV解析正常
配置ETCD
node01 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.91:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.91:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd1.k8s.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd1.k8s.com:2379"
ETCD_DISCOVERY_SRV="k8s.com"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node02 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.92:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.92:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd2.k8s.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd2.k8s.com:2379"
ETCD_DISCOVERY_SRV="k8s.com"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node03 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="http://192.168.1.93:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.93:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://etcd3.k8s.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd3.k8s.com:2379"
ETCD_DISCOVERY_SRV="k8s.com"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
启动并测试
启动
[root@node01 etcd]# systemctl start etcd
[root@node01 etcd]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-11-07 11:25:29 CST; 4s ago
Main PID: 14203 (etcd)
Tasks: 8
Memory: 16.9M
CGroup: /system.slice/etcd.service
└─14203 /usr/bin/etcd --name=etcd1 --data-dir=/data/k8s/etcd/data --listen-client-urls=http://192.168.1.91:2379
Nov 07 11:25:29 node01.k8s.com etcd[14203]: d79e9ae86b2a1de1 [quorum:2] has received 2 MsgVoteResp Votes and 0 Vote rejections
Nov 07 11:25:29 node01.k8s.com etcd[14203]: d79e9ae86b2a1de1 became leader at term 2
Nov 07 11:25:29 node01.k8s.com etcd[14203]: raft.node: d79e9ae86b2a1de1 elected leader d79e9ae86b2a1de1 at term 2
Nov 07 11:25:29 node01.k8s.com etcd[14203]: published {Name:etcd1 ClientURLs:[http://etcd1.k8s.com:2379 http://etcd1.k8s.com:4001]} to cluster 42cecf80e3791d6c
Nov 07 11:25:29 node01.k8s.com etcd[14203]: ready to serve client requests
Nov 07 11:25:29 node01.k8s.com etcd[14203]: serving insecure client requests on 192.168.1.91:2379, this is strongly discouraged!
Nov 07 11:25:29 node01.k8s.com systemd[1]: Started Etcd Server.
Nov 07 11:25:29 node01.k8s.com etcd[14203]: setting up the initial cluster version to 3.3
Nov 07 11:25:29 node01.k8s.com etcd[14203]: set the initial cluster version to 3.3
Nov 07 11:25:29 node01.k8s.com etcd[14203]: enabled capabilities for version 3.3
查看 /var/log/messages 日志,表现如下:
Nov 7 11:25:27 node01 etcd: got bootstrap from DNS for etcd-server at 0=http://etcd3.k8s.com:2380
Nov 7 11:25:27 node01 etcd: got bootstrap from DNS for etcd-server at 1=http://etcd2.k8s.com:2380
Nov 7 11:25:27 node01 etcd: got bootstrap from DNS for etcd-server at etcd1=http://etcd1.k8s.com:2380
Nov 7 11:25:27 node01 etcd: resolving etcd1.k8s.com:2380 to 192.168.1.91:2380
Nov 7 11:25:27 node01 etcd: resolving etcd1.k8s.com:2380 to 192.168.1.91:2380
Nov 7 11:25:28 node01 etcd: name = etcd1
Nov 7 11:25:28 node01 etcd: data dir = /data/k8s/etcd/data
Nov 7 11:25:28 node01 etcd: member dir = /data/k8s/etcd/data/member
Nov 7 11:25:28 node01 etcd: dedicated WAL dir = /data/k8s/etcd/wal
Nov 7 11:25:28 node01 etcd: heartbeat = 100ms
Nov 7 11:25:28 node01 etcd: election = 1000ms
Nov 7 11:25:28 node01 etcd: snapshot count = 100000
Nov 7 11:25:28 node01 etcd: advertise client URLs = http://etcd1.k8s.com:2379,http://etcd1.k8s.com:4001
Nov 7 11:25:28 node01 etcd: initial advertise peer URLs = http://etcd1.k8s.com:2380
Nov 7 11:25:28 node01 etcd: initial cluster = 0=http://etcd3.k8s.com:2380,1=http://etcd2.k8s.com:2380,etcd1=http://etcd1.k8s.com:2380
测试:
[root@node01 etcd]# etcdctl --endpoints=http://192.168.1.91:2379 cluster-health
member 184beca37ca32d75 is healthy: got healthy result from http://etcd2.k8s.com:2379
member d79e9ae86b2a1de1 is healthy: got healthy result from http://etcd1.k8s.com:2379
member f7662e609b7e4013 is healthy: got healthy result from http://etcd3.k8s.com:2379
cluster is healthy
ETCD TLS动态集群基于DNS的SRV解析自动发现
需要局域网内部有DNS服务器
添加SRV解析
目前常用的内部 DNS 服务有两种:bind 和 dnsmasq。下面都会列出具体的配置,但只需要配置其中之一即可。
方法一: 使用 bind 配置SRV解析
如果内部没有 bind 服务,可以参考部署文档: https://www.cnblogs.com/winstom/p/11806962.html
使用的域名为 k8s.com,在 bind 的 zone 文件中添加如下解析:
etcd1 IN A 192.168.1.91
etcd2 IN A 192.168.1.92
etcd3 IN A 192.168.1.93
; SRV 记录格式: priority weight port target
_etcd-server-ssl._tcp.k8s.com. 1H IN SRV 0 100 2380 etcd1
_etcd-server-ssl._tcp.k8s.com. 1H IN SRV 0 100 2380 etcd2
_etcd-server-ssl._tcp.k8s.com. 1H IN SRV 0 100 2380 etcd3
_etcd-client-ssl._tcp.k8s.com. 1H IN SRV 0 100 2379 etcd1
_etcd-client-ssl._tcp.k8s.com. 1H IN SRV 0 100 2379 etcd2
_etcd-client-ssl._tcp.k8s.com. 1H IN SRV 0 100 2379 etcd3
修改之后重新加载配置文件:
[root@jenkins named]# named-checkzone k8s.com k8s.com.zone
zone k8s.com/IN: loaded serial 0
OK
[root@jenkins named]# rndc reload
server reload successful
方法二: 使用 dnsmasq 配置SRV解析
如果内部没有 dnsmasq 服务,可以参考部署文档: https://www.cnblogs.com/winstom/p/11809066.html
使用的域名为 k8s.com,具体修改如下:
在 /etc/dnsmasq_hosts 中新增下面内容:
192.168.1.91 etcd1 etcd1.k8s.com
192.168.1.92 etcd2 etcd2.k8s.com
192.168.1.93 etcd3 etcd3.k8s.com
在 /etc/dnsmasq.conf 文件中增加下面SRV解析内容:
srv-host=_etcd-server-ssl._tcp.k8s.com,etcd1.k8s.com,2380,0,100
srv-host=_etcd-server-ssl._tcp.k8s.com,etcd2.k8s.com,2380,0,100
srv-host=_etcd-server-ssl._tcp.k8s.com,etcd3.k8s.com,2380,0,100
srv-host=_etcd-client-ssl._tcp.k8s.com,etcd1.k8s.com,2379,0,100
srv-host=_etcd-client-ssl._tcp.k8s.com,etcd2.k8s.com,2379,0,100
srv-host=_etcd-client-ssl._tcp.k8s.com,etcd3.k8s.com,2379,0,100
修改之后重启服务 systemctl restart dnsmasq
验证SRV解析是否正常
查询SRV记录
[root@node01 etcd]# dig @192.168.1.122 +noall +answer SRV _etcd-server-ssl._tcp.k8s.com
_etcd-server-ssl._tcp.k8s.com. 3600 IN SRV 0 100 2380 etcd3.k8s.com.
_etcd-server-ssl._tcp.k8s.com. 3600 IN SRV 0 100 2380 etcd2.k8s.com.
_etcd-server-ssl._tcp.k8s.com. 3600 IN SRV 0 100 2380 etcd1.k8s.com.
查询域名解析结果
[root@node01 ~]# dig @192.168.1.122 +noall +answer etcd1.k8s.com etcd2.k8s.com etcd3.k8s.com
etcd1.k8s.com. 86400 IN A 192.168.1.91
etcd2.k8s.com. 86400 IN A 192.168.1.92
etcd3.k8s.com. 86400 IN A 192.168.1.93
ETCD 配置
node01 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.1.91:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.91:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd1.k8s.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd1.k8s.com:2379,https://etcd1.k8s.com:4001"
ETCD_DISCOVERY_SRV="k8s.com"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_PEER_AUTO_TLS="true"
node02 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.1.92:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.92:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd2.k8s.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd2.k8s.com:2379"
ETCD_DISCOVERY_SRV="k8s.com"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_PEER_AUTO_TLS="true"
node03 配置文件
ETCD_DATA_DIR="/data/k8s/etcd/data"
ETCD_WAL_DIR="/data/k8s/etcd/wal"
ETCD_LISTEN_PEER_URLS="https://192.168.1.93:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.93:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
ETCD_SNAPSHOT_COUNT="100000"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd3.k8s.com:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd3.k8s.com:2379"
ETCD_DISCOVERY_SRV="k8s.com"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/cert/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/cert/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/cert/ca.pem"
ETCD_PEER_AUTO_TLS="true"
启动测试
启动
[root@node03 etcd]# systemctl restart etcd
[root@node03 etcd]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-11-07 12:38:37 CST; 4s ago
Main PID: 13460 (etcd)
Tasks: 8
Memory: 16.6M
CGroup: /system.slice/etcd.service
└─13460 /usr/bin/etcd --name=etcd3 --data-dir=/data/k8s/etcd/data --listen-client-urls=https://192.168.1.93:2379
Nov 07 12:38:36 node03.k8s.com etcd[13460]: established a TCP streaming connection with peer 40a8f19a5db99534 (stream Message writer)
Nov 07 12:38:36 node03.k8s.com etcd[13460]: established a TCP streaming connection with peer 40a8f19a5db99534 (stream MsgApp v2 writer)
Nov 07 12:38:37 node03.k8s.com etcd[13460]: 9888555207dbf0e0 [term: 92] received a MsgVote message with higher term from a0d541999e9eb3b3 [term: 98]
Nov 07 12:38:37 node03.k8s.com etcd[13460]: 9888555207dbf0e0 became follower at term 98
Nov 07 12:38:37 node03.k8s.com etcd[13460]: 9888555207dbf0e0 [logterm: 92, index: 9, Vote: 0] cast MsgVote for a0d541999e9eb3b3 [logterm: 92, index: 9] at term 98
Nov 07 12:38:37 node03.k8s.com etcd[13460]: raft.node: 9888555207dbf0e0 elected leader a0d541999e9eb3b3 at term 98
Nov 07 12:38:37 node03.k8s.com etcd[13460]: published {Name:etcd3 ClientURLs:[https://etcd3.k8s.com:2379]} to cluster f445a02ce3dc6a02
Nov 07 12:38:37 node03.k8s.com etcd[13460]: ready to serve client requests
Nov 07 12:38:37 node03.k8s.com etcd[13460]: serving client requests on 192.168.1.93:2379
Nov 07 12:38:37 node03.k8s.com systemd[1]: Started Etcd Server.
日志体现
Nov 7 12:38:36 node01 etcd: added member 40a8f19a5db99534 [https://etcd2.k8s.com:2380] to cluster f445a02ce3dc6a02
Nov 7 12:38:36 node01 etcd: starting peer 40a8f19a5db99534...
Nov 7 12:38:36 node01 etcd: started HTTP pipelining with peer 40a8f19a5db99534
Nov 7 12:38:36 node01 etcd: started streaming with peer 40a8f19a5db99534 (writer)
Nov 7 12:38:36 node01 etcd: started peer 40a8f19a5db99534
Nov 7 12:38:36 node01 etcd: added peer 40a8f19a5db99534
Nov 7 12:38:36 node01 etcd: added member 9888555207dbf0e0 [https://etcd3.k8s.com:2380] to cluster f445a02ce3dc6a02
Nov 7 12:38:36 node01 etcd: starting peer 9888555207dbf0e0...
Nov 7 12:38:36 node01 etcd: started HTTP pipelining with peer 9888555207dbf0e0
Nov 7 12:38:36 node01 etcd: started peer 9888555207dbf0e0
Nov 7 12:38:36 node01 etcd: added peer 9888555207dbf0e0
Nov 7 12:38:36 node01 etcd: added member a0d541999e9eb3b3 [https://etcd1.k8s.com:2380] to cluster f445a02ce3dc6a02
测试集群状态:
ETCDCTL_API=3 etcdctl --endpoints=https://etcd1.k8s.com:2379,https://etcd2.k8s.com:2379,https://etcd3.k8s.com:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint health
# 输出
https://etcd1.k8s.com:2379 is healthy: successfully committed proposal: took = 4.269468ms
https://etcd3.k8s.com:2379 is healthy: successfully committed proposal: took = 1.58797ms
https://etcd2.k8s.com:2379 is healthy: successfully committed proposal: took = 1.622151ms
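还可以查看各端点的详细状态(leader、版本、DB 大小等),-w table 会以表格形式输出(示例):
ETCDCTL_API=3 etcdctl --endpoints=https://etcd1.k8s.com:2379,https://etcd2.k8s.com:2379,https://etcd3.k8s.com:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem \
-w table endpoint status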
CentOS 6 升级到 CentOS 7
注意
非必要情况,请使用重新安装系统的方式升级,原因如下:
- 并非所有的系统都能顺利从 6 升级到 7,安装的软件越少,升级成功的可能性越大;
- 只支持 6.5 及以上系统升级到不高于 7.2 系统;
- 升级的耗时完全不比重新安装少,绝大多数情况下会耗费更长的时间和更多精力;
- 升级完成后处理各种依赖是一个非常头大的问题。
本人在同一天升级了两台电脑,一个成功一个失败。成功的那台电脑额外花了一天时间处理各种依赖和问题,失败的电脑半小时装好系统和必备软件,用得爽歪歪。所以如非必要,建议采用备份数据后直接重装系统的方式。
操作
通过软件方式从 6 升级到 7,请参考下面的步骤:
- 升级当前系统到最新版本:yum update -y;
- 安装旧版 openscap:yum remove -y openscap && yum install -y http://dev.centos.org/centos/6/upg/x86_64/Packages/openscap-1.0.8-1.0.1.el6.centos.x86_64.rpm;
- 添加 upgradetool 源:
cat <<'EOF' >/etc/yum.repos.d/upgradetool.repo
[upgrade]
name=CentOS-$releasever - Upgrade Tool
baseurl=http://dev.centos.org/centos/6/upg/x86_64/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
EOF
- 安装升级工具:yum install -y redhat-upgrade-tool preupgrade-assistant preupgrade-assistant-contents;
- 执行升级可行性分析:preupg -l,该命令会耗费几分钟到几十分钟时间。如果出现 preupg: error: [Errno 2] No such file or directory: '/root/preupgrade/result.html' 的错误,请参考前面安装旧版 openscap 的步骤;
- 使用清华大学的 centos-vault 源安装 7.2 版本:centos-upgrade-tool-cli --network 7 --instrepo=https://mirrors.tuna.tsinghua.edu.cn/centos-vault/7.2.1511/os/x86_64/。注意:7.2 是支持升级的最高版本,升级其他版本将会出现 Downloading failed: invalid data in .treeinfo: No section: 'checksums' 的错误提示;
- 如果升级成功,用 reboot 命令重启系统;如果提示 The requested URL returned error: 404 Not Found 等错误,基本上说明当前系统不支持直接升级,果断采用重装系统的正道吧,少年!
- 系统重启后,有可能因为依赖库缺失导致 ssh 无法启动、grep 不能正常使用等问题。基本功底够好的手动排查,然后一个个解决;搞不懂错误原因或者觉得处理麻烦的,备份数据后重装系统吧!
- 使用 rpm -qa | grep el6 查看系统上残留的软件包。如果能手动清理掉,让系统 update 无障碍,耐心一个个处理掉。如果觉得依赖太麻烦或者搞不定,备份数据后重装系统吧!
参考
- https://blog.51cto.com/moerjinrong/2340656
CentOS 6 和 CentOS 7 防火墙的关闭
CentOS 6.5 查看防火墙的状态:
[linuxidc@localhost ~]$ service iptables status
显示结果:
Redirecting to /bin/systemctl status iptables.service
● iptables.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead) -- 表示防火墙已经关闭
CentOS 6.5 关闭防火墙:
[root@localhost ~]# service iptables stop -- 临时关闭防火墙
[root@localhost ~]# chkconfig iptables off -- 永久关闭防火墙
CentOS 7.2 关闭防火墙
CentOS 7.0 默认使用 firewalld 作为防火墙,这里给出改用 iptables 防火墙的步骤。
查看默认防火墙状态(关闭后显示 not running,开启后显示 running):
[root@localhost ~]# firewall-cmd --state
not running
检查防火墙的状态:
从 CentOS 7 开始使用 systemctl 来管理服务和程序,功能涵盖了 service 和 chkconfig。
[root@localhost ~]# systemctl list-unit-files | grep firewalld.service -- 防火墙处于关闭状态
firewalld.service disabled
或者:
[root@localhost ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
关闭防火墙:
[root@localhost ~]# systemctl stop firewalld.service    # 停止 firewalld
[root@localhost ~]# systemctl disable firewalld.service # 禁止 firewalld 开机启动
常用的 systemctl 服务管理命令:
启动一个服务:systemctl start firewalld.service
关闭一个服务:systemctl stop firewalld.service
重启一个服务:systemctl restart firewalld.service
显示一个服务的状态:systemctl status firewalld.service
在开机时启用一个服务:systemctl enable firewalld.service
在开机时禁用一个服务:systemctl disable firewalld.service
查看服务是否开机启动:systemctl is-enabled firewalld.service; echo $?
查看已启动的服务列表:systemctl list-unit-files | grep enabled
CentOS 7 firewall-cmd 命令:
查看已经开放的端口:
firewall-cmd --list-ports
开启端口:
firewall-cmd --zone=public --add-port=80/tcp --permanent
命令含义:
--zone #作用域
--add-port=80/tcp #添加端口,格式为:端口/通讯协议
--permanent #永久生效,没有此参数重启后失效
重启防火墙:
firewall-cmd --reload
CentOS 7 以下版本 iptables 命令:
如要开放 80、22、8080 端口,输入以下命令即可:
/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 22 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
然后保存:
/etc/rc.d/init.d/iptables save
查看打开的端口:
/etc/init.d/iptables status
关闭防火墙:
1)永久性生效,重启后不会复原
开启:chkconfig iptables on
关闭:chkconfig iptables off
2)即时生效,重启后复原
开启:service iptables start
关闭:service iptables stop
查看防火墙状态:service iptables status
下面说下 CentOS 7 和 CentOS 6 默认防火墙的区别。
CentOS 7 默认使用 firewalld 作为防火墙,要改用 iptables 必须重新设置一下:
1、直接关闭防火墙
systemctl stop firewalld.service #停止 firewall
systemctl disable firewalld.service #禁止 firewall 开机启动
2、设置 iptables service
yum -y install iptables-services
如果要修改防火墙配置,如增加防火墙端口 3306
vi /etc/sysconfig/iptables
增加规则
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT
保存退出后
systemctl restart iptables.service #重启防火墙使配置生效
systemctl enable iptables.service #设置防火墙开机启动
最后重启系统使设置生效即可。
systemctl start iptables.service #打开防火墙
systemctl stop iptables.service #关闭防火墙
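修改规则后,可以顺手验证端口是否真的放行(示例,以 3306 为例,视当前使用的是 firewalld 还是 iptables 选择对应命令):
firewall-cmd --query-port=3306/tcp   # firewalld 下查询,已放行返回 yes
iptables -L INPUT -n | grep 3306     # iptables 下查看对应规则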
解决主机不能访问虚拟机 CentOS 中的站点
前阵子在虚拟机上装好了 CentOS 6.2,并配好了 apache+php+mysql,但是本机就是无法访问,一直就没去折腾。
具体情况如下:
- 本机能 ping 通虚拟机
- 虚拟机也能 ping 通本机
- 虚拟机能访问自己的 web
- 本机无法访问虚拟机的 web
后来发现是防火墙将 80 端口屏蔽了的缘故。
检查是不是服务器的 80 端口被防火墙堵了,可以通过命令 telnet server_ip 80 来测试。
解决方法如下:
/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
然后保存:
/etc/rc.d/init.d/iptables save
重启防火墙:
/etc/init.d/iptables restart
CentOS 防火墙的关闭,关闭其服务即可:
查看 CentOS 防火墙信息:/etc/init.d/iptables status
关闭 CentOS 防火墙服务:/etc/init.d/iptables stop
CentOS 6, CentOS 7 安装mysql数据库
#!/bin/sh
# CentOs 6
#使用sohu镜像,速度快
wget http://mirrors.sohu.com/mysql/MySQL-5.6/MySQL-server-5.6.35-1.el6.x86_64.rpm
wget http://mirrors.sohu.com/mysql/MySQL-5.6/MySQL-client-5.6.35-1.el6.x86_64.rpm
wget http://mirrors.sohu.com/mysql/MySQL-5.6/MySQL-shared-5.6.35-1.el6.x86_64.rpm
#删除默认安装包
rpm -qa| grep mysql-libs | xargs rpm -e --nodeps
#安装依赖包
yum -y install numactl
rpm -ivh MySQL-shared-5.6.35-1.el6.x86_64.rpm
rpm -ivh MySQL-client-5.6.35-1.el6.x86_64.rpm
rpm -ivh MySQL-server-5.6.35-1.el6.x86_64.rpm
#获取默认root密码
sqlpasswd=`cat /root/.mysql_secret | awk -F'): ' '{print $2}'`
echo "MySQL root passwd: $sqlpasswd"
#设置数据库服务端编码为utf8
echo character_set_server=utf8 >> /usr/my.cnf
#重启数据库
service mysql restart
#!/bin/sh
# CentOS 7
wget http://mirrors.sohu.com/mysql/MySQL-5.6/MySQL-server-5.6.35-1.el7.x86_64.rpm
wget http://mirrors.sohu.com/mysql/MySQL-5.6/MySQL-client-5.6.35-1.el7.x86_64.rpm
wget http://mirrors.sohu.com/mysql/MySQL-5.6/MySQL-shared-5.6.35-1.el7.x86_64.rpm
rpm -ivh MySQL-shared-5.6.35-1.el7.x86_64.rpm
rpm -ivh MySQL-client-5.6.35-1.el7.x86_64.rpm
rpm -ivh MySQL-server-5.6.35-1.el7.x86_64.rpm
#获取默认root密码
sqlpasswd=`cat /root/.mysql_secret | awk -F'): ' '{print $2}'`
echo "MySQL root passwd: $sqlpasswd"
#设置数据库服务端编码为utf8
echo character_set_server=utf8 >> /usr/my.cnf
#重启数据库使配置生效
service mysql restart
#首次登录后需要修改root初始密码,在mysql客户端中执行:
SET PASSWORD FOR 'root'@'localhost'=PASSWORD('newpass');
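MySQL 5.6 首次登录时会要求先修改随机初始密码,也可以在脚本里用 --connect-expired-password 非交互地完成(示例,newpass 为假设的新密码):
mysql -uroot -p"$sqlpasswd" --connect-expired-password -e "SET PASSWORD = PASSWORD('newpass');"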
CentOS 6.6 系统升级到 CentOS 6.7
1、利用 CentOS 6.7 ISO 镜像挂载为本地镜像
创建一个挂载目录:
mkdir /mnt/data
2、挂载镜像(远程镜像)
mount -t nfs 172.16.2.100:/iso /mnt/data
3、yum 源配置文件
vim /etc/yum.repos.d/CentOS-Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=file:///mnt/data
gpgcheck=1
enabled=1
gpgkey=file:///mnt/data/RPM-GPG-KEY-CentOS-6
4、清除 yum 缓存进行更新
yum clean all
yum makecache
5、系统更新:
yum -y update
(如有报错看依赖进行安装,或者依赖版本问题需要重新安装)
rpm -e kernel-2.6.32-504.el6.x86_64
rpm -e kernel-devel-2.6.32-504.el6.x86_64
重启:
reboot
可能会遇到的依赖提示:
rpm -e libreport
yum remove libreport
yum -y install libreport
yum remove libreport-filesystem
yum -y install libreport
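升级并重启完成后,可以确认系统版本与内核是否符合预期(示例):
cat /etc/redhat-release
uname -r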
关于CentOS 7 ETCD集群配置大全和centos7 /etc/sysconfig/network的介绍已经告一段落,感谢您的耐心阅读,如果想了解更多关于CentOS 6 升级到 CentOS 7、CentOS 6 和 CentOS 7 防火墙的关闭、CentOS 6, CentOS 7 安装mysql数据库、CentOS 6.6 系统升级到 CentOS 6.7的相关信息,请在本站寻找。