This article walks through the details of using Docker-Compose to start multiple instances of a container with different configurations, with a focus on how docker compose starts containers. Through case studies and worked examples we aim to give you a fuller picture of the topic, and along the way we also cover examples of starting a docker container with multiple network interfaces, docker (6): the three musketeers of containers (docker machine, docker-compose, docker Swarm), docker-compose (a single-host container orchestration tool), and a cautionary tale of building and starting a redis container from a Dockerfile with docker-compose.
Contents of this article:
- Using Docker-Compose to start multiple instances of a container with different configurations (docker compose starting containers)
- Examples of starting a docker container with multiple network interfaces
- docker (6): the three musketeers of containers: docker machine, docker-compose, docker Swarm
- docker-compose (a single-host container orchestration tool)
- docker-compose: a cautionary tale of building and starting a redis container from a Dockerfile
Using Docker-Compose to start multiple instances of a container with different configurations (docker compose starting containers)
I understand you can use the scale command with docker-compose to spin up multiple containers. However, they will all share the same configuration.
Is it possible to start instances of the same container with different configurations (separate .yml files) on the same host?
When I run the following commands:
docker-compose -f dev.yml up -d
docker-compose -f qa.yml up -d
only the qa.yml containers are left running, which is not what I want.
- Edit -
Here is what happens when I try to run the two commands.
$ docker-compose -f compose/dev.yml up -d
compose_mydocker_1 is up-to-date
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
905912df6e48 compose_mydocker "/sbin/my_init" 2 days ago Up 2 days 0.0.0.0:1234->80/tcp compose_mydocker_1
$ docker-compose -f compose/qa.yml up -d
Recreating compose_mydocker_1...
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fc912201224 compose_mydocker "/sbin/my_init" 5 seconds ago Up 5 seconds 0.0.0.0:1235->80/tcp compose_mydocker_1
My qa.yml and dev.yml look like this:
mydocker:
  build: ..
  ports:
    - "1234:80"  # for dev.yml
    #- "1235:80" for qa.yml
  environment:
    - ENVIRONMENT=dev  # and vice-versa for qa
  volumes:
    - ../assets/images:/var/www/assets
Answer 1
All you need to do is change the project name. By default, Compose uses a project name based on the current directory. In your case you want separate environments, so you need different project names.
You can pass docker-compose -p <project_name>, or set COMPOSE_PROJECT_NAME in the environment.
There has also been some discussion about a way to persist the project name: https://github.com/docker/compose/issues/745
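For example, a minimal sketch based on this answer (dev.yml and qa.yml are the files from the question; the project names dev and qa are arbitrary):
docker-compose -p dev -f dev.yml up -d
docker-compose -p qa -f qa.yml up -d
Or, equivalently, via the environment variable:
COMPOSE_PROJECT_NAME=dev docker-compose -f dev.yml up -d
COMPOSE_PROJECT_NAME=qa docker-compose -f qa.yml up -d
With distinct project names the second up no longer recreates the first environment's compose_mydocker_1; each project gets its own containers (dev_mydocker_1 and qa_mydocker_1).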
Examples of starting a docker container with multiple network interfaces
Adding a network interface to a container
1. Run a container with the default networking
# docker run --name tst_add_inf -it tst_img /bin/bash
This creates a container named tst_add_inf from the host's tst_img image; by default the container already has one network interface, eth0.
2. Get the container's PID
# docker inspect -f '{{.State.Pid}}' tst_add_inf
The PID obtained above is the PID of the container's init process (PID 1 inside the container) as seen in the host's namespace.
3. Add a network interface eth1 to the container
(1) Create a veth peer pair
# ip link add veth0 type veth peer name veth1
Once created, both devices can be seen with "ip link list".
(2) Attach one end of the veth pair to the bridge
# brctl addif docker0 veth0
# ip link set veth0 up
(3) Associate the other end of the veth pair with the container
# mkdir -p /var/run/netns
# ln -s /proc/$pid/ns/net /var/run/netns/$pid
# ip link set veth1 netns $pid
(4) Configure the container's newly added interface
Rename the new interface to eth1 and bring it up; assigning its IP address is sketched below.
# ip netns exec $pid ip link set dev veth1 name eth1
# ip netns exec $pid ip link set eth1 up
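Step (4) renames and raises the interface but never assigns the address the text mentions. A hedged sketch of that missing step (172.17.0.99/16 is an assumed free address on the default docker0 subnet; adjust it to your bridge's range):
# ip netns exec $pid ip addr add 172.17.0.99/16 dev eth1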
Once the container is running you can use "docker network connect", but that means the process is already up and may miss the new interface.
This question comes up when searching for docker with multiple network interfaces. Although it concerns a different version, I'll leave some information here:
With Docker 1.12 you can attach multiple network interfaces to a docker container, but you first need to create the container and then attach the second (and subsequent) NICs before starting it:
$docker create --network=network1 --name container_name containerimage:latest
$docker network connect network2 container_name
$docker start container_name
The networks need to be created first:
$docker network create --driver=bridge network1 --subnet=172.19.0.0/24
$docker network create --driver=bridge network2 --subnet=172.19.1.0/24
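Optionally, a quick sanity check that both bridges exist with the intended subnets (a sketch using docker's inspect templating):
$docker network ls
$docker network inspect network1 --format '{{(index .IPAM.Config 0).Subnet}}'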
You can also run a container directly on the Docker host's network interfaces with the --net=host flag of docker run:
$docker run --net=host containerimage:latest
Translated from: http://stackoverflow.com/questions/34110416/start-container-with-multiple-network-interfaces
That's all for this part; I hope it's helpful.
docker (6): the three musketeers of containers: docker machine, docker-compose, docker Swarm
Contents
- I. docker machine
- 1. Deploy to a target host that already has docker
- 1) server11 as the management node; create the machine
- 2) docker already installed on server12
- 3) Passwordless SSH
- 4) Create the machine
- 5) Start docker
- 6) Manage machines
- 2. Deploy to a target host without docker
- 1) Build a local docker repository
- 2) Configure the script
- 3) Passwordless SSH
- 4) Install by invoking the script
- II. docker-compose in practice
- III. docker Swarm
- 1. Create a swarm cluster
- 1) Initialize
- 2) Run the join command on the other docker nodes
- 3) Deploy swarm monitoring (load the dockersamples/visualizer image on each node in advance)
- 4) Transfer the leader role to server12
- 5) Add a private registry for faster pulls
- 6) Rolling updates
- 7) Portainer visualization
I. docker machine
- docker machine: installs a docker environment quickly across many platforms
1. Deploy to a target host that already has docker
1) server11 as the management node; create the machine
[root@zhenji file_recv]# scp docker-machine-* root@192.168.100.242:
[root@server11 ~]# ls
convoy docker-machine-prompt.bash nginx.tar
convoy.tar.gz docker-machine-wrapper.bash registry.tar
docker-machine-Linux-x86_64-0.16.1 lxcfs-2.0.5-3.el7.centos.x86_64.rpm
[root@server11 ~]# mv docker-machine-Linux-x86_64-0.16.1 /usr/local/bin/docker-machine
[root@server11 ~]# chmod +x /usr/local/bin/docker-machine
2) docker already installed on server12
[root@server12 ~]# rpm -q docker-ce
docker-ce-20.10.2-3.el7.x86_64
3) Passwordless SSH
[root@server11 ~]# ssh-keygen
[root@server11 ~]# ssh-copy-id 192.168.100.242
4) Create the machine
[root@server11 ~]# docker-machine create --driver generic --generic-ip-address=192.168.100.242 server12
[root@server11 ~]# docker-machine env server12
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.100.242:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/server12"
export DOCKER_MACHINE_NAME="server12"
# Run this command to configure your shell:
# eval $(docker-machine env server12)
[root@server12 ~]# netstat -antlp|grep 2376
tcp6 0 0 :::2376 :::* LISTEN 7098/dockerd
5) Start docker
[root@server12 ~]# systemctl start docker
[root@server12 ~]# cd /etc/systemd/system/docker.service.d/
[root@server12 docker.service.d]# ls
10-machine.conf
[root@server12 docker.service.d]# cat 10-machine.conf
[root@server11 ~]# docker-machine ls  # list the machines
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
server12 - generic Running tcp://192.168.100.242:2376 v20.10.2
# View server12 remotely from server11, connecting over remote port 2376
[root@server11 ~]# docker-machine config server12
[root@server11 ~]# docker `docker-machine config server12` ps
# every docker command now runs against server12
[root@server11 ~]# docker-machine env server12
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.100.242:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/server12"
export DOCKER_MACHINE_NAME="server12"
# Run this command to configure your shell:
# eval $(docker-machine env server12)
[root@server11 ~]# eval $(docker-machine env server12)
[root@server11 ~]# docker images  # every docker command now runs against server12
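To point the shell back at the local docker daemon afterwards, docker-machine can print the matching unset commands (a small sketch; -u is the --unset flag):
[root@server11 ~]# eval $(docker-machine env -u)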
6) Manage machines
First install the bash completion package so the shell prompt is friendlier.
[root@server11 ~]# yum install bash-completion.noarch -y
Put the three official scripts in place:
[root@server11 ~]# cd /etc/bash_completion.d/
[root@server11 bash_completion.d]# cp /root/*.bash .
[root@server11 bash_completion.d]# ls
docker-machine.bash rct rhsm-debug
docker-machine-prompt.bash redefine_filedir rhsm-icon
docker-machine-wrapper.bash rhn-migrate-classic-to-rhsm subscription-manager
iprutils rhsmcertd
[root@server11 bash_completion.d]# cd
[root@server11 ~]# vim .bashrc  # shell prompt
Add as the last line:
PS1='[\u@\h \W$(__docker_machine_ps1)]\$'
# log out and reconnect to server11 for this to take effect
[root@server11 ~]# exit
[root@zhenji Desktop]# ssh root@192.168.100.241
[root@server11 ~]#docker-machine env server12
# eval $(docker-machine env server12)
[root@server11 ~]#eval $(docker-machine env server12)
[root@server11 ~ [server12]]#docker ps  # the prompt now carries [server12]
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2. Deploy to a target host without docker
- Create a new machine, server13; docker is not installed on server13
1) Build a local docker repository
[root@server13 yum.repos.d]# vim docker-ce.repo
[root@server13 ~]# cat /etc/yum.repos.d/docker-ce.repo
[docker]
name=docker-ce
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
gpgcheck=0
[base]
name=CentOS-7 - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/7/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
#released updates
[updates]
name=CentOS-7 - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/7/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
#additional packages that may be useful
[extras]
name=CentOS-7 - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/7/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-7 - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/7/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
#contrib - packages by Centos Users
[contrib]
name=CentOS-7 - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/7/contrib/$basearch/
gpgcheck=1
enabled=0
[root@server13 yum.repos.d]# yum install docker-ce docker-ce-cli
When prompted, choose d (download only) so the packages are cached locally.
[root@server13 ~]# mkdir docker
[root@server13 ~]# cd /var/cache/yum/x86_64/7Server/extras/packages/
[root@server13 packages]# cp * /root/docker/
[root@server13 packages]# cd /var/cache/yum/x86_64/7Server/docker/packages/
[root@server13 packages]# ls
[root@server13 packages]# cp * /root/docker/
[root@server13 packages]# cd /root/docker/
[root@server13 docker]# ls
[root@server13 docker]# cd
[root@server13 docker]# yum install -y createrepo
[root@server13 docker]# createrepo .  # generate the repodata
[root@server13 docker]# ls
repodata
[root@server13 ~]# scp -r docker/* root@192.168.100.141:/var/www/html/docker-ce
[root@zhenji yum.repos.d]# cd /var/www/html/
[root@zhenji html]# ls
rhel7.6 westos zabbix zabbix.tar.gz
[root@zhenji html]# mkdir docker-ce
[root@zhenji html]# ls
docker-ce rhel7.6 westos zabbix zabbix.tar.gz
[root@zhenji html]# vim /etc/yum.repos.d/docker-ce.repo
[root@zhenji html]# cat /etc/yum.repos.d/docker-ce.repo
[docker]
name=docker-ce
baseurl=http://192.168.100.141/docker-ce
gpgcheck=0
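As a quick sanity check that the published repo is usable (a sketch; run on any host whose docker-ce.repo points at it):
[root@zhenji html]# yum clean all
[root@zhenji html]# yum repolist    # the docker repo should be listed
[root@zhenji html]# yum list docker-ce --showduplicates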
2) Configure the script
[root@server11 ~]#wget https://get.docker.com
[root@server11 ~]#mv index.html get-docker.sh
[root@server11 ~]#vim get-docker.sh
Make two changes.
Around line 412, change:
centos|fedora|rhel)
yum_repo="http://192.168.100.141/docker-ce.repo"
Around line 476, change:
# install the correct cli version first
#if [ -n "$cli_pkg_version" ]; then
# $sh_c "$pkg_manager install -y -q docker-ce-cli-$cli_pkg_version"
#fi
$sh_c "$pkg_manager install -y -q docker-ce"
3) Passwordless SSH
[root@server11 ~]# ssh-keygen
[root@server11 ~]# ssh-copy-id 192.168.100.243
4) Install by invoking the script
[root@server11 ~]#docker-machine create --driver generic --engine-install-url "http://192.168.100.241/get-docker.sh" --generic-ip-address 192.168.100.243 server13
II. docker-compose in practice
- Docker Compose is an orchestration service implemented in Python; it is a tool for defining and running complex applications on Docker and lets users deploy distributed applications
docker-compose up
[root@server11 ~]#mkdir compose
[root@server11 ~]#cd compose/
[root@server11 compose]#vim docker-compose.yml
[root@server11 compose]#cat docker-compose.yml
version: "3.9"
services:
web1:
image: Nginx
networks:
- mynet
volumes:
- ./web1:/usr/share/Nginx/html
web2:
image: Nginx
networks:
- mynet
volumes:
- ./web2:/usr/share/Nginx/html
haproxy:
image: haproxy
networks:
- mynet
ports:
- "80:80"
volumes:
- ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
[root@server11 compose]#mkdir web1
[root@server11 compose]#mkdir web2
[root@server11 compose]#echo web1 > web1/index.html
[root@server11 compose]#echo web2 > web2/index.html
[root@server11 compose]#mkdir haproxy
[root@server11 compose]#cd haproxy/
[root@server11 haproxy]#vim haproxy.cfg
global
    maxconn 65535
    stats socket /var/run/haproxy.stat mode 600 level admin
    log 127.0.0.1 local0
    uid 200
    gid 200
    #chroot /var/empty
    daemon

defaults
    mode http
    log global
    option httplog
    option dontlognull
    monitor-uri /monitoruri
    maxconn 8000
    timeout client 30s
    retries 2
    option redispatch
    timeout connect 5s
    timeout server 5s
    stats uri /status

# The public 'www' address in the DMZ
frontend public
    bind *:80 name clear
    #bind 192.168.1.10:443 ssl crt /etc/haproxy/haproxy.pem
    #use_backend static if { hdr_beg(host) -i img }
    #use_backend static if { path_beg /img /css }
    default_backend dynamic

# The static backend backend for 'Host: img', /img and /css.
backend dynamic
    balance roundrobin
    server app1 web1:80 check inter 1000
    server app2 web2:80 check inter 1000
[root@server11 haproxy]#docker-compose ps
Name Command State Ports
-------------------------------------------------------------------
compose_haproxy_1 docker-entrypoint.sh hapro ... Exit 0
compose_web1_1 /docker-entrypoint.sh ngin ... Exit 0
compose_web2_1 /docker-entrypoint.sh ngin ... Exit 0
[root@server11 haproxy]#docker-compose start
Starting web1 ... done
Starting web2 ... done
Starting haproxy ... done
[root@server11 haproxy]#docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------
compose_haproxy_1 docker-entrypoint.sh hapro ... Up 0.0.0.0:80->80/tcp
compose_web1_1 /docker-entrypoint.sh ngin ... Up 80/tcp
compose_web2_1 /docker-entrypoint.sh ngin ... Up 80/tcp
[root@server11 haproxy]#docker-compose up
Visit http://192.168.100.241/status in a browser.
# load balancing
[root@zhenji docker-ce]# curl 192.168.100.241
web2
[root@zhenji docker-ce]# curl 192.168.100.241
web1
[root@server11 haproxy]#docker-compose stop web1
[root@zhenji docker-ce]# curl 192.168.100.241
web2
[root@zhenji docker-ce]# curl 192.168.100.241
web2
[root@server11 haproxy]#docker-compose logs web1  # view the logs
[root@server11 haproxy]#docker-compose rm  # remove the stopped containers
[root@server11 haproxy]#docker-compose up -d  # bring everything back up
III. docker Swarm
- Swarm turns a group of Docker hosts into a single virtual Docker host, so containers can form networks that span hosts
- Docker Swarm is an orchestration tool that gives IT operations teams clustering and scheduling capabilities
server11, server12 and server13 all have docker installed.
1. Create a swarm cluster
1) Initialize
[root@server11 ~]# docker swarm --help
[root@server11 ~]# docker swarm init
Swarm initialized: current node (9gk2aq06rfub568jiirp98oyk) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3vsbpghdpd9tq2qey8q8kfzf8mayes9dkutmnd8p9o4ojyxcx2-csfbm8gwilfhc8jl72doxwh6p 192.168.100.241:2377
2) Run the join command on the other docker nodes
[root@server12 ~]# docker swarm join --token SWMTKN-1-3vsbpghdpd9tq2qey8q8kfzf8mayes9dkutmnd8p9o4ojyxcx2-csfbm8gwilfhc8jl72doxwh6p 192.168.100.241:2377
[root@server13 ~]# docker swarm join --token SWMTKN-1-3vsbpghdpd9tq2qey8q8kfzf8mayes9dkutmnd8p9o4ojyxcx2-csfbm8gwilfhc8jl72doxwh6p 192.168.100.241:2377
[root@server11 ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
9gk2aq06rfub568jiirp98oyk * server11 Ready Active Leader 20.10.2
kn68xxkqz82cm0yg3dogyci1u server12 Ready Active 20.10.2
kt4o1yq9aralmuvvxile682ko server13 Ready Active 20.10.2
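If the join token is ever lost, any manager can print it again (a small sketch):
[root@server11 ~]# docker swarm join-token worker
[root@server11 ~]# docker swarm join-token manager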
3) Deploy swarm monitoring (load the dockersamples/visualizer image on each node in advance)
[root@server11 docker]# scp nginx.tar root@192.168.100.243:
[root@server13 ~]# ls
docker-anzhaung nginx.tar
[root@server13 ~]# docker load -i nginx.tar
[root@server11 docker]# docker service create --name my_cluster --replicas 2 -p 80:80 nginx
# port 80 is now open on server11, server12 and server13
[root@server11 docker]# netstat -antlp|grep :80
tcp6 0 0 :::80 :::* LISTEN 3390/dockerd
[root@server11 docker]# docker service scale my_cluster=4  # scale out to 4 replicas
[root@server12 ~]# echo server12 > index.html
[root@server12 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12211165d4a1 nginx:latest "/docker-entrypoint.…" 4 minutes ago Up 4 minutes 80/tcp my_cluster.2.yuxu22o95iv3mr1wdnsak0l6b
[root@server12 ~]# docker cp index.html 12211165d4a1:/usr/share/nginx/html
[root@server13 ~]# echo server13 > index.html
[root@server13 ~]# docker cp index.html b569010bcd05:/usr/share/nginx/html
[root@server11 docker]# docker service ps my_cluster
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
y70sana59cq1 my_cluster.1 nginx:latest server11 Running Running 25 minutes ago
yuxu22o95iv3 my_cluster.2 nginx:latest server12 Running Running 24 minutes ago
u01zvq2ywc0f my_cluster.3 nginx:latest server13 Running Running 7 minutes ago
cqreyj0ub96k my_cluster.4 nginx:latest server13 Running Running 7 minutes ago
## load balancing
[root@zhenji Desktop]# curl 192.168.100.241
server13
[root@zhenji Desktop]# curl 192.168.100.241
server13
[root@zhenji Desktop]# curl 192.168.100.241
server12
[root@server11 docker]# docker service rm my_cluster
[root@server11 ~]# docker pull ikubernetes/myapp:v1
[root@server11 ~]# docker tag ikubernetes/myapp:v1 myapp:v1
[root@server11 ~]# docker rmi ikubernetes/myapp:v1
[root@server12 ~]# docker pull ikubernetes/myapp:v1
[root@server12 ~]# docker tag ikubernetes/myapp:v1 myapp:v1
[root@server12 ~]# docker rmi ikubernetes/myapp:v1
[root@server13 ~]# docker pull ikubernetes/myapp:v1
[root@server13 ~]# docker tag ikubernetes/myapp:v1 myapp:v1
[root@server13 ~]# docker rmi ikubernetes/myapp:v1
[root@server11 docker]# docker service create --name my_cluster --replicas 2 -p 80:80 myapp:v1
[root@server11 docker]# docker service ps my_cluster
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
i4gh2kfo1gbx my_cluster.1 myapp:v1 server12 Running Running 36 seconds ago
qnxqytwuqxwh my_cluster.2 myapp:v1 server11 Running Running 45 seconds ago
[root@zhenji images]# curl 192.168.100.241
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@zhenji images]# curl 192.168.100.241/hostname.html
61174f4c0f4d
[root@zhenji images]# curl 192.168.100.241/hostname.html
cfed57eabb08
[root@zhenji images]# curl 192.168.100.241/hostname.html
61174f4c0f4d
[root@zhenji images]# curl 192.168.100.241/hostname.html
cfed57eabb08
[root@server11 docker]# docker service scale my_cluster=6
# scale out to 6 replicas; load balancing still applies
[root@server11 docker]# docker pull dockersamples/visualizer
[root@server11 docker]# docker service create --name=viz --publish=8080:8080/tcp --constraint=node.role==manager --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock dockersamples/visualizer
# failover: stop docker on server13 and its tasks get redistributed across server11 and server12
[root@server13 ~]# systemctl stop docker  # bring it back later with systemctl start docker
4) Transfer the leader role to server12
%%% bring up another node, server14
[root@server11 ~]# cd harbor/harbor/
[root@server11 harbor]# docker-compose down
[root@server11 ~]# docker node promote server12  # promote server12
Node server12 promoted to a manager in the swarm.
[root@server11 ~]# docker node demote server11  # demote server11
Manager server11 demoted in the swarm.
[root@server12 ~]# docker node ls
[root@server11 ~]# docker swarm leave  # leave the cluster
[root@server12 ~]# docker node rm server11  # remove server11 from the cluster
[root@server14 ~]# docker swarm join --token SWMTKN-1-3vsbpghdpd9tq2qey8q8kfzf8mayes9dkutmnd8p9o4ojyxcx2-csfbm8gwilfhc8jl72doxwh6p 192.168.100.242:2377
[root@server11 harbor]# ./install.sh --with-chartmuseum
% Visit 192.168.100.241 in a browser for the harbor page; log in with admin / Harbor12345
5) Add a private registry for faster pulls
[root@server12 ~]# vim /etc/hosts
[root@server12 ~]# cd /etc/docker/
[root@server12 docker]# ls
ca.pem certs.d daemon.json key.json plugins server-key.pem server.pem
[root@server12 docker]# vim daemon.json
[root@server12 docker]# cat daemon.json
{
"registry-mirrors": ["https://reg.westos.org"]
}
[root@server12 docker]# systemctl reload docker.service
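To confirm the mirror was picked up after the reload, docker info lists it (a sketch):
[root@server12 docker]# docker info | grep -A1 'Registry Mirrors'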
[root@server12 docker]# scp daemon.json root@192.168.100.243:/etc/docker/
[root@server12 docker]# scp daemon.json root@192.168.100.244:/etc/docker/
[root@server13 ~]# systemctl reload docker.service
[root@server14 ~]# systemctl reload docker.service
[root@server12 docker]# scp -r certs.d/ root@192.168.100.243:/etc/docker/
[root@server12 docker]# scp -r certs.d/ root@192.168.100.244:/etc/docker/
[root@server13 ~]# vim /etc/hosts
192.168.100.241 server11 reg.westos.org
[root@server11 harbor]# docker tag myapp:v1 reg.westos.org/library/myapp:v1
[root@server11 harbor]# docker push reg.westos.org/library/myapp:v1
[root@server13 ~]# docker pull myapp:v1
[root@server13 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myapp v1 d4a5e0eaa84f 2 years ago 15.5MB
[root@server12 docker]# docker service rm my_cluster
[root@server12 docker]# docker rmi myapp:v1
[root@server12 docker]# docker service create --name my_web --replicas 3 -p 80:80 myapp:v1
6) Rolling updates
# scale out to 10
[root@server12 docker]# docker service scale my_web=10
[root@server11 docker]# docker pull ikubernetes/myapp:v2
[root@server11 docker]# docker tag ikubernetes/myapp:v2 reg.westos.org/library/myapp:v2
[root@server11 docker]# docker push reg.westos.org/library/myapp:v2
[root@server12 docker]# docker service update --image myapp:v2 --update-parallelism 2 --update-delay 5s my_web  # --update-parallelism: tasks updated per batch; --update-delay: pause between batches
[root@server11 docker]# curl http://192.168.100.242/hostname.html
c91945ef3008
[root@server11 docker]# curl http://192.168.100.242/hostname.html
2c149c2c4aa2
[root@server11 docker]# curl http://192.168.100.242/hostname.html
7cd9ce9e540a
[root@server11 docker]# curl http://192.168.100.242/hostname.html
19a610c4a0aa
[root@server11 docker]# curl http://192.168.100.242/hostname.html
ed5c5611b0ad
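If the new version misbehaves, the update can be reverted to the previous image (a sketch; both forms exist on the docker-ce 20.10 used here):
[root@server12 docker]# docker service rollback my_web
or, equivalently:
[root@server12 docker]# docker service update --rollback my_web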
[root@server12 compose]# vim docker-compose.yml
[root@server11 docker]# docker tag dockersamples/visualizer:latest reg.westos.org/library/visualizer:latest
[root@server11 docker]# docker push reg.westos.org/library/visualizer:latest
[root@server12 compose]# docker service rm viz
viz
[root@server12 compose]# docker service rm my_web
[root@server12 compose]# docker stack deploy -c docker-compose.yml my_cluster
# change replicas: 6 in docker-compose.yml and deploy again to scale out to 6
[root@server12 compose]# docker stack deploy -c docker-compose.yml my_cluster
7) Portainer visualization
[root@server11 ~]# mkdir portainer
[root@server11 ~]# cd portainer/
[root@server11 portainer]# pwd
/root/portainer
[root@server11 portainer]# ls
portainer-agent-stack.yml portainer-agent.tar portainer.tar
[root@server11 portainer]# docker load -i portainer-agent.tar
[root@server11 portainer]# docker load -i portainer.tar
[root@server11 portainer]# docker tag portainer/agent:latest reg.westos.org/library/agent:latest
[root@server11 portainer]# docker push reg.westos.org/library/agent:latest
[root@server12 compose]# docker service ls
[root@server12 ~]# mv portainer-agent-stack.yml compose/
[root@server12 compose]# vim portainer-agent-stack.yml
[root@server12 compose]# docker stack rm my_cluster
[root@server12 compose]# docker stack deploy -c portainer-agent-stack.yml portainer
[root@server12 compose]# docker stack ps portainer
Visit 192.168.100.242:9000 in a browser.
docker-compose (a single-host container orchestration tool)
Similar in spirit to an ansible playbook, written in yml format.
To use this orchestration tool it must be installed first:
yum install -y docker-compose
cd wordpress/
vi docker-compose.yml
###############
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - /data/db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - /data/web_data:/var/www/html
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
###############
Run the following in the directory containing docker-compose.yml:
docker-compose up -d
Visit the site in a browser to verify.
Install zabbix using docker-compose orchestration
vim docker-compose.yml
################
version: '3'
services:
  mysql-server:
    image: mysql:5.7
    restart: always
    command: --character-set-server=utf8 --collation-server=utf8_bin
    environment:
      MYSQL_ROOT_PASSWORD: root_pwd
      MYSQL_DATABASE: zabbix
      MYSQL_USER: zabbix
      MYSQL_PASSWORD: zabbix_pwd
  zabbix-java-gateway:
    image: zabbix/zabbix-java-gateway:latest
    restart: always
  zabbix-server:
    depends_on:
      - mysql-server
      - zabbix-java-gateway
    image: zabbix/zabbix-server-mysql:latest
    ports:
      - "10051:10051"
    restart: always
    environment:
      DB_SERVER_HOST: mysql-server
      MYSQL_DATABASE: zabbix
      MYSQL_USER: zabbix
      MYSQL_PASSWORD: zabbix_pwd
      MYSQL_ROOT_PASSWORD: root_pwd
      ZBX_JAVAGATEWAY: zabbix-java-gateway
  zabbix-web:
    depends_on:
      - mysql-server
      - zabbix-server
    image: zabbix/zabbix-web-nginx-mysql:latest
    ports:
      - "80:80"
    restart: always
    environment:
      DB_SERVER_HOST: mysql-server
      MYSQL_DATABASE: zabbix
      MYSQL_USER: zabbix
      MYSQL_PASSWORD: zabbix_pwd
      MYSQL_ROOT_PASSWORD: root_pwd
################
Run the following in the directory containing docker-compose.yml:
docker-compose up -d
Visit the site in a browser to verify.
docker-compose: a cautionary tale of building and starting a redis container from a Dockerfile
Preface
To get to a one-click deployment, I used a Dockerfile plus docker-compose. Along the way I could not get my redis-server to start, no matter what I tried.
Directory structure
~/Workspace/docker/images/redis tree
.
├── Dockerfile
├── conf
│ └── redis.conf
└── docker-compose.yml
File contents
Dockerfile
FROM redis:latest
WORKDIR /data/
# The default apt sources are slow (blocked where I live), so switch to a domestic mirror (Aliyun)
RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
# Can't apt-get install curl right away: the container's apt package lists are empty by default, so update them first
RUN apt-get update
# The base image is Debian-based, so apt-get is available out of the box
RUN apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /usr/local/etc/redis/ \
&& curl http://download.redis.io/redis-stable/redis.conf > /usr/local/etc/redis/redis.conf
CMD [ "redis-server","/usr/local/etc/redis/redis.conf"]
docker-compose.yml
version: "2.2"
services:
redis:
# 使用当前目录下的Dockerfile构建镜像
build: .
image: my_redis
container_name: redis
ports:
- "6379:6379"
volumes:
- ./data:/data
# 此处就是引发血案的地方
# - ./conf:/usr/local/etc/redis
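With these files in place, a sketch of the build-and-run cycle (run from the directory containing docker-compose.yml):
docker-compose up -d --build
docker-compose logs -f redis    # follow the redis service logs to see why it dies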
Problem analysis:
- During the build, the Dockerfile fetched the redis.conf configuration with curl.
- When docker-compose starts the container, the volumes entry mounts the local directory over /usr/local/etc/redis inside the redis container, so everything under /usr/local/etc/redis is replaced by the local contents. If the local ./conf directory is empty, /usr/local/etc/redis ends up empty as well.
Solutions (see the sketch below):
- keep a redis.conf file in the local ./conf folder, or
- as in the example above, don't expose redis.conf through a mount at all.
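The shadowing is easy to reproduce in isolation, as this sketch shows (assumes an empty local ./conf directory and the stock redis:latest image):
mkdir -p conf
docker run --rm -v "$PWD/conf:/usr/local/etc/redis" redis:latest ls -la /usr/local/etc/redis
The listing comes back empty: the bind mount hides whatever the image placed there during the build.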
How I debugged it
This one gave me a headache for quite a while: it kept dying with a "can't open file" error. I checked the docker logs and still couldn't locate the problem. I wanted to get into the container to look around, only to find the redis container had never come up at all. Only after changing the Dockerfile to the form below did the redis container start; once inside I finally discovered that the redis.conf file had been hidden by the mount.
FROM redis:latest
WORKDIR /data/
RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /usr/local/etc/redis/ \
&& curl http://download.redis.io/redis-stable/redis.conf > /usr/local/etc/redis/redis.conf
# CMD [ "redis-server","/usr/local/etc/redis/redis.conf"]
# Start the container running a shell loop so it doesn't exit.
CMD ["sh","-c","while true;do sleep 1000 ;done"]
That concludes our look at using Docker-Compose to start multiple instances of a container with different configurations, and at how docker compose starts containers. Thanks for reading; for more on the related topics covered above (starting a docker container with multiple network interfaces, the three musketeers of containers, docker-compose as a single-host orchestration tool, and the redis Dockerfile cautionary tale), you can search this site.