
Configuring ZooKeeper as a systemd Service on CentOS 7

# vim /etc/systemd/system/zookeeper.service

[Unit]

Description=zookeeper.service

After=network.target

[Service]

Type=forking

Environment=ZOO_LOG_DIR=/usr/local/zookeeper/

Environment=PATH=/usr/local/jdk8/bin:/usr/local/jdk8/jre/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

ExecStart=/usr/local/zookeeper/bin/zkServer.sh start /usr/local/zookeeper/conf/zoo.cfg

ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop  /usr/local/zookeeper/conf/zoo.cfg

ExecReload=/usr/local/zookeeper/bin/zkServer.sh restart /usr/local/zookeeper/conf/zoo.cfg

User=iflyweb

Restart=always

[Install]

WantedBy=multi-user.target



Start zookeeper: systemctl start zookeeper.service

Stop zookeeper: systemctl stop zookeeper.service

Check process status and logs (important): systemctl status zookeeper.service

Enable start on boot: systemctl enable zookeeper.service

Disable start on boot: systemctl disable zookeeper.service
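
To verify the new unit, reload systemd once and confirm that ZooKeeper is actually serving; a quick sanity check (a sketch, assuming the install path above and the default client port 2181):

# systemctl daemon-reload

# /usr/local/zookeeper/bin/zkServer.sh status

# echo ruok | nc 127.0.0.1 2181    # a healthy server answers "imok"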


How to Quickly Configure Dual NIC Bonding on CentOS 7

Setup: the server sits on two subnets and all four NICs (eno1 through eno4) are cabled. eno1 and eno2 are bonded as 10.103.1.105, eno3 and eno4 as 10.103.0.215, both in mode 6.

bond0: 10.103.1.105

bond1: 10.103.0.215


The bonding driver supports seven modes (0~6): bond0 through bond6. Three are commonly used:

mode=0: load-balancing mode with failover, but it requires switch support and configuration.

mode=1: active-backup mode; if one link goes down, another takes over automatically.

mode=6: load-balancing mode with failover; no switch support or configuration needed.


1. Start NetworkManager

systemctl start NetworkManager


2. Back up the configuration files

mkdir -p /data/bak/

cp -r /etc/sysconfig/network-scripts/ /data/bak/


3. Create the mode 6 bonds with nmcli (adjust accordingly for other modes)

nmcli connection add type bond ifname bond0 mode 6

nmcli connection add type bond-slave ifname eno1 master bond0

nmcli connection add type bond-slave ifname eno2 master bond0


nmcli connection add type bond ifname bond1 mode 6

nmcli connection add type bond-slave ifname eno3 master bond1

nmcli connection add type bond-slave ifname eno4 master bond1


4. Edit the bond0 and bond1 config files and add the IPs

vim /etc/sysconfig/network-scripts/ifcfg-bond-bond0

BOOTPROTO=dhcp

Change this to:

BOOTPROTO=static

Add the IP:

IPADDR=10.103.1.105

NETMASK=255.255.255.128

GATEWAY=10.103.1.126


Edit ifcfg-bond-bond1:

vim /etc/sysconfig/network-scripts/ifcfg-bond-bond1 

BOOTPROTO=dhcp

Change this to:

BOOTPROTO=static

Add the IP:

IPADDR=10.103.0.215

NETMASK=255.255.255.128


5. Restart the network:

systemctl restart network
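
Once the network is back, the bond state can be read through the kernel bonding driver's proc interface; a quick check:

# cat /proc/net/bonding/bond0

# ip addr show bond0

For mode 6 the first command should report Bonding Mode: adaptive load balancing (balance-alb) and list eno1/eno2 as slave interfaces.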


Replacing yum on RHEL 7 with the CentOS 7 Version

When installing software with yum on RHEL, you frequently hit:

This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.

We can swap in CentOS's yum to solve this.


I. Download the yum packages

http://mirrors.163.com/centos/7.6.1810/os/x86_64/Packages/yum-3.4.3-161.el7.centos.noarch.rpm

http://mirrors.163.com/centos/7.6.1810/os/x86_64/Packages/yum-utils-1.1.31-50.el7.noarch.rpm

http://mirrors.163.com/centos/7.6.1810/os/x86_64/Packages/yum-metadata-parser-1.1.4-10.el7.x86_64.rpm

http://mirrors.163.com/centos/7.6.1810/os/x86_64/Packages/yum-plugin-fastestmirror-1.1.31-50.el7.noarch.rpm


http://mirrors.163.com/centos/7.6.1810/os/x86_64/Packages/python-chardet-2.2.1-1.el7_1.noarch.rpm

http://mirrors.163.com/centos/7.6.1810/os/x86_64/Packages/python-kitchen-1.1.1-5.el7.noarch.rpm



II. Uninstall the existing packages

# rpm -qa | grep yum


yum-3.4.3-150.el7.noarch

yum-utils-1.1.31-50.el7.noarch

yum-langpacks-0.4.2-7.el7.noarch

PackageKit-yum-1.0.7-6.el7.x86_64

yum-rhn-plugin-2.0.1-6.el7.noarch

yum-metadata-parser-1.1.4-10.el7.x86_64


# rpm -qa | grep yum | xargs rpm -e --nodeps


III. Install

# rpm -ivh yum-3.4.3-161.el7.centos.noarch.rpm  yum-metadata-parser-1.1.4-10.el7.x86_64.rpm  yum-plugin-fastestmirror-1.1.31-50.el7.noarch.rpm  yum-utils-1.1.31-50.el7.noarch.rpm



This fails with:

rpm >= 0:4.11.3-22 is needed by yum-3.4.3-161.el7.centos.noarch


Download newer versions of the dependencies and update them:


  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-4.11.3-35.el7.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-libs-4.11.3-35.el7.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-build-libs-4.11.3-35.el7.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-python-4.11.3-35.el7.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/python-chardet-2.2.1-1.el7_1.noarch.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/python-kitchen-1.1.1-5.el7.noarch.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/python-urlgrabber-3.10-9.el7.noarch.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/libxml2-python-2.9.1-6.el7_2.3.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-4.11.3-35.el7.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-sign-4.11.3-35.el7.x86_64.rpm

  wget  http://mirrors.163.com/centos/7/os/x86_64/Packages/rpm-build-4.11.3-35.el7.x86_64.rpm     


Update the dependencies:

# rpm -Uvh rpm-4.11.3-35.el7.x86_64.rpm  rpm-libs-4.11.3-35.el7.x86_64.rpm rpm-python-4.11.3-35.el7.x86_64.rpm  rpm-build-libs-4.11.3-35.el7.x86_64.rpm



Install again:

# rpm -ivh yum-3.4.3-161.el7.centos.noarch.rpm  yum-metadata-parser-1.1.4-10.el7.x86_64.rpm  yum-plugin-fastestmirror-1.1.31-50.el7.noarch.rpm  yum-utils-1.1.31-50.el7.noarch.rpm



wget http://mirrors.163.com/.help/CentOS6-Base-163.repo

Then do a global replace inside the downloaded repo file:
:%s/$releasever/7.6.1810/g

Note: 7.6.1810 here is the latest released CentOS version at the time of writing; substitute whatever is current.
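
After the substitution, the repo file still has to be placed where yum looks and the cache rebuilt; a sketch (the target directory is standard, the filename is whatever you saved):

# mv CentOS6-Base-163.repo /etc/yum.repos.d/

# yum clean all

# yum makecache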



IV. Upgrade PHP


1. First confirm that the yum repo addresses are valid:


# yum install epel-release

# rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-7.rpm


2. Confirm the PHP version to install, then install the packages:

# yum list --enablerepo=remi --enablerepo=remi-php72 | grep php


# yum --enablerepo=remi-php72 install php-xml php-soap php-xmlrpc php-mbstring php-json php-gd php-mcrypt
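
The base php package itself is not in the install line above; if the runtime/CLI is needed too, a sketch:

# yum --enablerepo=remi-php72 install php

# php -v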



Kubernetes: Installing the flannel Network Plugin

There are many choices of plugin for inter-container networking, for example:

calico v3.1.3

canal (given calico/flannel versions)

cilium v1.3.0

contiv v1.1.7

flanneld v0.10.0

weave v2.4.1

kube-router v0.2.1

multus v3.1

This post records the installation of the flannel network plugin for Kubernetes.

Note: before installing, the etcd cluster must already be installed and the relevant certificates configured.

1. Install with yum

yum -y install flannel


2. Edit the service unit file

vim /usr/lib/systemd/system/flanneld.service 

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service

 

[Service]

Type=notify

EnvironmentFile=/etc/sysconfig/flanneld

EnvironmentFile=-/etc/sysconfig/docker-network

ExecStart=/usr/bin/flanneld-start \

  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \

  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \

  $FLANNEL_OPTIONS

ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure

 

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service



3. Edit /etc/sysconfig/flanneld

vim /etc/sysconfig/flanneld

# Flanneld configuration options  

 

# etcd url location.  Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="https://192.168.45.182:2379,https://192.168.45.183:2379,https://192.168.45.184:2379"

 

# etcd config key.  This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kubernetes/network"

 

# Any additional options that you want to pass

FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"


Note: FLANNEL_ETCD_ENDPOINTS lists the etcd cluster members' IPs and ports; if etcd is not clustered, list just the one address. Because etcd has certificate verification enabled, FLANNEL_OPTIONS must carry the certificate paths.


4. Create the network config in etcd

etcdctl \

  --ca-file=/etc/kubernetes/ssl/ca.pem \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  mk /kubernetes/network/config '{"Network":"172.17.0.0/16"}'


Note: this step only needs to be run on one member of the etcd cluster.
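
To confirm the key was written, it can be read back with the same credentials (a sketch using the etcdctl v2 syntax shown above):

# etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/kubernetes.pem --key-file=/etc/kubernetes/ssl/kubernetes-key.pem get /kubernetes/network/config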


5. Start flannel

#  systemctl daemon-reload

#  systemctl enable flanneld

#  systemctl start flanneld

#  systemctl status flanneld



6. Verify the network in etcd

# etcdctl \

  --ca-file=/etc/kubernetes/ssl/ca.pem \

  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

  ls -r | grep subnets
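
On the node itself, two more checks are possible (a sketch; the interface is flannel0 with the default UDP backend, flannel.1 with the vxlan backend):

# ip addr show flannel0

# cat /run/flannel/docker    # the DOCKER_NETWORK_OPTIONS file written by mk-docker-opts.sh in the unit above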




Mounting Storage over NFS on RHEL 7


Server A: 192.168.45.182

Server B: 192.168.45.183

Mount directory: /home/wav


rpcbind: converts RPC program numbers into universal addresses.

nfs-server: enables clients to access NFS shares.

nfs-lock / rpc-statd: NFS file locking; recovers file locks when the NFS server crashes and restarts.

nfs-idmap: translates user and group IDs to names, and user and group names to IDs.

/etc/exports: the main configuration file; it controls which file systems are exported to remote hosts, and with which options.



# rpm -qa | grep nfs-utils

# yum install nfs-utils rpcbind


Enable rpcbind, nfs-server, nfs-lock and nfs-idmap at boot

#  systemctl enable nfs-server

#  systemctl enable rpcbind

#  systemctl enable nfs-lock

#  systemctl enable nfs-idmap


Start rpcbind, nfs-server, nfs-lock and nfs-idmap

#  systemctl start rpcbind

#  systemctl start nfs-server

#  systemctl start nfs-lock

#  systemctl start nfs-idmap


Check the NFS status

# systemctl status nfs


Create /home/wav

# mkdir /home/wav


Edit the configuration file /etc/exports

vi /etc/exports

/home/wav 192.168.45.183(rw,no_root_squash,no_all_squash,sync)


Notes:

rw: write access to the shared directory.

no_root_squash: by default, any file request made by root on the client is treated on the server as if it came from user nobody (which UID the request maps to depends on the UID of user "nobody" on the server, not on the client). With no_root_squash, root on the client machine has the same level of access to the file system as root on the server.

sync: all changes to the file system are flushed to disk immediately, and the corresponding write operations are waited for.



The main options that can be set are:

rw: read-write access;

ro: read-only access;

no_root_squash: if the user logging into the NFS host is root, that user keeps root privileges;

root_squash: if the user logging into the NFS host is root, the user is squashed to the anonymous user nobody;

all_squash: every user logging into the NFS host is remapped to the anonymous user nobody, whatever their privileges;

anonuid: map users logging into the NFS host to the given user ID, which must exist in /etc/passwd;

anongid: same as anonuid, but for the group ID;

sync: data is written through to storage synchronously;

async: data is buffered in memory first rather than written straight to disk;

insecure: allow client connections from unprivileged ports (above 1024).




# exportfs -r

Note: if you leave a space between the host and the option list in /etc/exports (e.g. /home/wav 192.168.45.183 (rw,...)), exportfs -r warns:

exportfs: No options for /home/wav 192.168.45.183: suggest 192.168.45.183(sync) to avoid warning

exportfs: No host name given with /home/wav (rw,no_root_squash,no_all_squash,sync), suggest *(rw,no_root_squash,no_all_squash,sync) to avoid warning

With the space removed, as in the line above, the warnings go away.






Important NFS commands

showmount -e: shows the available shares on the local machine

showmount -e <server-ip or hostname>: shows the available shares on the remote server

showmount -d: lists all subdirectories

exportfs -v: shows the exported file systems and options on the server

exportfs -a: exports all shares listed in /etc/exports, or a given name

exportfs -u: unexports all shares listed in /etc/exports, or a given name

exportfs -r: refreshes the export list after modifying /etc/exports
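
On the client (server B, 192.168.45.183, in this example), the share can then be mounted; a minimal sketch, assuming nfs-utils is installed there too:

# showmount -e 192.168.45.182

# mkdir -p /home/wav

# mount -t nfs 192.168.45.182:/home/wav /home/wav

# df -h | grep wav

For a permanent mount, an /etc/fstab entry such as 192.168.45.182:/home/wav /home/wav nfs defaults 0 0 would do.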


https://www.thegeekdiary.com/centos-rhel-7-configuring-an-nfs-server-and-nfs-client/


Controlling Tomcat Start/Stop and Process Supervision with systemd on RHEL 7/CentOS 7

Example: use systemd to stop, start and supervise the Tomcat-based management app listening on port 7000.

Create the service file; service files live under /etc/systemd/system:

vi /etc/systemd/system/tomcat7000.service


# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking

Environment=JAVA_HOME=/usr/server/jdk8/jre
Environment=CATALINA_PID=/usr/server/tomcat-manage-7000/temp/tomcat.pid
Environment=CATALINA_HOME=/usr/server/tomcat-manage-7000
Environment=CATALINA_BASE=/usr/server/tomcat-manage-7000
Environment='CATALINA_OPTS=-Xms2G -Xmx2G -Xss256k -XX:PermSize=256m -XX:MaxPermSize=256m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8 -XX:+UseParNewGC -XX:ParallelGCThreads=8 -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

ExecStart=/usr/server/tomcat-manage-7000/bin/startup.sh
ExecStop=/bin/kill -15 $MAINPID

User=tomcat
Group=tomcat

[Install]
WantedBy=multi-user.target


Save and exit.

Start the service:

# systemctl start tomcat7000.service

Stop the service:

# systemctl stop tomcat7000.service

Check the service status:

# systemctl status tomcat7000.service
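
One note on the supervision part: as written, the unit will not restart Tomcat if it crashes. A common addition (an assumption here, tune the values to taste) is to put the following in the [Service] section and then run systemctl daemon-reload:

Restart=on-failure

RestartSec=10

Enabling it at boot completes the setup:

# systemctl enable tomcat7000.service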


How to Install an etcd Cluster on RHEL 7 or CentOS 7

Installing an etcd cluster on RHEL 7 or CentOS 7 really takes just three steps: install, configure, start.

1. Install etcd with yum

yum -y install etcd


2. Configure

etcd1's configuration:


# grep -vE "^#|^$" /etc/etcd/etcd.conf 

# [member]

ETCD_NAME="etcd1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.1.8:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.1.8:2379,http://127.0.0.1:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.8:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.8:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.8:2380,etcd2=http://192.168.1.9:2380,etcd3=http://192.168.1.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"


etcd2's configuration:

# [member]

ETCD_NAME="etcd2"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.1.9:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.1.9:2379,http://127.0.0.1:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.9:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.9:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.8:2380,etcd2=http://192.168.1.9:2380,etcd3=http://192.168.1.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"


etcd3's configuration:

# [member]

ETCD_NAME="etcd3"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.1.10:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.1.10:2379,http://127.0.0.1:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.10:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.10:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.8:2380,etcd2=http://192.168.1.9:2380,etcd3=http://192.168.1.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"


3. Start etcd

systemctl start etcd

systemctl enable etcd
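
Once etcd is up on all three nodes, cluster health can be checked from any member (etcdctl v2 commands, matching the yum-packaged version):

# etcdctl cluster-health

# etcdctl member list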



Quickly Installing Zabbix Using Docker Containers

Notes: version 3.4.12

Docker must already be installed on the server (installation not covered here).

Start by pulling the required images:

docker pull zabbix/zabbix-server-mysql

docker pull zabbix/zabbix-agent

docker pull zabbix/zabbix-web-nginx-mysql

 

 

1. Start an empty MySQL server instance

docker run --name mysql-server -t \

     -e MYSQL_DATABASE="zabbix" \

     -e MYSQL_USER="zabbix" \

     -e MYSQL_PASSWORD="zabbix_pwd" \

     -e MYSQL_ROOT_PASSWORD="root_pwd" \

     -v /data/mysql/zabbix:/var/lib/mysql \

     -d mysql:5.7

# Enter the container

docker exec -it mysql-server bash

# Log into MySQL

mysql -u root -p

ALTER USER 'root'@'localhost' IDENTIFIED BY 'zbx_passwd';

 

2. Start a Zabbix Java gateway instance

docker run --name zabbix-java-gateway -t \

     -d zabbix/zabbix-java-gateway:latest

     

3. Start a Zabbix server instance

Note: at startup, link this instance to the MySQL server instance created above

docker run --name zabbix-server-mysql -t \

     -e DB_SERVER_HOST="mysql-server" \

     -e MYSQL_DATABASE="zabbix" \

     -e MYSQL_USER="zabbix" \

     -e MYSQL_PASSWORD="zabbix_pwd" \

     -e MYSQL_ROOT_PASSWORD="root_pwd" \

     -e ZBX_JAVAGATEWAY="zabbix-java-gateway" \

     --link mysql-server:mysql \

     --link zabbix-java-gateway:zabbix-java-gateway \

     -p 10051:10051 \

     -d zabbix/zabbix-server-mysql:latest

 

4. Start the Zabbix web interface

and link it to the MySQL server instance and the Zabbix server instance

 

docker run --name zabbix-web-nginx-mysql -t \

     -e DB_SERVER_HOST="mysql-server" \

     -e MYSQL_DATABASE="zabbix" \

     -e MYSQL_USER="zabbix" \

     -e MYSQL_PASSWORD="zabbix_pwd" \

     -e MYSQL_ROOT_PASSWORD="root_pwd" \

     --link mysql-server:mysql \

     --link zabbix-server-mysql:zabbix-server \

     -p 8000:80 \

     -d zabbix/zabbix-web-nginx-mysql:latest

Note: 8000 here is the port published on the host; 80 is the container's internal port.

 

5. Open the Zabbix page and run the installer

http://192.168.1.41:8000/

username: Admin

password: zabbix
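
The zabbix-agent image pulled at the start is never launched in the steps above; a sketch for running it against this server (ZBX_HOSTNAME is an arbitrary example name):

docker run --name zabbix-agent -t \

     -e ZBX_HOSTNAME="docker-host" \

     -e ZBX_SERVER_HOST="zabbix-server" \

     --link zabbix-server-mysql:zabbix-server \

     -p 10050:10050 \

     -d zabbix/zabbix-agent:latest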

 

Official reference:

https://www.zabbix.com/documentation/3.4/zh/manual/installation/containers



Fixing "-bash: /usr/bin/ls: Argument list too long"

When a wildcard expands to more file names than the kernel's argument-length limit allows, commands such as ls and rm fail with this error. Feed the names through find instead:

find . -name "log.2017*" -exec rm {} +
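
With GNU find, the built-in -delete action is an equivalent that avoids spawning rm entirely:

find . -name "log.2017*" -delete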

Installing docker-ce on RHEL 7 and Changing the Default Storage Path and devicemapper Size

1. Install docker-ce

First remove any older Docker versions (if present):

yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce

Next, install the required packages:

yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the docker-ce repo:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Finally, install docker-ce:

yum install docker-ce

Note: after installation, docker info reports Server Version: 18.06.0-ce


2. Change the default storage location

After installation Docker stores its data under /var/lib/docker. Copy that directory to /data, move the original aside, and create a symlink in its place:

# systemctl stop docker.service

# cd /var/lib

# cp -r /var/lib/docker /data/

# mv docker docker.bak

# ln -s /data/docker docker

# systemctl start docker.service

Checking the storage location again, it now lives under /data/docker.
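
Where Docker is actually storing data can be confirmed from docker info:

# docker info | grep -i "Docker Root Dir"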



3. Change the default devicemapper size

devicemapper is the default storage driver for Docker Engine on RHEL; it has two configuration modes: loop-lvm and direct-lvm.

By default a container started by Docker gets a root filesystem capped at 10 GB:

# docker info  |grep Base

Base Device Size: 10.74GB

Now raise the default to 50 G (30 G would also work; this is just a test).

How to change it:

Edit /lib/systemd/system/docker.service

and append the following parameters after ExecStart=/usr/bin/dockerd:

 --storage-opt dm.basesize=50G --storage-opt dm.loopdatasize=1024G --storage-opt dm.loopmetadatasize=10G --storage-opt dm.fs=xfs



dm.basesize: limits the size of containers and images; defaults to 10G

dm.loopdatasize: the storage pool size; defaults to 100G

dm.loopmetadatasize: the metadata size; defaults to 2G

dm.datadev: the storage pool device; by default a /var/lib/docker/devicemapper/devicemapper/data file is generated

dm.metadatadev: the metadata device; by default a /var/lib/docker/devicemapper/devicemapper/metadata file is generated

dm.fs: the filesystem type

Note: when adjusting this in production, back up your data first.

After the change, restart docker and check:
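
Concretely (the Base Device Size reported afterwards should reflect the new 50 G limit):

# systemctl daemon-reload

# systemctl restart docker

# docker info | grep Base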


That completes the change.

Docker officially recommends against loop-lvm in production; use direct-lvm instead. For a comparison of the two, see Red Hat's overview:

https://developers.redhat.com/blog/2014/09/30/overview-storage-scalability-docker/