
Using systemd to start, stop and supervise Tomcat on RHEL7/CentOS7

Example: use systemd to stop, start, and supervise the Tomcat-based management system that listens on port 7000.

Create the service unit file. Unit files live under:
/etc/systemd/system

vi /etc/systemd/system/tomcat7000.service


# Systemd unit file for tomcat
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target

[Service]
Type=forking

Environment=JAVA_HOME=/usr/server/jdk8/jre
Environment=CATALINA_PID=/usr/server/tomcat-manage-7000/temp/tomcat.pid
Environment=CATALINA_HOME=/usr/server/tomcat-manage-7000
Environment=CATALINA_BASE=/usr/server/tomcat-manage-7000
Environment='CATALINA_OPTS=-Xms2G -Xmx2G -Xss256k -XX:PermSize=256m -XX:MaxPermSize=256m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8 -XX:+UseParNewGC -XX:ParallelGCThreads=8 -XX:+CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'

ExecStart=/usr/server/tomcat-manage-7000/bin/startup.sh
ExecStop=/bin/kill -15 $MAINPID

User=tomcat
Group=tomcat

[Install]
WantedBy=multi-user.target


Save and exit.

Start the service

# systemctl  start  tomcat7000.service

Stop the service

# systemctl  stop  tomcat7000.service

Check the service status

# systemctl  status tomcat7000.service
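After creating or editing the unit file, make systemd re-read its configuration and, if desired, enable the service at boot. Note that the unit as written does not restart Tomcat automatically if the JVM dies, so adding Restart=on-failure under [Service] is one option for the process-supervision behaviour the title mentions. A short sketch using the unit name from this example:

# systemctl daemon-reload
# systemctl enable tomcat7000.service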


How to install an etcd cluster on RHEL7 or CentOS7

Installing an etcd cluster on RHEL7 or CentOS7 really comes down to three steps: install, configure, start.

1. Install etcd with yum

yum -y install etcd


2. Configuration

etcd1 configuration:


# grep -vE "^#|^$" /etc/etcd/etcd.conf 

# [member]

ETCD_NAME="etcd1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.1.8:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.1.8:2379,http://127.0.0.1:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.8:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.8:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.8:2380,etcd2=http://192.168.1.9:2380,etcd3=http://192.168.1.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"


etcd2 configuration:

# [member]

ETCD_NAME="etcd2"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.1.9:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.1.9:2379,http://127.0.0.1:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.9:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.9:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.8:2380,etcd2=http://192.168.1.9:2380,etcd3=http://192.168.1.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"


etcd3 configuration:

# [member]

ETCD_NAME="etcd3"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.1.10:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.1.10:2379,http://127.0.0.1:2379"

#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.10:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.10:2379"

ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.8:2380,etcd2=http://192.168.1.9:2380,etcd3=http://192.168.1.10:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"


3. Start etcd

systemctl start etcd

systemctl enable etcd
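Once etcd has started on all three nodes, membership and health can be checked. A quick sanity check, assuming the v2 etcdctl interface that the stock CentOS 7 etcd package exposes by default:

etcdctl member list
etcdctl cluster-health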



Installing Zabbix quickly with Docker containers

Note: version 3.4.12

Docker must already be installed on the server (Docker installation itself is omitted here).

First, pull the required images:

docker pull zabbix/zabbix-server-mysql

docker pull zabbix/zabbix-agent

docker pull zabbix/zabbix-web-nginx-mysql

 

 

1. Start an empty MySQL server instance

docker run --name mysql-server -t \
     -e MYSQL_DATABASE="zabbix" \
     -e MYSQL_USER="zabbix" \
     -e MYSQL_PASSWORD="zabbix_pwd" \
     -e MYSQL_ROOT_PASSWORD="root_pwd" \
     -v /data/mysql/zabbix:/var/lib/mysql \
     -d mysql:5.7

# Enter the container

docker exec -it mysql-server bash

# Log in to MySQL

mysql -u root -p

ALTER USER 'root'@'localhost' IDENTIFIED BY 'zbx_passwd';

 

2. Start the Zabbix Java gateway instance

docker run --name zabbix-java-gateway -t \
     -d zabbix/zabbix-java-gateway:latest

     

3. Start the Zabbix server instance

Note: link this instance to the MySQL server instance created above when starting it.

docker run --name zabbix-server-mysql -t \
     -e DB_SERVER_HOST="mysql-server" \
     -e MYSQL_DATABASE="zabbix" \
     -e MYSQL_USER="zabbix" \
     -e MYSQL_PASSWORD="zabbix_pwd" \
     -e MYSQL_ROOT_PASSWORD="root_pwd" \
     -e ZBX_JAVAGATEWAY="zabbix-java-gateway" \
     --link mysql-server:mysql \
     --link zabbix-java-gateway:zabbix-java-gateway \
     -p 10051:10051 \
     -d zabbix/zabbix-server-mysql:latest

 

4. Start the Zabbix web interface

and link it to the MySQL server instance and the Zabbix server instance.

 

docker run --name zabbix-web-nginx-mysql -t \
     -e DB_SERVER_HOST="mysql-server" \
     -e MYSQL_DATABASE="zabbix" \
     -e MYSQL_USER="zabbix" \
     -e MYSQL_PASSWORD="zabbix_pwd" \
     -e MYSQL_ROOT_PASSWORD="root_pwd" \
     --link mysql-server:mysql \
     --link zabbix-server-mysql:zabbix-server \
     -p 8000:80 \
     -d zabbix/zabbix-web-nginx-mysql:latest

Note: here 8000 is the port published on the host and 80 is the port inside the container.

 

5. Open the Zabbix web page and run the installation

http://192.168.1.41:8000/

username: Admin

password: zabbix
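The zabbix-agent image pulled at the start is not used in the steps above. A minimal sketch of starting it and pointing it at the server container (the ZBX_HOSTNAME value and published port are illustrative; ZBX_SERVER_HOST and ZBX_HOSTNAME are environment variables supported by the official zabbix/zabbix-agent image):

docker run --name zabbix-agent -t \
     -e ZBX_HOSTNAME="docker-host" \
     -e ZBX_SERVER_HOST="zabbix-server" \
     --link zabbix-server-mysql:zabbix-server \
     -p 10050:10050 \
     -d zabbix/zabbix-agent:latest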

 

Official reference:

https://www.zabbix.com/documentation/3.4/zh/manual/installation/containers



Fix for "-bash: /usr/bin/ls: Argument list too long"

When a wildcard expands to too many files, commands such as ls or rm hit the kernel's argument-length limit. Let find build the argument list instead:

find . -name "log.2017*"  -exec rm {} +
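An equivalent alternative, for those who prefer xargs (null-delimited so unusual filenames are handled safely):

find . -name "log.2017*" -print0 | xargs -0 rm -f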

Installing docker-ce on RHEL7 and changing the default storage path and devicemapper size

1. Install docker-ce

First remove any older Docker versions, if present:

yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce

Next, install the required packages:

yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the docker-ce repo:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Finally, install docker-ce:

yum install docker-ce

Note: after installation, Server Version: 18.06.0-ce


2. Change the default storage location

After installation Docker stores its data under /var/lib/docker by default. Copy that directory to /data, move the original out of the way, and create a symlink in its place:

# systemctl  stop  docker.service

# cd /var/lib

# cp -r  /var/lib/docker  /data/

# mv docker docker.bak

# ln -s /data/docker  docker

# systemctl  start docker.service

Check again and the Docker storage is now under /data/docker.



3. Change the default devicemapper size

devicemapper is the default storage driver for Docker Engine on RHEL; it has two configuration modes: loop-lvm and direct-lvm.

By default, the root filesystem of a container started by Docker is 10GB:

# docker info  |grep Base

Base Device Size: 10.74GB

Now change the default size to 50G (30G would also be fine; this is a test run).

How to change it:

Edit /lib/systemd/system/docker.service

and append the following options after ExecStart=/usr/bin/dockerd:

 --storage-opt dm.basesize=50G --storage-opt dm.loopdatasize=1024G --storage-opt dm.loopmetadatasize=10G --storage-opt dm.fs=xfs



dm.basesize: default 10G; caps the base device size of containers and images

dm.loopdatasize: size of the loopback storage pool, default 100G

dm.loopmetadatasize: size of the loopback metadata, default 2G

dm.datadev: storage-pool device; by default a /var/lib/docker/devicemapper/devicemapper/data file is created

dm.metadatadev: metadata device; by default a /var/lib/docker/devicemapper/devicemapper/metadata file is created

dm.fs: filesystem type

Note: back up your data before adjusting this in a production environment.

After making the change, restart Docker and verify, as shown below.
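A minimal sequence to apply and verify the change (daemon-reload re-reads the edited unit file; docker info should now report the new Base Device Size):

# systemctl daemon-reload
# systemctl restart docker
# docker info | grep Base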


That completes the change.

Docker officially recommends not using loop-lvm in production; use direct-lvm instead. See the Red Hat article below for a comparison of the two.

https://developers.redhat.com/blog/2014/09/30/overview-storage-scalability-docker/


Installing and upgrading pip on Linux

Method 1:

Download and install from:

https://files.pythonhosted.org/packages/ae/e8/2340d46ecadb1692a1e455f13f75e596d4eab3d11a57446f08259dee8f02/pip-10.0.1.tar.gz

   

Method 2:

curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"   

python get-pip.py   

    

Method 3:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

yum install ./epel-release-latest-*.noarch.rpm

yum -y install python-pip



Upgrade:

pip install --upgrade pip    
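To confirm which pip version is now active and which Python interpreter it belongs to:

pip -V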

       


Configuring nginx to allow cross-origin (CORS) access

I. What is a cross-origin request [reposted]

Put simply, because of the JavaScript same-origin policy, a script served from a.com cannot operate on objects served from b.com or even c.a.com.


Same origin means the same protocol, domain name, and port. Two points deserve special attention:

1. If the cross-origin problem is caused by the protocol or the port, there is nothing the front end can do about it.

2. For cross-origin purposes an origin is identified purely by "protocol + domain + port"; two different domain names are still cross-origin even when they resolve to the same IP address.


II. How to configure cross-origin access in nginx


Just add the following directives to the relevant location block under server in the nginx configuration:

location / {
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods 'GET,PUT,DELETE,POST,OPTIONS';
    add_header Access-Control-Allow-Headers 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,Content-MD5';
}


Explanation:

1. Access-Control-Allow-Origin

By default the server does not allow cross-origin access. After Access-Control-Allow-Origin * is configured on nginx, the server declares that it accepts any request Origin, i.e. all cross-origin requests.


2. Access-Control-Allow-Headers prevents the following error:

Request header field Content-Type is not allowed by Access-Control-Allow-Headers in preflight response.


This error means the Content-Type value of the current request is not allowed. It typically appears because we sent a request with the "application/json" content type. This involves the concept of a preflight request; see the note on preflight requests below.


3. Access-Control-Allow-Methods prevents the following error:

Method PUT is not allowed by Access-Control-Allow-Methods in preflight response.


A "preflight request" uses the OPTIONS method, so the server must allow that method.
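A common companion to the headers above (a widely used nginx pattern, not from the original text) is to answer the preflight directly in the same location block, so the OPTIONS request never reaches the backend:

    if ($request_method = 'OPTIONS') {
        return 204;
    }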




Quick installation and deployment of Ceph on RHEL7.2

I. Installation preparation

1. Installation overview



2. Create a user

useradd -d /home/ceph -m ceph

passwd ceph


echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

chmod 0440 /etc/sudoers.d/ceph

sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers


3. Install and configure NTP

yum install -y ntp ntpdate ntp-doc

ntpdate  2.cn.pool.ntp.org

hwclock --systohc

systemctl enable ntpd.service

systemctl start ntpd.service


4. Install open-vm-tools

If you are running all nodes inside VMware, you need to install this virtualization utility. Otherwise skip this step.
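A sketch of that step for RHEL7/CentOS7, where the utility ships as the open-vm-tools package in the standard repositories:

yum install -y open-vm-tools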




5. Disable SELinux

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
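The change in /etc/selinux/config only applies after a reboot; to also switch the running system out of enforcing mode for the current session:

setenforce 0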


6. Configure /etc/hosts

192.168.1.41            c001041

192.168.1.42            c001042

192.168.1.43            c001043


sed -i '$ a\192.168.1.41\tc001041'  /etc/hosts

sed -i '$ a\192.168.1.42\tc001042'  /etc/hosts

sed -i '$ a\192.168.1.43\tc001043'  /etc/hosts



II. Configure SSH

# su - ceph

$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 

Created directory '/home/ceph/.ssh'.

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /home/ceph/.ssh/id_rsa.

Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.

The key fingerprint is:

47:7f:10:c9:8d:bd:22:6f:2c:96:c9:6b:a9:0a:0a:d0 ceph@h0073021

The key's randomart image is:

+--[ RSA 2048]----+

|           ..=   |

|            +.o  |

|          . .  . |

| .       ......  |

|. E     S..*...  |

|.        .* +.   |

|.   .    . =     |

| . . .    +      |

|  .   ...o       |

+-----------------+


Note: just press Enter at every prompt.

Edit the file below and add the following content:

vim ~/.ssh/config


Host c001041

        Hostname c001041

        User ceph

 

Host c001042

        Hostname c001042

        User ceph

 

Host c001043

        Hostname c001043

        User ceph

 

$ sudo chmod 644 ~/.ssh/config

$ sudo ssh-keyscan c001041 c001042 c001043 >> ~/.ssh/known_hosts

$ ssh-copy-id c001041

$ ssh-copy-id c001042

$ ssh-copy-id c001043

Note: all of the operations above are performed on c001041.
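Before moving on, it is worth a quick check (illustrative) that passwordless SSH now works from the admin node to the other nodes:

$ ssh c001042 hostname
$ ssh c001043 hostname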

III. Configure firewalld

systemctl start firewalld

systemctl enable firewalld

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent

sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent

sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent

sudo firewall-cmd --reload

Open ports 6800-7300 on the OSD nodes:

sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

sudo firewall-cmd --reload

Note: this step can be skipped if you are not using a firewall.

Ceph monitors talk to each other on port 6789 by default; OSDs talk to each other on ports in the 6800-7300 range by default.



IV. Create the Ceph yum repository on every node

Create ceph.repo under /etc/yum.repos.d/ and put the following content in it:

[Ceph]

name=Ceph packages for $basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1


[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1


[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1


Note: USTC mirror: http://mirrors.ustc.edu.cn/ceph/

V. Create the Ceph cluster

1. Prepare the installation on the admin node c001041

ssh root@c001041

su - ceph

# mkdir  ~/ceph-cluster

# cd  ~/ceph-cluster


$ sudo yum install http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm

Install ceph-deploy:

$ sudo yum install ceph-deploy


If you run into trouble after installing Ceph, the following commands purge the packages and configuration:

$ sudo ceph-deploy purge c001041 c001042 c001043

$ sudo ceph-deploy purgedata c001041 c001042 c001043

$ sudo ceph-deploy forgetkeys


2. Create the Ceph cluster

$ sudo ceph-deploy new c001041  c001042 c001043


$ sudo vim ceph.conf

Add the following at the bottom of the [global] section:

public network = 192.168.1.0/24

osd pool default size = 2

Save and exit.

3. Install Ceph on every Ceph node

$ sudo ceph-deploy install c001041  c001042 c001043


Configure the initial monitor(s) and gather all the keys:

# ceph-deploy mon create-initial


4. Create the OSDs

Add two OSDs: log in to each Ceph node and create a directory for the OSD daemon.

#ssh c001042

#sudo mkdir /var/local/osd0

#exit


#ssh c001043

#sudo mkdir /var/local/osd1

#exit

Then run ceph-deploy from the admin node (c001041) to prepare the OSDs:

$ sudo ceph-deploy osd prepare c001042:/var/local/osd0 c001043:/var/local/osd1


Finally, activate the OSDs:

$ sudo ceph-deploy osd activate  c001042:/var/local/osd0 c001043:/var/local/osd1


Make sure you have the correct permissions on ceph.client.admin.keyring:

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


Check the cluster health:

[ceph@c001041 ceph-cluster]$ ceph -s

    cluster 8a529d9d-046e-472b-a771-98be158b7405

     health HEALTH_OK

     monmap e1: 3 mons at {c001041=192.168.1.41:6789/0,c001042=192.168.1.42:6789/0,c001043=192.168.1.43:6789/0}

            election epoch 4, quorum 0,1,2 c001041,c001042,c001043

     osdmap e10: 2 osds: 2 up, 2 in

            flags sortbitwise,require_jewel_osds

      pgmap v55: 64 pgs, 1 pools, 0 bytes data, 0 objects

            13079 MB used, 89270 MB / 102350 MB avail

                  64 active+clean


$ ceph --version

ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)




Basic firewalld usage on RHEL7/CentOS7

Start/stop/check the firewalld service

# systemctl start firewalld.service

# systemctl stop firewalld.service

# systemctl status firewalld.service


Enable at boot

# systemctl enable firewalld


Disable

# systemctl disable firewalld


Show the version: firewall-cmd --version

Show help: firewall-cmd --help

Show the state: firewall-cmd --state

List all open ports: firewall-cmd --zone=public --list-ports

Reload the firewall rules: firewall-cmd --reload

Show the active zones: firewall-cmd --get-active-zones

Show which zone an interface belongs to: firewall-cmd --get-zone-of-interface=eth0

Drop all packets (panic mode on): firewall-cmd --panic-on

Turn panic mode off: firewall-cmd --panic-off

Check whether panic mode is on: firewall-cmd --query-panic



To list details of default and active zones

# firewall-cmd --get-default-zone

# firewall-cmd --get-active-zones

# firewall-cmd --list-all



To add/remove interfaces to zones

Add the interface "eth1" to the "public" zone:

# firewall-cmd --zone=public --change-interface=eth1




Open a port (--permanent makes the rule persistent; without it the rule is lost after a restart)

firewall-cmd --add-port=[YOUR PORT]/tcp

firewall-cmd --add-port=22/tcp

firewall-cmd --zone=public --add-port=8090/tcp




Allow 192.168.5.20, 192.168.5.22 and 192.168.6.80 to access port 6379 on this host:

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.5.20" port protocol="tcp" port="6379" accept'

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.5.22" port protocol="tcp" port="6379" accept'

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.6.80" port protocol="tcp" port="6379" accept'


Check the resulting configuration:

firewall-cmd --list-all


Delete a rule:

firewall-cmd --permanent --remove-rich-rule='rule family="ipv4" source address="192.168.5.20" port protocol="tcp" port="6379" accept'


Open a specific port range for a specific IP:

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.5.20" port protocol="tcp" port="8090-8099" accept'


Add a port range:

firewall-cmd --zone=public --add-port=10000-20000/udp --permanent
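Rules added with --permanent are only written to the permanent configuration; reload the firewall for them to take effect in the running rule set:

firewall-cmd --reload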


https://firewalld.org/