Ceph 3.0
    Deployment: Ansible (ceph-ansible)
    Daemons run as plain local services on the hosts
    OSD object store: FileStore
Ceph 5.0
    Deployment: cephadm (containerized) or Rook
    OSD object store: BlueStore
Clustering background:
    RHEL 6 HA: Conga
    RHEL 7 HA: Pacemaker
    Voting / quorum (arbitration): the cluster needs more than half of the votes,
    i.e. quorum = floor(N/2) + 1
        N=4: 4/2 + 1 = 3 votes needed  # only 1 node may fail (4-1=3 meets quorum; 4-2=2 does not)
        N=5: 5/2 + 1 = 3 votes needed  # 2 nodes may fail (5-2=3 still meets quorum)
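Ceph monitors use the same majority rule. A quick check of the current monitor quorum (monitor names serverc/serverd/servere come from this classroom cluster):
[ceph@servera ~]$ ceph quorum_status --format json-pretty   # lists quorum members and the elected leader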
Rules for deploying Ceph:
1) Node count: 3, 5, 7, 9 ... an odd number of nodes  # ansible can scale out OSDs and monitors later
    3 monitors
    9 OSDs
    mgr
    client
    mds
    rgw
2) Failure domain: ***** CRUSH MAP, CRUSH RULE *****  controls where PG replicas are placed: per OSD, host, rack, or room
    The default failure domain is the host (see the osd tree check below)
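The bucket hierarchy that CRUSH walks for host-based placement can be inspected directly:
[ceph@servera ~]$ ceph osd tree   # shows root default -> host buckets -> osds; the default rule puts each replica on a different host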
Exam note: before deploying Ceph, make sure the clocks on all nodes are synchronized
[root@serverc ~]# ntpdate classroom.example.com
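To check the offset against the time server without actually stepping the clock, ntpdate has a query-only mode:
[root@serverc ~]# ntpdate -q classroom.example.com   # report offset only, do not set the clock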
all.yml (key ceph-ansible group variables):
monitor_interface: eth0                    # NIC the monitors listen on
journal_size: 1024                         # OSD journal size, 1024 MB
public_network: 172.25.250.0/24            # client- and monitor-facing network
cluster_network: "{{ public_network }}"    # OSD replication/heartbeat network; ideally a separate high-bandwidth (e.g. 10Gb/s) link, here shared with the public network
ceph_conf_overrides:                       # extra parameters rendered into ceph.conf
  global:
    mon_allow_pool_delete: true            # allow deleting pools (forbidden by default)
    mon_osd_allow_primary_affinity: 1      # allow tuning per-OSD primary affinity (which OSD acts as a PG's primary)
    mon_clock_drift_allowed: 0.5           # tolerated clock skew between monitors: 0.5 s
    osd_pool_default_size: 2               # default number of object replicas: 2
    osd_pool_default_min_size: 1           # minimum replicas required to keep serving I/O: 1
    mon_pg_warn_min_per_osd: 0             # 0 disables the "too few PGs per OSD" health warning
  client:
    rbd_default_features: 1                # feature bit 1 = layering only, for kernel RBD client compatibility
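To verify that an override actually reached the running daemons, query a daemon's admin socket on the host where it runs (daemon name mon.serverc is from this classroom setup):
[root@serverc ~]# ceph daemon mon.serverc config get mon_allow_pool_delete
[root@serverc ~]# ceph daemon mon.serverc config show | grep osd_pool_default   # dump all settings and filter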
Packages the client needs:
    ceph-common   # CLI tools (ceph, rados, rbd) plus mount.ceph; covers RBD and kernel CephFS clients
    ceph-fuse     # userspace CephFS client (FUSE)
    rgw           # accessed over plain HTTP (S3/Swift API), no Ceph packages required on the client
    cephfs        # kernel mount: ceph-common; FUSE mount: ceph-fuse
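A quick sanity check that the client packages are in place:
[root@servera ~]# rpm -q ceph-common ceph-fuse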
Client access to Ceph storage:
Create a pool (keep pg_num = pgp_num):
    osd pool create <poolname> <pg_num> {<pgp_num>} {replicated|erasure} {<erasure_code_profile>} {<rule>} {<expected_num_objects>}
        pg_num / pgp_num       placement group counts
        replicated | erasure   pool type, default replicated
        <rule>                 CRUSH rule, default rule 0
        replica size           comes from osd_pool_default_size in ceph.conf [global] (2 here)
CRUSH rule (output of ceph osd crush rule dump):
[
{
"rule_id": 0,
"rule_name": "replicated_rule",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host" # 默认 基于 host 故障转移域
},
{
"op": "emit"
}
]
}
]
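Handy commands for inspecting CRUSH rules:
[ceph@servera ~]$ ceph osd crush rule ls                      # list rule names
[ceph@servera ~]$ ceph osd crush rule dump replicated_rule    # dump one rule as JSON (the output above)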
1) Replicated pools
[ceph@servera ~]$ ceph osd pool create mypool 16 16
pool 'mypool' created
[ceph@servera ~]$
Inspect the pool:
[ceph@servera ~]$ ceph osd pool ls detail
pool 1 'mypool' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 24 flags hashpspool stripe_width 0
[ceph@servera ~]$
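The replica count can be read and changed per pool after creation (3 below is only an example value):
[ceph@servera ~]$ ceph osd pool get mypool size    # 2, inherited from osd_pool_default_size
[ceph@servera ~]$ ceph osd pool set mypool size 3  # example: raise to 3 replicas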
2) Erasure-coded pools
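The notes stop here; a minimal sketch of creating an erasure-coded pool, assuming the stock default profile (k=2 data chunks, m=1 coding chunk on this release) and an illustrative pool name myecpool:
[ceph@servera ~]$ ceph osd erasure-code-profile ls            # list available profiles
[ceph@servera ~]$ ceph osd erasure-code-profile get default   # shows k, m, and the plugin in use
[ceph@servera ~]$ ceph osd pool create myecpool 16 16 erasure default
[ceph@servera ~]$ ceph osd pool ls detail                     # the new pool is listed as 'erasure'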
Prerequisites for a client to access Ceph:
1) Packages: ceph-common, ceph-fuse
2) /etc/ceph/ceph.conf   # lets the client find the monitors and the public_network
[ceph@servera ~]$ cat /etc/ceph/ceph.conf
[global]
cluster network = 172.25.250.0/24
fsid = 4644e84e-57b4-489e-810c-122667a4ade3
mon host = 172.25.250.12,172.25.250.13,172.25.250.14
mon_allow_pool_delete = True
mon_clock_drift_allowed = 0.5
mon_osd_allow_primary_affinity = 1
mon_pg_warn_min_per_osd = 0
osd_pool_default_min_size = 1
osd_pool_default_size = 2
public network = 172.25.250.0/24
[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
[client]
rbd_default_features = 1
[ceph@servera ~]$
3) A keyring file to authenticate the client (cephx):
[ceph@servera ~]$ cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQCywBFicG+uFxAADoe1pg1lTzyDjtscIae3Bw==
[ceph@servera ~]$
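client.admin grants full access; in practice you would mint a restricted user instead. A hedged sketch (the user name client.user1 and its caps are illustrative):
[ceph@servera ~]$ ceph auth get-or-create client.user1 mon 'allow r' osd 'allow rw pool=mypool' -o /etc/ceph/ceph.client.user1.keyring
[ceph@servera ~]$ ceph auth get client.user1   # show the key and caps just created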
[student@servera ceph]$ su - ceph
Password:
Last login: Sun Feb 20 14:10:29 CST 2022 on pts/2
[ceph@servera ~]$ cd /etc/ceph/
[ceph@servera ceph]$ ls
ceph.client.admin.keyring ceph.conf ceph.d rbdmap
[ceph@servera ceph]$ chmod a+r ceph.client.admin.keyring
[ceph@servera ceph]$ ls -l
total 12
-rw-r--r-- 1 ceph ceph 63 Feb 20 12:20 ceph.client.admin.keyring
-rw-r--r-- 1 ceph ceph 639 Feb 20 12:20 ceph.conf
drwxr-xr-x 2 ceph ceph 23 Feb 20 12:20 ceph.d
-rw-r--r-- 1 root root 92 Nov 23 2017 rbdmap
[ceph@servera ceph]$ su - student
Password:
Last login: Sun Feb 20 14:31:37 CST 2022 on pts/2
[student@servera ~]$ ceph -s
  cluster:
    id:     4644e84e-57b4-489e-810c-122667a4ade3
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum serverc,serverd,servere
    mgr: serverc(active), standbys: serverd, servere
    osd: 9 osds: 9 up, 9 in
  data:
    pools:   1 pools, 16 pgs
    objects: 0 objects, 0 bytes
    usage:   967 MB used, 169 GB / 170 GB avail
    pgs:     16 active+clean
[student@servera ~]$
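For per-pool capacity alongside the cluster totals shown above:
[student@servera ~]$ ceph df   # global usage plus a per-pool breakdown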
Exercise: create an object in mypool by storing the /etc/hosts file in the pool.
List the objects in the pool:
[ceph@servera ceph]$ rados -p mypool ls
Upload the local file /etc/hosts into the mypool pool, saved as object "hosts":
[ceph@servera ceph]$ rados -p mypool put hosts /etc/hosts
[ceph@servera ceph]$ rados -p mypool ls
hosts
[ceph@servera ceph]$
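Round-tripping the object back out (and cleaning up) uses the matching rados subcommands; /tmp/hosts is just an arbitrary local destination:
[ceph@servera ceph]$ rados -p mypool get hosts /tmp/hosts   # download the object to a local file
[ceph@servera ceph]$ rados -p mypool rm hosts               # delete the object from the pool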