Ceph - installing Ceph on Ubuntu
Environment requirements:
ceph-deploy * 1 (hostname : ceph-admin ip:192.168.100.12)
ceph-mon * 1 (hostname : ceph-mon ip:192.168.100.13)
ceph-osd * 3 (hostname : ceph-1/2/3 ip:192.168.100.9-11)
Add the Ceph release key
Install the ceph-deploy and ntp packages
Configure passwordless SSH so ceph-admin can deploy to the other nodes
Install Ceph
Verify that Ceph was installed successfully
First, edit /etc/hosts
# add the host/IP mapping of every node
$ vim /etc/hosts
192.168.100.12 ceph-admin
192.168.100.13 ceph-mon
192.168.100.9 ceph-1
192.168.100.10 ceph-2
192.168.100.11 ceph-3
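Before moving on, it is worth confirming that every node name actually appears in the hosts file, since ceph-deploy resolves nodes by name. The helper below is a hypothetical sketch (not part of the original steps); it checks a hosts-format file for each expected name:

```shell
# Hypothetical sanity check: verify each node name has an entry in a
# hosts-format file before running ceph-deploy against it.
check_hosts_file() {
    hosts_file=$1; shift
    for name in "$@"; do
        # match: an IP, whitespace, then the exact host name
        if grep -qE "^[0-9.]+[[:space:]]+${name}([[:space:]]|\$)" "$hosts_file"; then
            echo "$name ok"
        else
            echo "$name MISSING"
        fi
    done
}

# Example against a copy of the entries above:
cat > /tmp/hosts.test <<'EOF'
192.168.100.12 ceph-admin
192.168.100.13 ceph-mon
192.168.100.9 ceph-1
192.168.100.10 ceph-2
192.168.100.11 ceph-3
EOF
check_hosts_file /tmp/hosts.test ceph-admin ceph-mon ceph-1 ceph-2 ceph-3
```

Run it against the real /etc/hosts on each node; any line ending in MISSING means ceph-deploy will fail to reach that host by name.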
Add the Ceph release key
$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
Add the Ceph sources list
# Replace {ceph-stable-release} with the Ceph release you want to install, e.g. firefly or hammer.
$ echo deb http://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
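To make the substitution concrete, here is what the generated line looks like for the hammer release on an Ubuntu trusty host (trusty is just an example; `lsb_release -sc` supplies the real codename):

```shell
# Example expansion of the sources.list line above, assuming the
# hammer release and the trusty codename.
release=hammer
codename=trusty
line="deb http://download.ceph.com/debian-${release}/ ${codename} main"
echo "$line"
# deb http://download.ceph.com/debian-hammer/ trusty main
```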
Install the ceph-deploy and ntp packages
$ apt-get update && apt-get upgrade
$ apt-get install ceph-deploy ntp -y
or
$ pip install ceph-deploy
Configure passwordless SSH so ceph-admin can deploy to the other nodes without being prompted for a password
$ ssh-keygen
$ ssh-copy-id ceph-mon
$ ssh-copy-id ceph-1
$ ssh-copy-id ceph-2
$ ssh-copy-id ceph-3
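The four ssh-copy-id invocations can also be collapsed into a loop. The sketch below prints the commands as a dry run; drop the `echo` to actually copy the key (each node will prompt for its password once):

```shell
# Dry-run sketch: one ssh-copy-id per node. NODES is just the host
# names defined in /etc/hosts earlier.
NODES="ceph-mon ceph-1 ceph-2 ceph-3"
for node in $NODES; do
    echo "ssh-copy-id $node"   # remove the echo to run for real
done
```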
Install Ceph
$ mkdir ~/ceph-deploy
$ cd ~/ceph-deploy
# Create the cluster: ceph-deploy new {ceph monitors}
$ ceph-deploy new ceph-mon
# Change the default replica size by adding the following
# settings to the [global] section of ceph.conf
$ vim ceph.conf
[global]
osd pool default size = 2
mon_clock_drift_allowed = 2
mon_clock_drift_warn_backoff = 30
# Install Ceph on the other machines; here we install the hammer release
$ ceph-deploy install --release hammer ceph-mon ceph-3 ceph-2 ceph-1
# Initialize the monitor
$ ceph-deploy mon create-initial
$ ceph-deploy gatherkeys ceph-mon
# Add OSDs (here we use the whole sdb disk as the OSD)
# ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
# If the journal is on the same device:
$ ceph-deploy osd prepare ceph-1:sdb ceph-2:sdb ceph-3:sdb
$ ceph-deploy osd activate ceph-1:/dev/sdb1 ceph-2:/dev/sdb1 ceph-3:/dev/sdb1
# If the journal is on a different device:
$ ceph-deploy osd prepare ceph-1:sdb:/dev/ssd ceph-2:sdb:/dev/ssd ceph-3:sdb:/dev/ssd
$ ceph-deploy osd activate ceph-1:/dev/sdb1:/dev/ssd1 ceph-2:/dev/sdb1:/dev/ssd1 ceph-3:/dev/sdb1:/dev/ssd1
# Copy the Ceph config files to /etc/ceph/
$ mkdir /etc/ceph
$ cp ~/ceph-deploy/ceph.c* /etc/ceph
# Copy the admin keyring to the other Ceph hosts
$ ceph-deploy admin ceph-mon ceph-3 ceph-2 ceph-1
Verify that Ceph was installed successfully
$ ceph -s
or
$ ceph health
cluster 64885d4a-4c31-43dd-abae-4de4ec8062a9
health HEALTH_OK
monmap e1: 1 mons at {ceph-mon=192.168.100.13:6789/0}
election epoch 2, quorum 0 ceph-mon
osdmap e19: 3 osds: 3 up, 3 in
pgmap v34: 128 pgs, 1 pools, 0 bytes data, 0 objects
104 MB used, 9078 MB / 9182 MB avail
128 active+clean
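Since `ceph health` prints a status word (HEALTH_OK, HEALTH_WARN, HEALTH_ERR) as the first token, a simple watchdog can be scripted around it. This is a hypothetical sketch, not part of the original steps:

```shell
# Hypothetical watchdog: treat anything other than HEALTH_OK as unhealthy.
ceph_is_healthy() {
    # $1 = output of `ceph health`
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}

# Usage (assumes a node with a readable admin keyring):
#   ceph_is_healthy "$(ceph health)" || echo "cluster not healthy"
ceph_is_healthy "HEALTH_OK" && echo healthy
ceph_is_healthy "HEALTH_WARN too few PGs per OSD (21 < min 30)" || echo degraded
```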
==Note==
# If "too few PGs per OSD (xx < min xx)" appears, increase the pg_num and
# pgp_num values; note that the new values can only be larger than the current ones.
$ ceph osd pool set rbd pg_num 128
$ ceph osd pool set rbd pgp_num 128
cluster 64885d4a-4c31-43dd-abae-4de4ec8062a9
health HEALTH_WARN
64 pgs degraded
64 pgs stuck degraded
64 pgs stuck inactive
64 pgs stuck unclean
64 pgs stuck undersized
64 pgs undersized
too few PGs per OSD (21 < min 30)
monmap e1: 1 mons at {ceph-mon=192.168.100.13:6789/0}
election epoch 2, quorum 0 ceph-mon
osdmap e11: 3 osds: 3 up, 3 in
pgmap v19: 64 pgs, 1 pools, 0 bytes data, 0 objects
100792 kB used, 9084 MB / 9182 MB avail
64 undersized+degraded+peered
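A commonly cited rule of thumb for picking pg_num (hedged; check the Ceph placement-group documentation for the authoritative guidance) is roughly 100 PGs per OSD divided by the replica count, rounded up to the next power of two:

```shell
# Rule-of-thumb PG count: (OSDs * 100 / replicas), rounded up to the
# next power of two. A sketch only; tune against the Ceph PG docs.
suggest_pg_num() {
    osds=$1; replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"
}

suggest_pg_num 3 2   # 3 OSDs, replica size 2
# → 256
```

With the 3-OSD, size-2 cluster above this suggests 256, which is why setting pg_num to 128 is enough to clear the "too few PGs" warning but still on the small side.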
# If degraded, stuck inactive, stuck unclean, or undersized PGs appear,
# try re-adjusting the OSD weights.
# First list all the OSDs
$ ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 3.00000 root default
-2 1.00000     host ceph-1
 0 1.00000         osd.0        up  1.00000          1.00000
-3 1.00000     host ceph-2
 1 1.00000         osd.1        up  1.00000          1.00000
-4 1.00000     host ceph-3
 2 1.00000         osd.2        up  1.00000          1.00000
# Reset the weights
$ ceph osd crush reweight osd.0 1
$ ceph osd crush reweight osd.1 1
$ ceph osd crush reweight osd.2 1
cluster 64885d4a-4c31-43dd-abae-4de4ec8062a9
 health HEALTH_WARN
        128 pgs degraded
        128 pgs stuck inactive
        128 pgs stuck unclean
        128 pgs undersized
 monmap e1: 1 mons at {ceph-mon=192.168.100.13:6789/0}
        election epoch 2, quorum 0 ceph-mon
 osdmap e11: 3 osds: 3 up, 3 in
  pgmap v19: 128 pgs, 1 pools, 0 bytes data, 0 objects
        100792 kB used, 9084 MB / 9182 MB avail
             128 undersized+degraded+peered
# Restart the Ceph services (Upstart)
$ sudo start ceph-osd-all
$ sudo start ceph-mon-all
$ sudo start ceph-mds-all
$ sudo start ceph-osd id={id}
$ sudo start ceph-mon id={hostname}
$ sudo start ceph-mds id={hostname}