
Showing posts from May 2016

OpenStack - nova live migration - Live Migration failure: operation failed xxxx: Connection refused

Today we run a live migration from the command line (we are using Ceph RBD as the instances' backend storage here):

$ nova live-migration instance-id compute-hostname

# Checking nova-compute.log afterwards turns up:
Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute-hostname/system: unable to connect to server at 'compute-hostname:16509': Connection refused

# Fix: adjust the relevant libvirt config parameters
$ vim /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

$ vim /etc/default/libvirt-bin
libvirtd_opts="-d -l"

# Restart the libvirt service
$ restart libvirt-bin

# The migration should now succeed
$ nova live-migration instance-id compute-hostname
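Before rerunning the migration, it can help to confirm that libvirtd is actually listening on TCP port 16509. A minimal check, assuming net-tools and the libvirt client are available on the compute nodes:

# libvirtd should now be bound to 16509
$ netstat -lntp | grep 16509
# Try the exact URI nova uses, from the source compute node
$ virsh -c qemu+tcp://compute-hostname/system hostname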

How to use Ceph as the Nova backend storage

On the ceph node:

# Create a ceph pool named vms for nova
$ sudo ceph osd pool create vms 128

# Create a user client.nova, set its permissions on the pools, and generate a keyring file
$ ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring

# Copy ceph.conf and the nova keyring to the OpenStack controller
$ scp /etc/ceph/ceph.conf root@<openstack node ip>:/etc/ceph
$ scp /etc/ceph/ceph.client.nova.keyring root@<openstack node ip>:/etc/ceph

# Copy the client.nova key to /tmp/client.nova.key on the controller
$ ceph auth get-key client.nova | ssh <openstack node ip> tee /tmp/client.nova.key

# Add a secret key to libvirt
$ uuidgen
7ad56fa3-23e2-4d86-ae9e-253326a704f5
$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>7ad56fa3-23e2-4d86-ae9e-253326a704f5</uuid>
  <usage type='ceph'>
    <name>client.no…
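The excerpt cuts off inside secret.xml. For context, the usual continuation in the Ceph/OpenStack integration guides registers the secret with libvirt and then points nova at the pool; a sketch, not this post's exact text, reusing the UUID generated above:

$ sudo virsh secret-define --file secret.xml
$ sudo virsh secret-set-value --secret 7ad56fa3-23e2-4d86-ae9e-253326a704f5 --base64 $(cat /tmp/client.nova.key)

# nova.conf on each compute node then typically gets:
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova
rbd_secret_uuid = 7ad56fa3-23e2-4d86-ae9e-253326a704f5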

OpenStack - how to use multi-backend storage with Cinder

Say we want to use both LVM and Ceph as our Cinder storage. Before editing cinder.conf there is some groundwork, such as installing the LVM packages and setting up the Ceph accounts and permissions; see "Installing OpenStack from open source with linuxbridge and VXLAN – All In One (kilo)" and "How to use Ceph as the Cinder backend storage".

In cinder.conf:

[DEFAULT]
enabled_backends = rbd,lvm

[lvm]
volume_backend_name = lvm
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[rbd]
volume_backend_name = rbd
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

Then register the types with the cinder CLI:

# Create the lvm and ceph types
$ cinder type-create lvm
$ cinder type-create ceph
# Point the lvm type at the lvm backend
$ cinder type-key lvm set volume_backend_name=lvm
# Set the ceph type's bac…
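The last command is cut off; it presumably mirrors the lvm one. A sketch of the remaining wiring plus a quick smoke test (the volume names are illustrative, and rbd is the backend name from the [rbd] section above):

$ cinder type-key ceph set volume_backend_name=rbd
# Create one volume per type to confirm the scheduler lands each on the right backend
$ cinder create --volume-type lvm --display-name test-lvm 1
$ cinder create --volume-type ceph --display-name test-ceph 1
$ cinder list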

How to use Ceph as the Cinder backend storage

On the ceph node:

# Create a ceph pool named volumes for cinder
$ sudo ceph osd pool create volumes 128

# Create a user client.cinder, set its permissions on the pools, and generate a keyring file
$ sudo ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.cinder.keyring

# Copy ceph.conf and the cinder keyring to the OpenStack controller
$ scp /etc/ceph/ceph.conf root@<openstack node ip>:/etc/ceph
$ scp /etc/ceph/ceph.client.cinder.keyring root@<openstack node ip>:/etc/ceph

# Copy the client.cinder key to /tmp/client.cinder.key on the controller
$ ceph auth get-key client.cinder | ssh <openstack node ip> tee /tmp/client.cinder.key

# Add a secret key to libvirt
$ uuidgen
d65bc749-e160-44d3-a903-9c50291b4293
$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>d65bc749-e160-44d3-a903-9c50291b4293</uuid>
…
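The snippet ends inside secret.xml. The standard continuation from the Ceph documentation completes the XML and loads the key into libvirt on the compute node; a sketch using the UUID generated above (the rbd_secret_uuid in cinder.conf must then match it):

  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
$ sudo virsh secret-define --file secret.xml
$ sudo virsh secret-set-value --secret d65bc749-e160-44d3-a903-9c50291b4293 --base64 $(cat /tmp/client.cinder.key)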

LVM - deleting a Cinder volume fails with "remove ioctl on failed: Device or resource busy"

$ lvdisplay
--- Logical volume ---
LV Path                /dev/cinder-volumes/volume-8394a77f-c9b6-426d-b31d-33d0682b394f
LV Name                volume-8394a77f-c9b6-426d-b31d-33d0682b394f
VG Name                cinder-volumes
LV UUID                vWeepZ-rbPL-tKas-Om26-ziQg-F3vF-aINUXE
LV Write Access        read/write
LV Creation host, time controller, 2016-05-30 11:59:02 +0800
LV Status              available
# open                 1
LV Size                1.00 GiB
Current LE             256
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           252:0

# We want to delete volume-8394a77f-c9b6-426d-b31d-33d0682b394f, but it errors out
$ lvremove /dev/cinder-volumes/volume-8394a77f-c9b6-426d-b31d-33d0682b394f
device-mapper: remove ioctl on failed: Device or resource busy
# Forcing it fails the same way
$ lvremove -f /dev/cinder-volumes/volume-8394a77f-c9b6-426d-b31d-33d0682b394f
device-mapper: remove i…
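The "# open 1" in the lvdisplay output is the clue: something still holds the device open. A hedged way to hunt down the holder before retrying lvremove (252:0 is the block device from the output above; the tgt check covers the common case where cinder's iSCSI target is still exporting the volume):

# Any device-mapper node stacked on top of this one shows up here
$ ls /sys/dev/block/252:0/holders/
# Any process with the volume open shows up here
$ sudo lsof /dev/cinder-volumes/volume-8394a77f-c9b6-426d-b31d-33d0682b394f
# If tgt still exports the volume as an iSCSI target, remove that target first
$ sudo tgtadm --lld iscsi --mode target --op show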

How to use Ceph as the Glance backend storage

On the ceph node:

# Create a ceph pool named images for glance
$ sudo ceph osd pool create images 128

# Create a user client.glance, set its permissions on the images pool, and generate a keyring file
$ sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring

# Copy ceph.conf and the glance keyring to the OpenStack controller
$ scp /etc/ceph/ceph.conf root@<openstack node ip>:/etc/ceph
$ scp /etc/ceph/ceph.client.glance.keyring root@<openstack node ip>:/etc/ceph

Our glance service runs on the OpenStack controller.

# Add a glance section to ceph.conf on the OpenStack controller
$ vim /etc/ceph/ceph.conf
[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring

# Fix the permissions
$ chmod 0640 /etc/ceph/ceph.client.glance.keyring
$ chown glance:glance /etc/ceph/ceph.client.glance.keyring

# Edit /etc/glance/glance-api.conf
$ sudo cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
$ sudo vim /etc/gl…
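The excerpt stops at the glance-api.conf edit. For reference, the rbd store settings a Kilo-era setup like this typically ends with look like the following; a sketch of the glance_store options, using the pool and user created above:

[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

# Then restart the API service to pick up the change
$ sudo service glance-api restart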

CEPH - installing radosgw manually on Ubuntu

If the node has never had Ceph installed, first add the following to the apt sources:

$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/autobuild.asc | sudo apt-key add -
$ echo deb http://gitbuilder.ceph.com/apache2-deb-$(lsb_release -sc)-x86_64-basic/ref/master $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-apache.list
$ echo deb http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-$(lsb_release -sc)-x86_64-basic/ref/master $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-fastcgi.list

We are installing radosgw on ceph-mon here, so we skip the steps above and install directly:

$ sudo apt-get update && sudo apt-get install apache2 libapache2-mod-fastcgi
# Add a ServerName {fqdn} line; here it is ceph-mon
$ sudo vim /etc/apache2/apache2.conf
ServerName ceph-mon
$ sudo a2enmod rewrite
$ sudo a2enmod fastcgi
$ sudo service apache2 restart

# Install radosgw and radosgw-agent
$ sudo apt-get install radosgw radosgw-agent

On ceph-mon, do the following:

# Create the user, set its permissions, and generate a keyring under /etc/ceph
$ sudo ceph auth get-or-create client…
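The final command is truncated. The conventional client name in the Ceph docs of that era is client.radosgw.gateway, so the full command presumably resembles the following; the name and keyring path are assumptions, not this post's exact text:

# client.radosgw.gateway and the keyring path are assumed, per the Ceph manual-install guide
$ sudo ceph auth get-or-create client.radosgw.gateway osd 'allow rwx' mon 'allow rwx' -o /etc/ceph/ceph.client.radosgw.keyring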