Installing OpenStack from source with Linux Bridge and VXLAN – All In One (Kilo)

Environment notes
Reference: http://www.chenshake.com/centos-7-x-openstack-liberty-linux-bridgevxlan/

A VM is created in VirtualBox with two NICs: the first set to Bridged Adapter mode and the second to NAT Network mode.





OS: Ubuntu 14.04
Kernel: 3.19.0-30-generic

Host        Private IP (Management IP) / Interface (eth0)    Public IP / Interface (eth1)
controller  172.20.3.53                                      -

Edit /etc/hosts so the host can be reached by name
$ cat >> /etc/hosts << EOF
172.20.3.53 controller
EOF
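
As a quick sanity check (nothing OpenStack-specific), confirm the name now resolves:
$ getent hosts controller
172.20.3.53     controller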

First, update the packages on host controller
$ apt-get update; apt-get dist-upgrade -y;reboot

Set some environment variables
$ cat >> .bashrc << EOF
MY_IP=172.20.3.53
MY_PRIVATE_IP=172.20.3.53
MY_PUBLIC_IP=172.20.3.53
CONTROLLER_IP=172.20.3.53
EOF
$ source .bashrc

Install RabbitMQ and related settings
# Install the rabbitmq-server package
$ apt-get install -y rabbitmq-server

# We use the default guest user here; to add a new user, run:
$ rabbitmqctl add_user newUser password

# Grant newUser access permissions:
$ rabbitmqctl set_permissions newUser ".*" ".*" ".*"
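
# To confirm the user and its permissions, list them (standard rabbitmqctl commands):
$ rabbitmqctl list_users
$ rabbitmqctl list_permissions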

=============================== separator ===========================

# Change the guest user's password
$ rabbitmqctl change_password guest password

# Bind rabbitmq-server to the private IP and fix the file permissions; restart the service when done

$ cat >> /etc/rabbitmq/rabbitmq-env.conf << EOF
RABBITMQ_NODE_IP_ADDRESS=$MY_PRIVATE_IP
EOF
$ chmod 644 /etc/rabbitmq/rabbitmq-env.conf
$ service rabbitmq-server restart

Install MySQL and related settings
# Install the mysql-server package
$ apt-get install -y mysql-server

# Adjust a few basic settings
$ sed -i "s/127.0.0.1/$MY_PRIVATE_IP\nskip-name-resolve\ncharacter-set-server = utf8\ncollation-server = utf8_general_ci\ninit-connect = 'SET NAMES utf8'/g" /etc/mysql/my.cnf

# The resulting settings in my.cnf are:
bind-address = 172.20.3.53
skip-name-resolve 
character-set-server = utf8 
collation-server = utf8_general_ci 
init-connect = 'SET NAMES utf8'
=============================== separator ===========================

# Restart MySQL and run the security setup
$ service mysql restart

$ mysql_secure_installation
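
# After the restart, MySQL should listen on the private IP instead of 127.0.0.1.
# A quick check (netstat comes from the net-tools package); mysqld should show as bound to 172.20.3.53:3306:
$ netstat -lntp | grep 3306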

==Keystone==

Create the keystone database, user, and related permissions
$ mysql -u root -ppassword -e "create database keystone;"

$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"

$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"
=============================== separator ===========================
mysql -u root -ppassword -e "create database keystone;"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"

Before installing Keystone, install its dependencies
$ apt-get install -y python-dev libmysqlclient-dev libffi-dev libssl-dev python-pip git

$ pip install pip==7.1.2
$ pip install python-openstackclient==1.0.5
$ pip install repoze.lru pbr mysql-python

Create the users for the OpenStack services with the following script
# Create a script named createOpenstackServiceUsers.sh
$ vim createOpenstackServiceUsers.sh

# Paste the following content into the script
#!/bin/bash
for SERVICE in keystone glance neutron nova horizon cinder
do
    useradd --home-dir "/var/lib/$SERVICE" --create-home --system --shell /bin/false $SERVICE

    # Create essential dirs
    mkdir -p /var/log/$SERVICE
    mkdir -p /etc/$SERVICE

    # Set ownership of the dirs
    chown -R $SERVICE:$SERVICE /var/log/$SERVICE
    chown -R $SERVICE:$SERVICE /var/lib/$SERVICE
    chown $SERVICE:$SERVICE /etc/$SERVICE

    # Some neutron only dirs
    if [ "$SERVICE" = 'neutron' ]
    then
        mkdir -p /etc/neutron/plugins/ml2
        mkdir -p /etc/neutron/rootwrap.d
        chown -R neutron:neutron /etc/neutron/plugins
    fi

    if [ "$SERVICE" = 'glance' ]
    then
        mkdir -p /var/lib/glance/images
        mkdir -p /var/lib/glance/scrubber
        mkdir -p /var/lib/glance/image-cache
        chown -R glance:glance /var/lib/glance/
    fi
done

# Run the script
$ sh createOpenstackServiceUsers.sh

Now for the main part: installing the Kilo release of Keystone
# Clone the Keystone stable/kilo source code
$ git clone https://github.com/openstack/keystone.git -b stable/kilo

# Copy the Keystone config templates into /etc/keystone/
$ cp -R keystone/etc/* /etc/keystone/

$ cd keystone

# Install the required dependencies
$ sudo pip install -r requirements.txt

# Finally, install Keystone
$ python setup.py install
=============================== separator ===========================
git clone https://github.com/openstack/keystone.git -b stable/kilo
cp -R keystone/etc/* /etc/keystone/
cd keystone
sudo pip install -r requirements.txt
python setup.py install


Generate a random token, configure keystone.conf, and create the tables Keystone needs with keystone-manage
# Generate a random token:
$ openssl rand -hex 10
9c3c8d455f9d340e1f6a

# Rename keystone.conf.sample under /etc/keystone to keystone.conf, then edit it
$ mv /etc/keystone/keystone.conf.sample /etc/keystone/keystone.conf

# Set the connection credentials for the keystone database
$ sed -i "s|database]|database]\nconnection = mysql://keystone:keystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf

# Use the random token generated above as the admin_token
$ sed -i 's/#admin_token = ADMIN/admin_token = 9c3c8d455f9d340e1f6a/g' /etc/keystone/keystone.conf

# The commands above change admin_token and the [database] section of /etc/keystone/keystone.conf to:
admin_token = 9c3c8d455f9d340e1f6a
[database]

connection = mysql://keystone:keystone@172.20.3.53/keystone

# Finally, use keystone-manage to sync the database and create the tables
$ cd ~

$ su -s /bin/sh -c "keystone-manage db_sync" keystone

=============================== separator ===========================
mv /etc/keystone/keystone.conf.sample /etc/keystone/keystone.conf
sed -i "s|database]|database]\nconnection = mysql://keystone:keystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf
sed -i 's/#admin_token = ADMIN/admin_token = 9c3c8d455f9d340e1f6a/g' /etc/keystone/keystone.conf
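
If db_sync succeeded, the keystone database now contains the service's tables; a quick check using the DB credentials created earlier:
$ mysql -u keystone -pkeystone keystone -e "SHOW TABLES;"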

Configure log rotation for the Keystone service
$ cat >> /etc/logrotate.d/keystone << EOF
/var/log/keystone/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the Keystone upstart script and start Keystone
$ cat > /etc/init/keystone.conf << EOF
description "Keystone API server"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec start-stop-daemon --start --chuid keystone --chdir /var/lib/keystone --name keystone --exec /usr/local/bin/keystone-all -- --config-file=/etc/keystone/keystone.conf --log-file=/var/log/keystone/keystone.log
EOF

$ start keystone

# Check that Keystone started; if not, run it in the foreground with the command below to see what went wrong
$ ps aux | grep keystone

$ sudo -u keystone /usr/local/bin/keystone-all --config-file=/etc/keystone/keystone.conf --log-file=/var/log/keystone/keystone.log

Create an environment-variable script so the openstack CLI can be used with admin rights; set OS_TOKEN to the admin_token value configured in /etc/keystone/keystone.conf
$ cat >> openrc_admin_v3 << EOF
export OS_TOKEN=9c3c8d455f9d340e1f6a
export OS_URL=http://$MY_IP:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

$ source openrc_admin_v3

If a command fails with locale.Error: unsupported locale setting, run the following:
$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ locale-gen en_US.UTF-8
$ sudo dpkg-reconfigure locales
=============================== separator ===========================
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales

We use Keystone v3 for our endpoints; to use v2.0, see the separate article.
# Create the keystone identity service
$ openstack service create --name keystone --description "OpenStack Identity" identity

# Create the keystone public/internal/admin endpoints
$ openstack endpoint create --region RegionOne identity public http://controller:5000/v3
$ openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
$ openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

# Create the admin project
$ openstack project create --domain default --description "Admin Project" admin

# Create the admin user
$ openstack user create admin --domain default --password password

# Create the admin role
$ openstack role create admin

# Assign the admin role to the admin user in the admin project
$ openstack role add --project admin --user admin admin

# Create the service project
$ openstack project create --domain default --description "Service Project" service

# Create the demo project
$ openstack project create --domain default --description "Demo Project" demo

# Create the demo user
$ openstack user create demo --domain default --password password


# Create the user role

$ openstack role create user

# Assign the user role to the demo user in the demo project
$ openstack role add --project demo --user demo user

Once the steps above are done, verify that the Keystone service works
$ cat >> adminrc << EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

$ unset OS_TOKEN OS_URL

$ source adminrc

$ openstack token issue

# On success you should see something like:
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-05-06T08:11:26.737320Z      |
| id         | 7aa02151b545412182fb93927aa46cf6 |
| project_id | f90a4545a1264f14a806c13c91057383 |
| user_id    | 051e0c7c171241c39094c4666bcbc3d9 |
+------------+----------------------------------+
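
# As a further check that the admin credentials work, list the projects and users created above:
$ openstack project list
$ openstack user list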

==Glance==

Check that the following subdirectories exist under /var/lib/glance/; if not, create them manually
$ mkdir -p /var/lib/glance/images
$ mkdir -p /var/lib/glance/scrubber
$ mkdir -p /var/lib/glance/image-cache
$ chown -R glance:glance /var/lib/glance/
=============================== separator ===========================
mkdir -p /var/lib/glance/images
mkdir -p /var/lib/glance/scrubber
mkdir -p /var/lib/glance/image-cache

chown -R glance:glance /var/lib/glance/

Create the glance database, user, and related permissions
$ mysql -u root -ppassword -e "create database glance;"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';"
=============================== separator ===========================
mysql -u root -ppassword -e "create database glance;"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';"

Install Glance
$ git clone https://github.com/openstack/glance.git -b stable/kilo

$ cp -R glance/etc/* /etc/glance/

$ cd glance

$ sudo pip install -r requirements.txt

$ python setup.py install && cd
=============================== separator ===========================
git clone https://github.com/openstack/glance.git -b stable/kilo
cp -R glance/etc/* /etc/glance/
cd glance
sudo pip install -r requirements.txt
python setup.py install


Register the Glance service and endpoints with Keystone
# Create the glance image service
$ openstack service create --name glance --description "OpenStack Image service" image

# Create the glance public/internal/admin endpoints
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292

# Create the glance user
$ openstack user create glance --domain default --password glance

# Assign the admin role to the glance user in the service project
$ openstack role add --project service --user glance admin


Configure /etc/glance/glance-api.conf; change or add the following parameters
[DEFAULT]
verbose = True

notification_driver = noop

[database]
connection = mysql://glance:glance@controller/glance


[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance

revocation_cache_time = 10

[paste_deploy]

flavor = keystone

Configure /etc/glance/glance-registry.conf; change or add the following parameters
[DEFAULT]
verbose = True
notification_driver = noop

[database]
connection = mysql://glance:glance@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

Initialize the glance database
$ su -s /bin/sh -c "glance-manage db_sync" glance

Configure log rotation for the Glance service
$ cat >> /etc/logrotate.d/glance << EOF
/var/log/glance/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the Glance upstart scripts
# Create the glance-api service script
$ cat >> /etc/init/glance-api.conf << EOF
description "Glance API server"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec start-stop-daemon --start --chuid glance --exec /usr/local/bin/glance-api -- --config-file=/etc/glance/glance-api.conf --config-file=/etc/glance/glance-api-paste.ini
EOF

# Create the glance-registry service script
$ cat >> /etc/init/glance-registry.conf << EOF
description "Glance registry server"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec start-stop-daemon --start --chuid glance --exec /usr/local/bin/glance-registry -- --config-file=/etc/glance/glance-registry.conf --config-file=/etc/glance/glance-registry-paste.ini
EOF

Start Glance
$ glance-control all start

Or start the services separately
$ start glance-api

$ start glance-registry

# Check that Glance is running
$ ps aux | grep glance

If no glance process is found, run the services in the foreground to see the errors:
$ sudo -u glance glance-api --config-file=/etc/glance/glance-api.conf --config-file=/etc/glance/glance-api-paste.ini


$ sudo -u glance glance-registry --config-file=/etc/glance/glance-registry.conf --config-file=/etc/glance/glance-registry-paste.ini

List images with the glance or openstack CLI
$ cat >> ~/adminrc << EOF
export OS_IMAGE_API_VERSION=2
EOF

$ source ~/adminrc

$ glance image-list
# or
$ openstack image list
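
The list is empty at this point. As an optional check that Glance can actually store images, you can upload a small CirrOS test image; the URL and image name below are only an example:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ openstack image create "cirros-0.3.4" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
$ openstack image list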

==Nova==
Create the nova database, user, and related permissions
$ mysql -u root -ppassword -e "create database nova;"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
=============================== separator ===========================
mysql -u root -ppassword -e "create database nova;"

mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"

Install Nova
$ apt-get install -y libpq-dev python-libvirt libxml2-dev libxslt1-dev
$ git clone https://github.com/openstack/nova.git -b stable/kilo
$ cd nova
$ cp -r etc/nova/* /etc/nova/
$ sudo pip install -r requirements.txt
$ python setup.py install
$ cd ~
=============================== separator ===========================
apt-get install -y libpq-dev python-libvirt libxml2-dev libxslt1-dev
git clone https://github.com/openstack/nova.git -b stable/kilo
cd nova
cp -r etc/nova/* /etc/nova/
sudo pip install -r requirements.txt
python setup.py install
cd ~

Register the Nova service and endpoints with Keystone
# Create the nova compute service
$ openstack service create --name nova --description "OpenStack Compute" compute

# Create the nova public/internal/admin endpoints
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s

$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s

$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s

# Create the nova user
$ openstack user create nova --domain default --password nova

# Assign the admin role to the nova user in the service project
$ openstack role add --project service --user nova admin

If you want to generate a stock nova.conf, you can use the commands below; since we write nova.conf directly here, this step is skipped.
$ apt-get install python-tox
$ tox -egenconfig

Edit nova.conf
$ cat > /etc/nova/nova.conf << EOF
[DEFAULT]
verbose = True
log_dir = /var/log/nova
rpc_backend = rabbit
auth_strategy = keystone
my_ip = $MY_PRIVATE_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
root_helper = sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $MY_PRIVATE_IP

[glance]
host = controller

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[database]
connection = mysql://nova:nova@controller/nova

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
EOF

Next, allow nova to run certain commands without switching to root, using the following script
$ vim novaNoPwdPermission.sh

#!/bin/bash
for SERVICE in nova
do
    cat > '/etc/sudoers.d/'$SERVICE'_sudoers' << EOF
Defaults:$SERVICE !requiretty
$SERVICE ALL = (root) NOPASSWD: /usr/local/bin/$SERVICE-rootwrap /etc/$SERVICE/rootwrap.conf *
EOF

    chown -R $SERVICE:$SERVICE /etc/$SERVICE
    chmod 440 /etc/sudoers.d/${SERVICE}_sudoers
done
chmod 750 /etc/sudoers.d

$ sh novaNoPwdPermission.sh

Configure log rotation for the Nova service
$ cat >> /etc/logrotate.d/nova << EOF
/var/log/nova/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Change the owner of the files under /etc/nova to nova, and create the nova tables
$ chown nova:nova /etc/nova/*.{conf,json,ini}
$ su -s /bin/sh -c "nova-manage db sync" nova

Create the nova-api upstart script
$ cat > /etc/init/nova-api.conf << EOF
start on runlevel [2345]
stop on runlevel [!2345]
exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-api -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-cert upstart script
$ cat > /etc/init/nova-cert.conf << EOF
start on runlevel [2345]
stop on runlevel [!2345]
exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-cert -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-consoleauth upstart script
$ cat > /etc/init/nova-consoleauth.conf << EOF
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /var/run
exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-consoleauth -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-conductor upstart script
$ cat > /etc/init/nova-conductor.conf << EOF
description "Nova conductor"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/nova
  chown nova:root /var/run/nova/
  mkdir -p /var/lock/nova
  chown nova:root /var/lock/nova/
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-conductor -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-scheduler upstart script
$ cat > /etc/init/nova-scheduler.conf << EOF
description "Nova scheduler"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/nova
  chown nova:root /var/run/nova/
  mkdir -p /var/lock/nova
  chown nova:root /var/lock/nova/
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-scheduler -- --config-file=/etc/nova/nova.conf
EOF


Start the Nova services
$ start nova-api
$ start nova-cert
$ start nova-consoleauth
$ start nova-conductor
$ start nova-scheduler

# Check that the Nova services are running
$ ps aux | grep nova


# If any of the nova processes is missing, run it in the foreground to see the error messages
$ sudo -u nova nova-api --config-file=/etc/nova/nova.conf
$ sudo -u nova nova-cert --config-file=/etc/nova/nova.conf
$ sudo -u nova nova-consoleauth --config-file=/etc/nova/nova.conf
$ sudo -u nova nova-conductor --config-file=/etc/nova/nova.conf

$ sudo -u nova nova-scheduler --config-file=/etc/nova/nova.conf


Configure nova-compute
First, determine the appropriate libvirt type with the command below.
If this command returns a value of one or greater, your compute node supports hardware acceleration which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

$ egrep -c '(vmx|svm)' /proc/cpuinfo

Install the packages
$ apt-get install -y libvirt-bin qemu-kvm libpq-dev python-libvirt python-libguestfs libguestfs-tools

Create nova-compute.conf
$ cat > /etc/nova/nova-compute.conf << EOF
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
resume_guests_state_on_host_boot=true
vnc_enabled = True
novncproxy_base_url = http://$MY_PRIVATE_IP:6080/vnc_auto.html

[libvirt]
virt_type=qemu
EOF

Create the directories nova-compute needs
$ mkdir /var/lib/nova/keys
$ mkdir /var/lib/nova/locks
$ mkdir /var/lib/nova/instances
$ chown -R nova:nova /var/lib/nova

Create the nova-compute upstart script
$ cat > /etc/init/nova-compute.conf << EOF
description "Nova compute worker"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/nova
  chown nova:root /var/run/nova/
  mkdir -p /var/lock/nova
  chown nova:root /var/lock/nova/
  modprobe nbd
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-compute -- --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf
EOF

Start nova-compute
$ usermod -a -G libvirtd nova

$ start nova-compute

# Check that nova-compute is running
$ ps aux|grep nova


# If not, run it in the foreground to see the error messages

$ sudo -u nova nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf

If the error "HypervisorUnavailable: Connection to the hypervisor is broken on host" appears:

# First check whether the nova user has been added to the libvirtd group
$ getent group libvirtd

# If that still does not work, try the following
$ vim /etc/libvirt/libvirtd.conf
Change unix_sock_rw_perms = "0770" to unix_sock_rw_perms = "0777"

$ /etc/init.d/libvirt-bin restart
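
With nova-compute up, all Nova services should now report as up. A quick check, assuming the python-novaclient CLI is available (pip install python-novaclient if it is not):
$ nova service-list
# nova-cert, nova-consoleauth, nova-scheduler, nova-conductor and nova-compute should all show State "up"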

==Neutron==
Before installing Neutron, set the kernel network parameters by adding the following to /etc/sysctl.conf
$ vim /etc/sysctl.conf

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

# When done editing, load the parameters
$ sudo sysctl -p

=============================== separator ===========================
# Case 1
# If the following messages appear
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

# Run the following to enable the br_netfilter module
$ modprobe br_netfilter

# Check that it loaded
$ lsmod |grep  br_netfilter
br_netfilter           20480  0

bridge                110592  1 br_netfilter

# Case 2
# If the following message appears, make sure your kernel version is 3.19.0-15 or newer
modprobe: FATAL: Module br_netfilter not found.

$ uname -r

# If the kernel is older, update it
$ apt-get update

# Search for the version you want to install
$ apt-cache search linux-image

$ apt-get install linux-image-xxxversion

$ reboot

# Remove the old kernel
$ apt-get remove linux-image-xxxversion

Create the neutron database, user, and related permissions
$ mysql -u root -ppassword -e "create database neutron;"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';"
=============================== separator ===========================

mysql  -u root -ppassword -e "create database neutron;"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';"

Install Neutron
$ apt-get install -y git ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev libmysqlclient-dev

$ git clone https://github.com/openstack/neutron.git -b stable/kilo
$ cp neutron/etc/* /etc/neutron/
$ cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
$ cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d

$ cd neutron
$ pip install -r requirements.txt
$ python setup.py install && cd
=============================== separator ===========================

apt-get install -y git ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev libmysqlclient-dev
git clone https://github.com/openstack/neutron.git -b stable/kilo
cp neutron/etc/* /etc/neutron/
cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d
cd neutron
pip install -r requirements.txt
python setup.py install && cd

Next, allow neutron to run certain commands without switching to root, using the following script
$ vim neutronNoPwdPermission.sh

#!/bin/bash
for SERVICE in neutron
do
    cat > '/etc/sudoers.d/'$SERVICE'_sudoers' << EOF
Defaults:$SERVICE !requiretty
$SERVICE ALL = (root) NOPASSWD: /usr/local/bin/$SERVICE-rootwrap /etc/$SERVICE/rootwrap.conf *
EOF

    chown -R $SERVICE:$SERVICE /etc/$SERVICE
    chmod 440 /etc/sudoers.d/${SERVICE}_sudoers
done
chmod 750 /etc/sudoers.d

$ sh neutronNoPwdPermission.sh

Register the Neutron service and endpoints with Keystone
# Create the neutron network service
$ openstack service create --name neutron --description "OpenStack Networking" network

# Create the neutron public/internal/admin endpoints
$ openstack endpoint create --region RegionOne network public http://controller:9696

$ openstack endpoint create --region RegionOne network internal http://controller:9696

$ openstack endpoint create --region RegionOne network admin http://controller:9696

# Create the neutron user
$ openstack user create neutron --domain default --password neutron

# Assign the admin role to the neutron user in the service project
$ openstack role add --project service --user neutron admin

Edit /etc/neutron/neutron.conf
$ rm /etc/neutron/neutron.conf

$ SERVICE_TENANT_ID=`openstack project show service | awk '/ id / { print $4 }'`
$ cat > /etc/neutron/neutron.conf << EOF
[DEFAULT]
verbose = True
debug = True
rpc_backend=rabbit
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
notification_driver=neutron.openstack.common.notifier.rpc_notifier

[nova]
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova
auth_url = http://controller:35357

[agent]
root_helper=sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
auth_plugin = password

[database]
connection = mysql://neutron:neutron@controller/neutron

[oslo_concurrency]
lock_path = /var/lock/neutron/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password
EOF

Edit /etc/neutron/plugins/ml2/ml2_conf.ini
$ rm /etc/neutron/plugins/ml2/ml2_conf.ini

$ cat > /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
type_drivers = flat,vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1001:2000

[securitygroup]
enable_ipset = True
EOF

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
$ cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << EOF
[linux_bridge]
physical_interface_mappings = public:eth1

[vxlan]
enable_vxlan = True
local_ip = 172.20.3.53
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

Edit /etc/neutron/dhcp_agent.ini
$ vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True

Edit /etc/neutron/metadata_agent.ini
$ vim /etc/neutron/metadata_agent.ini

[DEFAULT]
auth_uri = http://controller:5000
auth_url = http://controller:35357
nova_metadata_ip = controller
metadata_proxy_shared_secret = password
user_domain_id = default
project_domain_id = default
auth_region = RegionOne
auth_plugin = password
admin_tenant_name = service
username = neutron
password = neutron
verbose = True

Edit /etc/neutron/l3_agent.ini
$ vim /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True

Edit /etc/nova/nova.conf
$ vim /etc/nova/nova.conf

# Add a [neutron] section
[neutron]
url = http://controller:9696
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
username = neutron
password = neutron
project_name = service
auth_strategy = keystone
user_domain_id = default
project_domain_id = default
region_name = RegionOne
metadata_proxy_shared_secret = password
service_metadata_proxy = True


Create the neutron plugin soft link and sync the database
# Fix file ownership
$ chown neutron:neutron /etc/neutron/*.{conf,json,ini}

$ chown -R neutron:neutron /etc/neutron/plugins

$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
$ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Configure log rotation for the Neutron service
$ cat >> /etc/logrotate.d/neutron << EOF
/var/log/neutron/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the neutron-server upstart script
$ cat > /etc/default/neutron-server << EOF
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
EOF

$ cat > /etc/init/neutron-server.conf << EOF
# vim:set ft=upstart ts=2 et:
start on runlevel [2345]
stop on runlevel [!2345]
script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  [ -r "\$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file \$NEUTRON_PLUGIN_CONFIG"
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-server -- \
    --config-file /etc/neutron/neutron.conf \
    --log-file /var/log/neutron/server.log \$CONF_ARG
end script
EOF

Create the neutron-l3-agent upstart script
$ cat > /etc/init/neutron-l3-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
respawn
start on runlevel [2345]
stop on runlevel [!2345]
script
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-l3-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/l3-agent.log
end script
EOF

Create the neutron-dhcp-agent upstart script
$ cat > /etc/init/neutron-dhcp-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
start on runlevel [2345]
stop on runlevel [!2345]
script
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-dhcp-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/dhcp-agent.log
end script
EOF

Create the neutron-metadata-agent upstart script
$ cat > /etc/init/neutron-metadata-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
start on runlevel [2345]
stop on runlevel [!2345]
script
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-metadata-agent -- \
    --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini \
    --log-file=/var/log/neutron/metadata-agent.log
end script
EOF

Create the neutron-linuxbridge-agent upstart script
$ cat > /etc/init/neutron-linuxbridge-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
#start on runlevel [2345]
#stop on runlevel [!2345]
script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  [ -r "\$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file \$NEUTRON_PLUGIN_CONFIG"
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-linuxbridge-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/ml2/linuxbridge_agent.ini --log-file=/var/log/neutron/linuxbridge_agent.log \$CONF_ARG
end script
EOF

Start the Neutron services
$ restart nova-api
$ start neutron-server
$ start neutron-linuxbridge-agent
$ start neutron-dhcp-agent
$ start neutron-l3-agent
$ start neutron-metadata-agent

=============================== separator ===========================
restart nova-api
start neutron-server
start neutron-linuxbridge-agent
start neutron-dhcp-agent
start neutron-l3-agent
start neutron-metadata-agent


# Check that the Neutron services are running
$ ps aux | grep neutron


# If any of the neutron processes is missing, run it in the foreground to see the error messages
$ sudo -u neutron neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file /var/log/neutron/server.log
$ sudo -u neutron neutron-linuxbridge-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/ml2/linuxbridge_agent.ini --log-file=/var/log/neutron/linuxbridge_agent.log
$ sudo -u neutron neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/metadata-agent.log
$ sudo -u neutron neutron-dhcp-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/dhcp-agent.log

$ sudo -u neutron neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/l3-agent.log


Verify the Neutron service API
$ neutron ext-list

+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| port-security         | Port Security                                 |
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| net-mtu               | Network MTU                                   |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| subnet_allocation     | Subnet Allocation                             |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+


$ neutron agent-list

+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 751f5921-0e16-47db-acca-1119533c1952 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| 8fba218e-7ab8-4daf-9314-992b13a28a05 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| e2bb7478-fe02-418e-ae79-edec24859cd9 | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
| ee54be6d-82d0-4f59-a971-9c4787921ba1 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
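
The later sections ("How to let an instance ping 8.8.8.8" and the SSH section) assume a public flat network (192.168.100.0/24) and a private VXLAN network (10.0.0.0/24) joined by a router already exist. A sketch of creating them with the neutron CLI; the names, CIDRs, gateway, and allocation pool are only examples:
$ neutron net-create public --shared --router:external --provider:network_type flat --provider:physical_network public
$ neutron subnet-create public 192.168.100.0/24 --name public-subnet --allocation-pool start=192.168.100.50,end=192.168.100.200 --gateway 192.168.100.1 --disable-dhcp
$ neutron net-create private
$ neutron subnet-create private 10.0.0.0/24 --name private-subnet --dns-nameserver 8.8.8.8
$ neutron router-create router1
$ neutron router-interface-add router1 private-subnet
$ neutron router-gateway-set router1 public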


==Horizon==
Install Horizon
$ sudo apt-get install -y python-setuptools python-virtualenv python-dev gettext git gcc libpq-dev python-pip python-tox libffi-dev python-memcache memcached

$ git clone https://github.com/openstack/horizon.git /opt/horizon -b stable/kilo

$ sudo chown -R horizon:horizon /opt/horizon

$ cd /opt/horizon

$ pip install -r requirements.txt

$ python setup.py install

$ cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py
=============================== separator ===========================

sudo apt-get install -y python-setuptools python-virtualenv python-dev gettext git gcc libpq-dev python-pip python-tox libffi-dev python-memcache memcached
git clone https://github.com/openstack/horizon.git /opt/horizon -b stable/kilo
sudo chown -R horizon:horizon /opt/horizon
cd /opt/horizon
pip install -r requirements.txt
python setup.py install
cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py


Modify openstack_dashboard/local/local_settings.py
COMPRESS_OFFLINE = True
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

# Comment out the original CACHES block
#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
#    }
#}

# After the changes, run
$ ./manage.py collectstatic
$ ./manage.py compress

Set up a WSGI-capable web server and its configuration
# Install the packages
$ sudo apt-get install -y apache2 libapache2-mod-wsgi

# Provide Apache2 with a WSGI config file at /etc/apache2/sites-available/horizon.conf
$ ./manage.py make_web_conf --apache | sudo tee /etc/apache2/sites-available/horizon.conf

# Edit /etc/apache2/sites-available/horizon.conf under Apache2's sites-available to:
<VirtualHost *:80>

    DocumentRoot /opt/horizon/

    LogLevel warn
    ErrorLog /var/log/apache2/openstack_dashboard-error.log
    CustomLog /var/log/apache2/openstack_dashboard-access.log combined

    WSGIDaemonProcess horizon user=horizon group=horizon processes=3 threads=10 home=/opt/horizon display-name=%{GROUP}
    WSGIApplicationGroup %{GLOBAL}

    SetEnv APACHE_RUN_USER horizon
    SetEnv APACHE_RUN_GROUP horizon
    WSGIProcessGroup horizon
    WSGIScriptAlias / /opt/horizon/openstack_dashboard/wsgi/django.wsgi

    <Location "/">
        Require all granted
    </Location>

    Alias /static /opt/horizon/static
    <Location "/static">
        SetHandler None
    </Location>

</VirtualHost>

$ sudo a2ensite horizon
$ sudo a2dissite 000-default
$ sudo service apache2 restart
$ chown -R horizon:horizon /opt/horizon/


Open http://controller in a browser; if the OpenStack dashboard loads, Horizon was installed successfully.

==Installing noVNC==

$ git clone git://github.com/kanaka/noVNC
$ mkdir /usr/share/novnc

$ cp -r noVNC/* /usr/share/novnc
$ apt-get install libjs-jquery libjs-sphinxdoc libjs-swfobject libjs-underscore

Create the nova-novncproxy upstart script
$ cat >> /etc/init/nova-novncproxy.conf << EOF
description "Nova novnc proxy worker"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/nova
  chown nova:root /var/run/nova/
  mkdir -p /var/lock/nova
  chown nova:root /var/lock/nova/
  modprobe nbd
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-novncproxy -- --config-file=/etc/nova/nova.conf
EOF

Start the nova-novncproxy service
$ start nova-novncproxy

# Check that nova-novncproxy is running
$ ps aux | grep nova-novncproxy

# If nova-novncproxy is not running, run it in the foreground to see the error messages

$ sudo -u nova nova-novncproxy --config-file=/etc/nova/nova.conf


==How to let an instance ping 8.8.8.8==

# Assume the private network is 10.0.0.0/24, the public network is 192.168.100.0/24, the instance's private IP is 10.0.0.3, and its floating IP is 192.168.100.51
# Use Horizon or the neutron CLI to find the ID of the router the instance uses
$ neutron router-list

# Assume the matching router ID is 9cee2e48-16f5-4703-82bb-7e9b38ff6de3
# List the current router and DHCP namespaces, then enter each to inspect its interfaces
$ ip netns
qrouter-9cee2e48-16f5-4703-82bb-7e9b38ff6de3
qdhcp-a0789fb6-da80-4079-b514-c95b67dda6a0
# Enter the router namespace with the ip command and find the name of the external gateway interface
# Here the external interface turns out to be qg-e61bd68a-0c
$ ip netns exec qrouter-9cee2e48-16f5-4703-82bb-7e9b38ff6de3 bash
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: qg-e61bd68a-0c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:e9:94:38 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.101/24 brd 192.168.100.255 scope global qg-e61bd68a-0c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fee9:9438/64 scope link
       valid_lft forever preferred_lft forever
3: qr-cc10fbd9-cb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:7f:98:79 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-cc10fbd9-cb
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe7f:9879/64 scope link
       valid_lft forever preferred_lft forever

$ ethtool -S qg-e61bd68a-0c
NIC statistics:

     peer_ifindex: 19

$ exit

$ ip a| grep 19:

19: tape61bd68a-0c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master brqca3c0a24-b6 state UP group default qlen 1000

# List the current bridges
$ brctl show

bridge name     bridge id               STP enabled     interfaces
brq5b5ed8d9-f3  8000.5e2cd747775b       no              tapc5f079d5-05
                                                        tapcc10fbd9-cb
                                                        vxlan-1025
brqca3c0a24-b6  8000.08002786030a       no              tap97d5f1f4-17
                                                        tape61bd68a-0c
                                                        vxlan-1095


# Traffic ultimately leaves through tape61bd68a-0c on bridge brqca3c0a24-b6, so add eth1 to that bridge with brctl
$ brctl addif brqca3c0a24-b6 eth1
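
# To confirm external connectivity through the router, you can ping from inside the router namespace found above:
$ ip netns exec qrouter-9cee2e48-16f5-4703-82bb-7e9b38ff6de3 ping -c 3 8.8.8.8
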
==How to SSH to an instance directly from the controller==


# Create a virtual veth pair
$ ip link add type veth
$ ifconfig
veth0     Link encap:Ethernet  HWaddr 56:5c:15:18:5a:64
          inet6 addr: fe80::545c:15ff:fe18:5a64/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1052 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5748 (5.7 KB)  TX bytes:49191 (49.1 KB)

veth1     Link encap:Ethernet  HWaddr 5a:32:1e:82:79:ce
          inet6 addr: fe80::5832:1eff:fe82:79ce/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1052 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000

          RX bytes:49191 (49.1 KB)  TX bytes:5748 (5.7 KB)

# Add one of the virtual NICs to the external bridge
$ brctl addif brqca3c0a24-b6 veth1

# Then give veth0 an IP in the same subnet and you can SSH to the instance directly
$ ifconfig veth0 192.168.100.10 netmask 255.255.255.0
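
# With veth0 configured, the instance's floating IP is reachable directly; for example, for a CirrOS instance with the floating IP assumed above:
$ ssh cirros@192.168.100.51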

# Other useful ip link commands
# To pick the veth names yourself instead of letting the system number them:
$ ip link add <veth name> type veth peer name <peer veth name>

# Delete a virtual NIC
$ ip link delete <veth name>
==Cinder==

Check that the /var/cache/cinder directory exists; if not, create it manually
$ mkdir /var/cache/cinder
$ chown cinder:cinder /var/cache/cinder/
$ chmod 700 /var/cache/cinder
=============================== separator ===========================
mkdir /var/cache/cinder
chown cinder:cinder /var/cache/cinder/
chmod 700 /var/cache/cinder

Create the cinder database, user, and related permissions
$ mysql -u root -ppassword -e 'CREATE DATABASE cinder;'

$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"
=============================== separator ===========================
mysql -u root -ppassword -e 'CREATE DATABASE cinder;'
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"

Install Cinder
$ apt-get install -y open-iscsi tgt lvm2

$ git clone https://github.com/openstack/cinder.git -b stable/kilo
$ cd cinder
$ pip install --requirement requirements.txt
$ python setup.py install
=============================== separator ===========================
git clone https://github.com/openstack/cinder.git -b stable/kilo
cd cinder
pip install --requirement requirements.txt
python setup.py install
Since we use LVM as the Cinder backend storage, some preparation is needed first.

# Use the /dev/sdb disk as a Physical Volume:
$ sudo pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

# To hand an already-used device back to LVM, reformat it first with:

$ sudo fdisk /dev/sdb

$ sudo mkfs -t ext4 /dev/sdb



# Next, create a volume group (VG) so that one or more disks can be used as a single logical store:
$ sudo vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created



# By default, LVM scans all devices under /dev. If the deployment also uses LVM volumes on the host, the scanning tool examines those volumes and tries to cache them, which causes all sorts of problems, so edit /etc/lvm/lvm.conf so that only the disk backing the VG is scanned.

# Here we allow only /dev/sdb

$ vim /etc/lvm/lvm.conf
# Entries starting with "a" mean accept; "r" means reject.
filter = [ 'a/sdb/', 'r/.*/']
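
# To confirm LVM still sees the physical volume and the volume group after the filter change (standard LVM commands):
$ sudo pvdisplay /dev/sdb
$ sudo vgs cinder-volumes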

Register the Cinder service and endpoints with Keystone
# Create the cinder block storage services (v1 and v2)
$ openstack service create --name cinder --description "OpenStack Block Storage" volumev1
$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

# Create the cinder public/internal/admin endpoints
$ openstack endpoint create --region RegionOne volumev1 public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev1 internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev1 admin http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

# Create the cinder user
$ openstack user create cinder --domain default --password cinder

# Assign the admin role to the cinder user in the service project
$ openstack role add --project service --user cinder admin

If you want to generate a stock cinder.conf, you can use the commands below; since we write cinder.conf directly here, this step is skipped.
$ apt-get install python-tox
$ tox -egenconfig
$ cd ~

Edit cinder.conf
$ cat > /etc/cinder/cinder.conf << EOF
[DEFAULT]
verbose = True
rpc_backend=rabbit
osapi_volume_listen=$MY_PRIVATE_IP
api_paste_config = /etc/cinder/api-paste.ini
auth_strategy = keystone
my_ip = $MY_PRIVATE_IP
enabled_backends = lvm
glance_host = controller

[database]
connection = mysql://cinder:cinder@controller/cinder?charset=utf8

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
auth_plugin = password
signing_dir = /var/cache/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lock/cinder/tmp
EOF

Next, allow cinder to run certain commands without switching to root, using the following script
$ vim cinderNoPwdPermission.sh

#!/bin/bash
for SERVICE in cinder
do
    cat > '/etc/sudoers.d/'$SERVICE'_sudoers' << EOF
Defaults:$SERVICE !requiretty
$SERVICE ALL = (root) NOPASSWD: /usr/local/bin/$SERVICE-rootwrap /etc/$SERVICE/rootwrap.conf *
EOF

    chown -R $SERVICE:$SERVICE /etc/$SERVICE
    chmod 440 /etc/sudoers.d/${SERVICE}_sudoers
done
chmod 750 /etc/sudoers.d

$ sh cinderNoPwdPermission.sh

Change the owner of the files under /etc/cinder to cinder, and create the cinder tables
$ chown cinder:cinder /etc/cinder/*.{conf,json,ini}
$ su -s /bin/sh -c "cinder-manage db sync" cinder

Configure log rotation for the Cinder service
$ cat >> /etc/logrotate.d/cinder << EOF
/var/log/cinder/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the Cinder upstart scripts
# Create the cinder-api service script
$ cat > /etc/init/cinder-api.conf << EOF
description "Cinder API"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/cinder
  chown cinder:root /var/run/cinder/
  mkdir -p /var/lock/cinder
  chown cinder:root /var/lock/cinder/
end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-api -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log
EOF




# Create the cinder-scheduler service script
$ cat > /etc/init/cinder-scheduler.conf << EOF
description "Cinder Scheduler"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/cinder
  chown cinder:root /var/run/cinder/
  mkdir -p /var/lock/cinder
  chown cinder:root /var/lock/cinder/
end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-scheduler -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/scheduler.log
EOF

# Create the cinder-volume service script
$ cat > /etc/init/cinder-volume.conf << EOF
description "Cinder Volume"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
  mkdir -p /var/run/cinder
  chown cinder:root /var/run/cinder/
  mkdir -p /var/lock/cinder
  chown cinder:root /var/lock/cinder/
end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-volume -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log
EOF

Start Cinder
$ cat >> /etc/tgt/conf.d/cinder_tgt.conf << EOF
include /var/lib/cinder/volumes/*

EOF

$ service tgt restart
$ service open-iscsi restart

$ start cinder-api

$ start cinder-volume

$ start cinder-scheduler



# Check that Cinder is running
$ ps aux | grep cinder

# If no cinder process is found, run the services in the foreground to see the errors:

$ sudo -u cinder cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log


$ sudo -u cinder cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/scheduler.log


$ sudo -u cinder cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log


List the Cinder services with the cinder CLI
$ cat >> ~/adminrc << EOF
export OS_VOLUME_API_VERSION=2
EOF

$ source ~/adminrc

$ cinder service-list
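
If cinder-scheduler and cinder-volume both show up, you can optionally create a small test volume to exercise the LVM backend (name and size are only examples):
$ cinder create --name testvol 1
$ cinder list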
