Installing OpenStack from Source with Linux Bridge and VXLAN – Multinode (Kilo)


==Environment==

OS: Ubuntu 14.04 x86_64




HostName     Eth0 (Management)   Eth1 (Public)
controller   172.20.3.49         -
compute1     172.20.3.58         -
network      172.20.3.62         192.168.100.17
storage      172.20.3.54         -




==Prerequisites==

Edit /etc/hosts on all four hosts so they can reach one another by hostname
$ cat >> /etc/hosts << EOF
172.20.3.49 controller
172.20.3.58 compute1
172.20.3.62 network
172.20.3.54 storage
EOF

Update packages on all four hosts
$ apt-get update; apt-get dist-upgrade -y; reboot

Create the users for the OpenStack services on these hosts, using the script below
# Create a script named createOpenstackServiceUsers.sh
$ vim createOpenstackServiceUsers.sh

# Paste in the following content
#!/bin/bash
for SERVICE in keystone glance neutron nova horizon cinder
do
    useradd --home-dir "/var/lib/$SERVICE" --create-home --system --shell /bin/false $SERVICE

    # Create essential dirs
    mkdir -p /var/log/$SERVICE
    mkdir -p /etc/$SERVICE

    # Set ownership of the dirs
    chown -R $SERVICE:$SERVICE /var/log/$SERVICE
    chown -R $SERVICE:$SERVICE /var/lib/$SERVICE
    chown $SERVICE:$SERVICE /etc/$SERVICE

    # Some neutron-only dirs
    if [ "$SERVICE" = 'neutron' ]
    then
        mkdir -p /etc/neutron/plugins/ml2
        mkdir -p /etc/neutron/rootwrap.d
        chown -R neutron:neutron /etc/neutron/plugins
    fi

    if [ "$SERVICE" = 'glance' ]
    then
        mkdir -p /var/lib/glance/images
        mkdir -p /var/lib/glance/scrubber
        mkdir -p /var/lib/glance/image-cache
        chown -R glance:glance /var/lib/glance/
    fi
done


# Run the script
$ sh createOpenstackServiceUsers.sh
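As an optional sanity check (not part of the original procedure), spot-check one of the service users and its directories:

# The user should exist with /bin/false as its shell
$ getent passwd keystone

# The directories should be owned by the service user
$ ls -ld /etc/keystone /var/log/keystone /var/lib/keystone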

Next, allow the OpenStack service users to run certain commands without switching to root, using the script below
$ vim NoPwdPermission.sh

#!/bin/bash
for SERVICE in nova neutron cinder
do
    cat > '/etc/sudoers.d/'$SERVICE'_sudoers' << EOF
Defaults:$SERVICE !requiretty
$SERVICE ALL = (root) NOPASSWD: /usr/local/bin/$SERVICE-rootwrap /etc/$SERVICE/rootwrap.conf *
EOF
    chown -R $SERVICE:$SERVICE /etc/$SERVICE
    chmod 440 /etc/sudoers.d/${SERVICE}_sudoers
done
chmod 750 /etc/sudoers.d

$ sh NoPwdPermission.sh

==Keystone==

Set environment variables on the controller
$ cat >> .bashrc << EOF
MY_IP=172.20.3.49
MY_PRIVATE_IP=172.20.3.49
CONTROLLER_IP=172.20.3.49
EOF
$ source .bashrc

Install and configure RabbitMQ

# Install the rabbitmq-server package
$ apt-get install -y rabbitmq-server

# We use the default guest user here; to add a new user instead, run:
$ rabbitmqctl add_user newUser password

# Grant newUser access permissions:
$ rabbitmqctl set_permissions newUser ".*" ".*" ".*"
===============================Divider===========================

# Change the guest user's password
$ rabbitmqctl change_password guest password

# Bind rabbitmq-server to the private IP and fix the config file permissions, then restart the service


$ cat >> /etc/rabbitmq/rabbitmq-env.conf << EOF
RABBITMQ_NODE_IP_ADDRESS=$MY_PRIVATE_IP
EOF

$ chmod 644 /etc/rabbitmq/rabbitmq-env.conf
$ service rabbitmq-server restart
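As an optional check, confirm RabbitMQ came back up and is bound to the management IP:

$ rabbitmqctl status
$ netstat -ntlp | grep 5672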

Install and configure MySQL

# Install the mysql-server package
$ apt-get install -y mysql-server

# Adjust a few basic settings
$ sed -i "s/127.0.0.1/$MY_PRIVATE_IP\nskip-name-resolve\ncharacter-set-server = utf8\ncollation-server = utf8_general_ci\ninit-connect = 'SET NAMES utf8'/g" /etc/mysql/my.cnf

# The resulting configuration is
bind-address = 172.20.3.49
skip-name-resolve 
character-set-server = utf8 
collation-server = utf8_general_ci 
init-connect = 'SET NAMES utf8'
===============================Divider===========================

# Restart MySQL, then run the security setup
$ service mysql restart

$ mysql_secure_installation
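Optionally verify that MySQL is listening on the management IP (assuming the root password "password" chosen above):

$ mysql -u root -ppassword -e "SHOW VARIABLES LIKE 'bind_address';"
$ netstat -ntlp | grep 3306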

Create the keystone database, user, and privileges
$ mysql -u root -ppassword -e "create database keystone;"

$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"


$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"

===============================Divider===========================
mysql -u root -ppassword -e "create database keystone;"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"

Install the dependencies required before installing keystone

$ apt-get install -y python-dev libmysqlclient-dev libffi-dev libssl-dev python-pip git

$ pip install pip==7.1.2

$ pip install python-openstackclient==1.0.5
$ pip install repoze.lru pbr mysql-python

Install the Kilo release of keystone
# Clone the stable/kilo branch of the keystone source code
$ git clone https://github.com/openstack/keystone.git -b stable/kilo

#  Copy the keystone sample configs into /etc/keystone

$ cp -R keystone/etc/* /etc/keystone/

$ cd keystone


#  Install the Python dependencies

$ sudo pip install -r requirements.txt

#  Finally, install keystone

$ python setup.py install
===============================Divider===========================

git clone https://github.com/openstack/keystone.git -b stable/kilo
cp -R keystone/etc/* /etc/keystone/
cd keystone
sudo pip install -r requirements.txt
python setup.py install

Generate a random token, configure keystone.conf, then create the keystone tables with keystone-manage
#  Generate a random token with:
$ openssl rand -hex 10
9c3c8d455f9d340e1f6a

#  Rename keystone.conf.sample under /etc/keystone to keystone.conf, then start editing keystone.conf

$ mv /etc/keystone/keystone.conf.sample /etc/keystone/keystone.conf

#  Configure the credentials for connecting to the keystone database

$ sed -i "s|database]|database]\nconnection = mysql://keystone:keystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf

#  Use the randomly generated token above as admin_token

$ sed -i 's/#admin_token = ADMIN/admin_token = 9c3c8d455f9d340e1f6a/g' /etc/keystone/keystone.conf

#  The commands above set admin_token and [database] in /etc/keystone/keystone.conf to

admin_token = 9c3c8d455f9d340e1f6a
[database]

connection = mysql://keystone:keystone@172.20.3.49/keystone

#  Finally, use keystone-manage to sync the database and create the tables

$ cd ~

$ su -s /bin/sh -c "keystone-manage db_sync" keystone
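If db_sync succeeded, the keystone schema now exists; you can spot-check it (an optional verification, using the keystone DB password "keystone" set above):

$ mysql -u keystone -pkeystone keystone -e "SHOW TABLES;" | head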

===============================Divider===========================


mv /etc/keystone/keystone.conf.sample /etc/keystone/keystone.conf
sed -i "s|database]|database]\nconnection = mysql://keystone:keystone@$MY_IP/keystone|g" /etc/keystone/keystone.conf
sed -i 's/#admin_token = ADMIN/admin_token = 9c3c8d455f9d340e1f6a/g' /etc/keystone/keystone.conf


Set up log rotation for the keystone service
$ cat >> /etc/logrotate.d/keystone << EOF
/var/log/keystone/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the keystone upstart script, then start keystone
$ cat > /etc/init/keystone.conf << EOF
description "Keystone API server"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec start-stop-daemon --start --chuid keystone --chdir /var/lib/keystone --name keystone --exec /usr/local/bin/keystone-all -- --config-file=/etc/keystone/keystone.conf --log-file=/var/log/keystone/keystone.log
EOF

$ start keystone

# Verify keystone started properly; if not, run it in the foreground to see where it fails

$ ps aux | grep keystone

$ sudo -u keystone /usr/local/bin/keystone-all --config-file=/etc/keystone/keystone.conf --log-file=/var/log/keystone/keystone.log


Create an environment script for running openstack commands with admin privileges; OS_TOKEN here takes the admin_token value set in /etc/keystone/keystone.conf
$ cat >> openrc_admin_v3 << EOF
export OS_TOKEN=9c3c8d455f9d340e1f6a
export OS_URL=http://$MY_IP:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

$ source openrc_admin_v3

If a command fails with locale.Error: unsupported locale setting, run the following
$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ locale-gen en_US.UTF-8
$ sudo dpkg-reconfigure locales
===============================Divider===========================

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales

We will use the Keystone v3 API for our endpoints.
# Create the keystone identity service
$ openstack service create --name keystone --description "OpenStack Identity" identity

# Create the keystone public/internal/admin endpoints
$ openstack endpoint create --region RegionOne identity public http://controller:5000/v3
$ openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
$ openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

# Create the admin project
$ openstack project create --domain default --description "Admin Project" admin

# Create the admin user
$ openstack user create admin --domain default --password password

# Create the admin role
$ openstack role create admin

# Grant the admin user the admin role on the admin project
$ openstack role add --project admin --user admin admin

# Create the service project
$ openstack project create --domain default --description "Service Project" service

# Create the demo project
$ openstack project create --domain default --description "Demo Project" demo

# Create the demo user
$ openstack user create demo --domain default --password password


# Create the user role
$ openstack role create user

# Grant the demo user the user role on the demo project
$ openstack role add --project demo --user demo user
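At this point you can list what was just created (an optional sanity check, still using the token-based openrc_admin_v3 environment):

$ openstack project list
$ openstack user list
$ openstack role list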


==Glance==

Verify the following subdirectories exist; if not, create them manually
$ mkdir -p /var/lib/glance/images
$ mkdir -p /var/lib/glance/scrubber
$ mkdir -p /var/lib/glance/image-cache
$ mkdir -p /var/log/glance
$ chown -R glance:glance /var/lib/glance/
$ chown -R glance:glance /var/log/glance
===============================Divider===========================
mkdir -p /var/lib/glance/images
mkdir -p /var/lib/glance/scrubber
mkdir -p /var/lib/glance/image-cache

chown -R glance:glance /var/lib/glance/
chown -R glance:glance /var/log/glance

Create the glance database, user, and privileges
$ mysql -u root -ppassword -e "create database glance;"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';"
===============================Divider===========================

mysql -u root -ppassword -e "create database glance;"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';"


With the steps above complete, verify that the keystone service works
$ cat >> adminrc << EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

$ unset OS_TOKEN OS_URL


$ source adminrc


$ openstack token issue


# On success you should see something like

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-05-06T08:11:26.737320Z      |
| id         | 7aa02151b545412182fb93927aa46cf6 |
| project_id | f90a4545a1264f14a806c13c91057383 |
| user_id    | 051e0c7c171241c39094c4666bcbc3d9 |
+------------+----------------------------------+

Install glance
$ git clone https://github.com/openstack/glance.git -b stable/kilo

$ cp -R glance/etc/* /etc/glance/

$ cd glance

$ sudo pip install -r requirements.txt

$ python setup.py install && cd
===============================Divider===========================
git clone https://github.com/openstack/glance.git -b stable/kilo
cp -R glance/etc/* /etc/glance/
cd glance
sudo pip install -r requirements.txt
python setup.py install

Register the glance service and endpoints with keystone
# Create the glance image service
$ openstack service create --name glance --description "OpenStack Image service" image

# Create the glance public/internal/admin endpoints
$ openstack endpoint create --region RegionOne image public http://controller:9292
$ openstack endpoint create --region RegionOne image internal http://controller:9292
$ openstack endpoint create --region RegionOne image admin http://controller:9292

# Create the glance user
$ openstack user create glance --domain default --password glance

# Grant the glance user the admin role on the service project
$ openstack role add --project service --user glance admin

Configure /etc/glance/glance-api.conf by modifying or adding the following parameters
[DEFAULT]
verbose = True

notification_driver = noop

[database]
connection = mysql://glance:glance@controller/glance


[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance

revocation_cache_time = 10

[paste_deploy]
flavor = keystone

Configure /etc/glance/glance-registry.conf by modifying or adding the following parameters
[DEFAULT]
verbose = True
notification_driver = noop

[database]
connection = mysql://glance:glance@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

Initialize the glance database
$ su -s /bin/sh -c "glance-manage db_sync" glance

Set up log rotation for the glance service

$ cat >> /etc/logrotate.d/glance << EOF
/var/log/glance/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the glance upstart scripts
# Create the glance-api startup script
$ cat >> /etc/init/glance-api.conf << EOF
description "Glance API server"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec start-stop-daemon --start --chuid glance --exec /usr/local/bin/glance-api -- --config-file=/etc/glance/glance-api.conf --config-file=/etc/glance/glance-api-paste.ini
EOF

# Create the glance-registry startup script
$ cat >> /etc/init/glance-registry.conf << EOF
description "Glance registry server"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]
respawn

exec start-stop-daemon --start --chuid glance --exec /usr/local/bin/glance-registry -- --config-file=/etc/glance/glance-registry.conf --config-file=/etc/glance/glance-registry-paste.ini
EOF

Start glance
$ glance-control all start

Or start the services separately
$ start glance-api

$ start glance-registry

# Verify glance is running

$ ps aux | grep glance

# If no glance processes are found, run them in the foreground to find the problem

$ sudo -u glance glance-api --config-file=/etc/glance/glance-api.conf --config-file=/etc/glance/glance-api-paste.ini

$ sudo -u glance glance-registry --config-file=/etc/glance/glance-registry.conf --config-file=/etc/glance/glance-registry-paste.ini

Query the image list with the glance or openstack CLI
$ cat >> ~/adminrc << EOF
export OS_IMAGE_API_VERSION=2
EOF

$ source ~/adminrc

$ glance image-list   # or: openstack image list
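As a quick smoke test (not part of the original text; it assumes outbound internet access and uses the small CirrOS 0.3.4 image as an example), upload a test image:

# Download a small test image
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

# Upload it to glance
$ glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public

# It should now appear in the list
$ glance image-list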

==Nova==

--Controller Host

Create the nova database, user, and privileges
$ mysql -u root -ppassword -e "create database nova;"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
===============================Divider===========================

mysql -u root -ppassword -e "create database nova;"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"

Install nova
$ apt-get install -y libpq-dev python-libvirt libxml2-dev libxslt1-dev
$ git clone https://github.com/openstack/nova.git -b stable/kilo
$ cd nova
$ cp -r etc/nova/* /etc/nova/
$ sudo pip install -r requirements.txt
$ python setup.py install
$ cd ~
===============================Divider===========================

apt-get install -y libpq-dev python-libvirt libxml2-dev libxslt1-dev
git clone https://github.com/openstack/nova.git -b stable/kilo
cd nova
cp -r etc/nova/* /etc/nova/
sudo pip install -r requirements.txt
python setup.py install
cd ~

Register the nova service and endpoints with keystone
# Create the nova compute service
$ openstack service create --name nova --description "OpenStack Compute" compute

# Create the nova public/internal/admin endpoints
$ openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s

$ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s

$ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s

# Create the nova user
$ openstack user create nova --domain default --password nova

# Grant the nova user the admin role on the service project
$ openstack role add --project service --user nova admin

Edit nova.conf
$ cat > /etc/nova/nova.conf << EOF
[DEFAULT]
verbose = True
log_dir = /var/log/nova
rpc_backend = rabbit
auth_strategy = keystone
my_ip = $MY_PRIVATE_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
root_helper = sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $MY_PRIVATE_IP

[glance]
host = controller

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[database]
connection = mysql://nova:nova@controller/nova

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
EOF


Set up log rotation for the nova services
$ cat >> /etc/logrotate.d/nova << EOF
/var/log/nova/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Change the owner of the files under /etc/nova to nova, then create the nova tables
$ chown nova:nova /etc/nova/*.{conf,json,ini}
$ su -s /bin/sh -c "nova-manage db sync" nova
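Optionally confirm the sync created the nova schema (assuming the nova DB password "nova" set above):

$ mysql -u nova -pnova nova -e "SHOW TABLES;" | head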

Create the nova-api upstart script
$ cat > /etc/init/nova-api.conf << EOF
start on runlevel [2345]
stop on runlevel [!2345]
exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-api -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-cert upstart script
$ cat > /etc/init/nova-cert.conf << EOF
start on runlevel [2345]
stop on runlevel [!2345]
exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-cert -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-consoleauth upstart script
$ cat > /etc/init/nova-consoleauth.conf << EOF
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /var/run
exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-consoleauth -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-conductor upstart script
$ cat > /etc/init/nova-conductor.conf << EOF
description "Nova conductor"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    mkdir -p /var/run/nova
    chown nova:root /var/run/nova/
    mkdir -p /var/lock/nova
    chown nova:root /var/lock/nova/
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-conductor -- --config-file=/etc/nova/nova.conf
EOF

Create the nova-scheduler upstart script
$ cat > /etc/init/nova-scheduler.conf << EOF
description "Nova scheduler"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    mkdir -p /var/run/nova
    chown nova:root /var/run/nova/
    mkdir -p /var/lock/nova
    chown nova:root /var/lock/nova/
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-scheduler -- --config-file=/etc/nova/nova.conf
EOF

Start the nova services
$ start nova-api
$ start nova-cert
$ start nova-consoleauth
$ start nova-conductor
$ start nova-scheduler

# Verify the nova services are running

$ ps aux | grep nova
$ nova service-list

# If any of the nova processes above is missing, run it in the foreground to see the error
$ sudo -u nova nova-api --config-file=/etc/nova/nova.conf
$ sudo -u nova nova-cert --config-file=/etc/nova/nova.conf
$ sudo -u nova nova-consoleauth --config-file=/etc/nova/nova.conf
$ sudo -u nova nova-conductor --config-file=/etc/nova/nova.conf

$ sudo -u nova nova-scheduler --config-file=/etc/nova/nova.conf


--Compute Node

Set environment variables on the compute1 host
$ cat >> .bashrc << EOF
MY_IP=172.20.3.58
MY_PRIVATE_IP=172.20.3.58
CONTROLLER_IP=172.20.3.49
EOF
$ source .bashrc

Install the dependencies required before installing nova-compute
$ apt-get install -y python-dev libmysqlclient-dev libffi-dev libssl-dev python-pip git

$ pip install pip==7.1.2

$ pip install python-openstackclient==1.0.5
$ pip install repoze.lru pbr mysql-python

Check that the following directories exist; create them manually if not
$ mkdir /var/lib/nova/keys
$ mkdir /var/lib/nova/locks
$ mkdir /var/lib/nova/instances
$ chown -R nova:nova /var/lib/nova
First, determine the appropriate libvirt type:
$ egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration. If it returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Install nova
$ apt-get install -y libpq-dev python-libvirt libxml2-dev libxslt1-dev libvirt-bin qemu-kvm python-libguestfs libguestfs-tools
$ git clone https://github.com/openstack/nova.git -b stable/kilo
$ cd nova
$ cp -r etc/nova/* /etc/nova/
$ sudo pip install -r requirements.txt
$ python setup.py install
$ cd ~
===============================Divider===========================

apt-get install -y libpq-dev python-libvirt libxml2-dev libxslt1-dev libvirt-bin qemu-kvm python-libguestfs libguestfs-tools
git clone https://github.com/openstack/nova.git -b stable/kilo
cd nova
cp -r etc/nova/* /etc/nova/
sudo pip install -r requirements.txt
python setup.py install
cd ~

Edit nova.conf
$ cat > /etc/nova/nova.conf << EOF
[DEFAULT]
verbose = True
log_dir = /var/log/nova
rpc_backend = rabbit
auth_strategy = keystone
my_ip = $MY_PRIVATE_IP
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
root_helper = sudo /usr/local/bin/nova-rootwrap /etc/nova/rootwrap.conf
state_path = /var/lib/nova
enabled_apis = osapi_compute,metadata
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $MY_PRIVATE_IP

[glance]
host = controller

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[database]
connection = mysql://nova:nova@controller/nova

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
EOF

Create nova-compute.conf
$ cat > /etc/nova/nova-compute.conf << EOF
[DEFAULT]
compute_driver=libvirt.LibvirtDriver
resume_guests_state_on_host_boot=true
vnc_enabled = True
novncproxy_base_url = http://$CONTROLLER_IP:6080/vnc_auto.html

[libvirt]
virt_type=qemu
EOF

Create the nova-compute upstart script
$ cat > /etc/init/nova-compute.conf << EOF
description "Nova compute worker"
author "Duncan"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    mkdir -p /var/run/nova
    chown nova:root /var/run/nova/
    mkdir -p /var/lock/nova
    chown nova:root /var/lock/nova/
    modprobe nbd
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-compute -- --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf
EOF

Set up log rotation for the nova services
$ cat >> /etc/logrotate.d/nova << EOF
/var/log/nova/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Change the owner of the files under /etc/nova to nova
$ chown nova:nova /etc/nova/*.{conf,json,ini}

Start nova-compute
$ usermod -aG libvirtd nova

$ start nova-compute


# Verify nova-compute is running

$ ps aux|grep nova


# If it is not, run it in the foreground to see the error


$ sudo -u nova nova-compute --config-file=/etc/nova/nova.conf --config-file=/etc/nova/nova-compute.conf

If the error "HypervisorUnavailable: Connection to the hypervisor is broken on host" occurs

# First confirm the nova user has been added to the libvirtd group
$ getent group libvirtd

# If that still fails, try the following

$ vim /etc/libvirt/libvirtd.conf
Change unix_sock_rw_perms = "0770" to unix_sock_rw_perms = "0777"

$ /etc/init.d/libvirt-bin restart


With the steps above complete, verify that the nova-compute service works
$ cat >> adminrc << EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

$ source adminrc


$ nova service-list 

# On success you should see something like

+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller | internal | enabled | up    | 2016-06-01T09:21:02.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2016-06-01T09:21:05.000000 | -               |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2016-06-01T09:20:59.000000 | -               |
| 5  | nova-scheduler   | controller | internal | enabled | up    | 2016-06-01T09:21:02.000000 | -               |
| 6  | nova-compute     | compute1   | nova     | enabled | up    | 2016-06-01T09:21:02.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+


==Neutron==

--controller node

Create the neutron database, user, and privileges
$ mysql -u root -ppassword -e "create database neutron;"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';"
===============================Divider===========================


mysql  -u root -ppassword -e "create database neutron;"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';"
mysql  -u root -ppassword -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';"

Install neutron
$ apt-get install -y git ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev libmysqlclient-dev

$ git clone https://github.com/openstack/neutron.git -b stable/kilo

$ cp neutron/etc/* /etc/neutron/
$ cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
$ cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d

$ cd neutron

$ pip install -r requirements.txt
$ python setup.py install && cd
===============================Divider===========================


apt-get install -y git ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev libmysqlclient-dev
git clone https://github.com/openstack/neutron.git -b stable/kilo
cp neutron/etc/* /etc/neutron/
cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d
cd neutron
pip install -r requirements.txt
python setup.py install && cd

Register the neutron service and endpoints with keystone
# Create the neutron network service
$ openstack service create --name neutron --description "OpenStack Networking" network

# Create the neutron public/internal/admin endpoints
$ openstack endpoint create --region RegionOne network public http://controller:9696

$ openstack endpoint create --region RegionOne network internal http://controller:9696

$ openstack endpoint create --region RegionOne network admin http://controller:9696

# Create the neutron user
$ openstack user create neutron --domain default --password neutron

# Grant the neutron user the admin role on the service project
$ openstack role add --project service --user neutron admin

Edit /etc/neutron/neutron.conf
$ rm /etc/neutron/neutron.conf

$ cat > /etc/neutron/neutron.conf << EOF
[DEFAULT]
verbose = True
debug = True
rpc_backend = rabbit
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
notification_driver = neutron.openstack.common.notifier.rpc_notifier

[nova]
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova
auth_url = http://controller:35357

[agent]
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
auth_plugin = password

[database]
connection = mysql://neutron:neutron@controller/neutron

[oslo_concurrency]
lock_path = /var/lock/neutron/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password
EOF

Edit /etc/neutron/plugins/ml2/ml2_conf.ini
$ rm /etc/neutron/plugins/ml2/ml2_conf.ini

$ cat > /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
type_drivers = flat,vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1001:2000

[securitygroup]
enable_ipset = True
EOF

Once the ML2 configuration is done, edit /etc/nova/nova.conf
$ vim /etc/nova/nova.conf

# Add a neutron section
[neutron]
url = http://controller:9696
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
username = neutron
password = neutron
project_name = service
auth_strategy = keystone
user_domain_id = default
project_domain_id = default
region_name = RegionOne

Create the neutron plugin symlink and sync the database
# Fix file ownership
$ chown neutron:neutron /etc/neutron/*.{conf,json,ini}
$ chown -R neutron:neutron /etc/neutron/plugins
$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
$ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Set up log rotation for the neutron services
$ cat >> /etc/logrotate.d/neutron << EOF
/var/log/neutron/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the neutron-server upstart script
$ cat > /etc/default/neutron-server << EOF
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
EOF

$ cat > /etc/init/neutron-server.conf << EOF
# vim:set ft=upstart ts=2 et:
start on runlevel [2345]
stop on runlevel [!2345]
script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  [ -r "\$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file \$NEUTRON_PLUGIN_CONFIG"
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-server -- \
    --config-file /etc/neutron/neutron.conf \
    --log-file /var/log/neutron/server.log \$CONF_ARG
end script
EOF

Start the neutron-server service
$ restart nova-api
$ start neutron-server

# Verify the neutron services are running
$ ps aux | grep neutron
$ neutron ext-list
+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| port-security         | Port Security                                 |
| security-group        | security-group                                |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| net-mtu               | Network MTU                                   |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| provider              | Provider Network                              |
| agent                 | agent                                         |
| quotas                | Quota management support                      |
| subnet_allocation     | Subnet Allocation                             |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| l3-ha                 | HA Router extension                           |
| multi-provider        | Multi Provider Network                        |
| external-net          | Neutron external network                      |
| router                | Neutron L3 Router                             |
| allowed-address-pairs | Allowed Address Pairs                         |
| extraroute            | Neutron Extra Route                           |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

# If any of the neutron processes above is missing, run it in the foreground to see the error
$ sudo -u neutron neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file /var/log/neutron/server.log

--Network Node

Set environment variables on the network node
$ cat >> .bashrc << EOF
MY_IP=172.20.3.62
MY_PRIVATE_IP=172.20.3.62
MY_PUBLIC_IP=192.168.100.17
CONTROLLER_IP=172.20.3.49
EOF
$ source .bashrc

Install the dependencies required before installing neutron on the network node
$ apt-get install -y python-dev libmysqlclient-dev libffi-dev libssl-dev python-pip git

$ pip install pip==7.1.2

$ pip install python-openstackclient==1.0.5
$ pip install repoze.lru pbr mysql-python

Before installing neutron, set the kernel network parameters by adding the following to /etc/sysctl.conf
$ vim /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

# After editing, load the parameters

$ sudo sysctl -p
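Optionally confirm the values took effect:

$ sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter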

===============================Divider===========================

# Case 1
# If the following messages appear
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

# Enable the br_netfilter module

$ modprobe br_netfilter

# Confirm it loaded successfully

$ lsmod |grep  br_netfilter
br_netfilter           20480  0

bridge                110592  1 br_netfilter

# Case 2

# If the following message appears, make sure your kernel version is 3.19.0-15 or newer
modprobe: FATAL: Module br_netfilter not found.

$ uname -r


# If your kernel is older, upgrade it

$ apt-get update

# Search for the version you want to install

$ apt-cache search linux-image

$ apt-get install linux-image-xxxversion

$ reboot


# Remove the old kernel

$ apt-get remove linux-image-xxxversion


Install neutron
$ apt-get install -y ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev ebtables bridge-utils

$ git clone https://github.com/openstack/neutron.git -b stable/kilo
$ cp neutron/etc/* /etc/neutron/
$ cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
$ cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d

$ cd neutron

$ pip install -r requirements.txt
$ python setup.py install && cd
===============================Divider===========================


apt-get install -y ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev ebtables bridge-utils
git clone https://github.com/openstack/neutron.git -b stable/kilo
cp neutron/etc/* /etc/neutron/
cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d
cd neutron
pip install -r requirements.txt
python setup.py install && cd

Edit /etc/neutron/neutron.conf
$ rm /etc/neutron/neutron.conf

$ cat > /etc/neutron/neutron.conf << EOF
[DEFAULT]
verbose = True
debug = True
rpc_backend = rabbit
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True

[agent]
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
auth_plugin = password

[oslo_concurrency]
lock_path = /var/lock/neutron/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password
EOF


Edit /etc/neutron/plugins/ml2/ml2_conf.ini
$ rm /etc/neutron/plugins/ml2/ml2_conf.ini

$ cat > /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
type_drivers = flat,vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1001:2000

[securitygroup]
enable_ipset = True
EOF

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
$ cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << EOF
[linux_bridge]
physical_interface_mappings = public:eth1

[vxlan]
enable_vxlan = True
local_ip = $MY_PRIVATE_IP
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

Edit /etc/neutron/dhcp_agent.ini
$ rm /etc/neutron/dhcp_agent.ini
$ cat > /etc/neutron/dhcp_agent.ini << EOF
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
EOF

Edit /etc/neutron/l3_agent.ini
$ rm /etc/neutron/l3_agent.ini
$ cat > /etc/neutron/l3_agent.ini << EOF
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True
EOF

Edit /etc/neutron/metadata_agent.ini
$ rm /etc/neutron/metadata_agent.ini
$ cat > /etc/neutron/metadata_agent.ini << EOF
[DEFAULT]
auth_uri = http://controller:5000
auth_url = http://controller:35357
nova_metadata_ip = controller
metadata_proxy_shared_secret = password
user_domain_id = default
project_domain_id = default
auth_region = RegionOne
auth_plugin = password
admin_tenant_name = service
username = neutron
password = neutron
verbose = True
EOF

After finishing the settings above, go back to the controller host and edit /etc/nova/nova.conf
$ vim /etc/nova/nova.conf

# Add the following parameters to the neutron section
[neutron]
metadata_proxy_shared_secret = password
service_metadata_proxy = True

Back on the network host, create the neutron-l3-agent upstart script
$ cat > /etc/init/neutron-l3-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
respawn
start on runlevel [2345]
stop on runlevel [!2345]
script
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-l3-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/l3-agent.log
end script
EOF

Create the neutron-dhcp-agent upstart script
$ cat > /etc/init/neutron-dhcp-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
start on runlevel [2345]
stop on runlevel [!2345]
script
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-dhcp-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/dhcp-agent.log
end script
EOF

Create the neutron-metadata-agent upstart script
$ cat > /etc/init/neutron-metadata-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
start on runlevel [2345]
stop on runlevel [!2345]
script
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-metadata-agent -- \
    --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini \
    --log-file=/var/log/neutron/metadata-agent.log
end script
EOF

Create the neutron-linuxbridge-agent upstart script
$ cat > /etc/init/neutron-linuxbridge-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
#start on runlevel [2345]
#stop on runlevel [!2345]
script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  [ -r "\$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file \$NEUTRON_PLUGIN_CONFIG"
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-linuxbridge-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/ml2/linuxbridge_agent.ini --log-file=/var/log/neutron/linuxbridge_agent.log \$CONF_ARG
end script
EOF

Start the neutron services
$ start neutron-linuxbridge-agent
$ start neutron-dhcp-agent
$ start neutron-l3-agent
$ start neutron-metadata-agent


===============================Divider===========================
restart nova-api
start neutron-server
start neutron-linuxbridge-agent
start neutron-dhcp-agent
start neutron-l3-agent
start neutron-metadata-agent


# Verify the neutron services are running

$ ps aux | grep neutron



# If any of the neutron processes above is missing, run it in the foreground to see the error
$ sudo -u neutron neutron-server --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file /var/log/neutron/server.log
$ sudo -u neutron neutron-linuxbridge-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/ml2/linuxbridge_agent.ini --log-file=/var/log/neutron/linuxbridge_agent.log
$ sudo -u neutron neutron-metadata-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/metadata-agent.log
$ sudo -u neutron neutron-dhcp-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini --log-file=/var/log/neutron/dhcp-agent.log

$ sudo -u neutron neutron-l3-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --log-file=/var/log/neutron/l3-agent.log

$ cat >> adminrc << EOF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF

$ source adminrc

$ neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 041efffb-f791-4a2e-ae19-60f121a53236 | Linux bridge agent | network  | :-)   | True           | neutron-linuxbridge-agent |
| 3a20c8e6-4b64-4f3f-9e59-affad95ef7f0 | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 58035dfb-bc62-4786-b074-7545701c2ddf | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| acbd88b7-bc86-4d61-aec6-28bd0fd2033b | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

--compute node

Before installing neutron, set the kernel network parameters by adding the following to /etc/sysctl.conf
$ vim /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1


# After editing, load the parameters

$ sudo sysctl -p

===============================Divider===========================

# Case 1
# If the following messages appear
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

# Enable the br_netfilter module

$ modprobe br_netfilter

# Confirm it loaded successfully

$ lsmod |grep  br_netfilter
br_netfilter           20480  0

bridge                110592  1 br_netfilter

# Case 2

# If the following message appears, make sure your kernel version is 3.19.0-15 or newer
modprobe: FATAL: Module br_netfilter not found.

$ uname -r


# If your kernel is older, upgrade it

$ apt-get update

# Search for the version you want to install

$ apt-cache search linux-image

$ apt-get install linux-image-xxxversion

$ reboot


# Remove the old kernel

$ apt-get remove linux-image-xxxversion

Install neutron
$ apt-get install -y ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev

$ git clone https://github.com/openstack/neutron.git -b stable/kilo
$ cp neutron/etc/* /etc/neutron/
$ cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
$ cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d

$ cd neutron

$ pip install -r requirements.txt
$ python setup.py install && cd
===============================Divider===========================


apt-get install -y git ipset keepalived conntrack conntrackd arping openvswitch-switch dnsmasq-utils dnsmasq libffi-dev libssl-dev
git clone https://github.com/openstack/neutron.git -b stable/kilo
cp neutron/etc/* /etc/neutron/
cp -R neutron/etc/neutron/plugins/ml2/* /etc/neutron/plugins/ml2
cp -R neutron/etc/neutron/rootwrap.d/* /etc/neutron/rootwrap.d
cd neutron
pip install -r requirements.txt
python setup.py install && cd

Edit /etc/neutron/neutron.conf
$ rm /etc/neutron/neutron.conf

$ cat > /etc/neutron/neutron.conf << EOF
[DEFAULT]
verbose = True
debug = True
rpc_backend = rabbit
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = True

[agent]
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
auth_plugin = password

[oslo_concurrency]
lock_path = /var/lock/neutron/tmp

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password
EOF

Edit /etc/neutron/plugins/ml2/ml2_conf.ini
$ rm /etc/neutron/plugins/ml2/ml2_conf.ini

$ cat > /etc/neutron/plugins/ml2/ml2_conf.ini << EOF
[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
type_drivers = flat,vxlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1001:2000

[securitygroup]
enable_ipset = True
EOF

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
$ cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << EOF
[linux_bridge]
physical_interface_mappings = public:eth1

[vxlan]
enable_vxlan = True
local_ip = $MY_PRIVATE_IP
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

Edit /etc/nova/nova.conf
$ vim /etc/nova/nova.conf

# Add a neutron section
[neutron]
url = http://controller:9696
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
username = neutron
password = neutron
project_name = service
auth_strategy = keystone
user_domain_id = default
project_domain_id = default
region_name = RegionOne

Set up log rotation for the neutron services
$ cat >> /etc/logrotate.d/neutron << EOF
/var/log/neutron/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the neutron-linuxbridge-agent upstart script
$ cat > /etc/init/neutron-linuxbridge-agent.conf << EOF
# vim:set ft=upstart ts=2 et:
#start on runlevel [2345]
#stop on runlevel [!2345]
script
  [ -r /etc/default/neutron-server ] && . /etc/default/neutron-server
  [ -r "\$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file \$NEUTRON_PLUGIN_CONFIG"
  exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-linuxbridge-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/ml2/linuxbridge_agent.ini --log-file=/var/log/neutron/linuxbridge_agent.log \$CONF_ARG
end script
EOF

Start the neutron services
$ restart nova-compute
$ start neutron-linuxbridge-agent
===============================Divider===========================
restart nova-compute
start neutron-linuxbridge-agent


# Verify the neutron services are running

$ ps aux | grep neutron



# If the neutron process above is missing, run it in the foreground to see the error
$ sudo -u neutron neutron-linuxbridge-agent --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/plugins/ml2/linuxbridge_agent.ini --log-file=/var/log/neutron/linuxbridge_agent.log

$ neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| 041efffb-f791-4a2e-ae19-60f121a53236 | Linux bridge agent | network  | :-)   | True           | neutron-linuxbridge-agent |
| 3a20c8e6-4b64-4f3f-9e59-affad95ef7f0 | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
| 3b8c1504-00a1-4be4-b121-109c6f622384 | Linux bridge agent | compute1 | :-)   | True           | neutron-linuxbridge-agent |
| 58035dfb-bc62-4786-b074-7545701c2ddf | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
| acbd88b7-bc86-4d61-aec6-28bd0fd2033b | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
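With all agents up, you can build an initial network topology to exercise the setup. This is a hedged example rather than part of the original walkthrough: the public subnet range, gateway, and allocation pool below are assumptions based on the 192.168.100.x address on eth1, so adjust them to your environment.

# External flat network on the "public" physical network
$ neutron net-create public --shared --router:external --provider:network_type flat --provider:physical_network public
$ neutron subnet-create public 192.168.100.0/24 --name public-subnet --gateway 192.168.100.1 --allocation-pool start=192.168.100.100,end=192.168.100.200 --disable-dhcp

# Tenant VXLAN network
$ neutron net-create private
$ neutron subnet-create private 10.0.0.0/24 --name private-subnet --dns-nameserver 8.8.8.8

# Router connecting the two
$ neutron router-create router1
$ neutron router-interface-add router1 private-subnet
$ neutron router-gateway-set router1 public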


==Cinder==

--controller node

Check whether the /var/cache/cinder directory exists; if not, create it manually
$ mkdir /var/cache/cinder
$ chown cinder:cinder /var/cache/cinder/
$ chmod 700 /var/cache/cinder
===============================Divider===========================
mkdir /var/cache/cinder
chown cinder:cinder /var/cache/cinder/
chmod 700 /var/cache/cinder

Create the cinder database, user, and privileges
$ mysql -u root -ppassword -e 'CREATE DATABASE cinder;'

$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
$ mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"

===============================Divider===========================

mysql -u root -ppassword -e 'CREATE DATABASE cinder;'
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
mysql -u root -ppassword -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"

Install cinder
$ git clone https://github.com/openstack/cinder.git -b stable/kilo

$ cd cinder

$ pip install --requirement requirements.txt

$ python setup.py install

$ cp -R etc/cinder/* /etc/cinder
===============================Divider===========================

git clone https://github.com/openstack/cinder.git -b stable/kilo
cd cinder
pip install --requirement requirements.txt
python setup.py install
cp -R etc/cinder/* /etc/cinder

Register the cinder service and endpoints with keystone
# Create the cinder block storage services, v1 and v2
$ openstack service create --name cinder --description "OpenStack Block Storage" volumev1

$ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

# Create the cinder public/internal/admin endpoints
$ openstack endpoint create --region RegionOne volumev1 public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev1 internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev1 admin http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

# Create the cinder user
$ openstack user create cinder --domain default --password cinder

# Grant the cinder user the admin role on the service project
$ openstack role add --project service --user cinder admin

Edit cinder.conf
$ cat > /etc/cinder/cinder.conf << EOF
[DEFAULT]
verbose = True
rpc_backend = rabbit
auth_strategy = keystone
my_ip = $MY_PRIVATE_IP
api_paste_config = /etc/cinder/api-paste.ini

[DATABASE]
connection = mysql://cinder:cinder@controller/cinder?charset=utf8

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
auth_plugin = password
signing_dir = /var/cache/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password

[oslo_concurrency]
lock_path = /var/lock/cinder/tmp
EOF

Change the owner of the files under /etc/cinder to cinder, then create the cinder tables
$ chown cinder:cinder /etc/cinder/*.{conf,json,ini}
$ su -s /bin/sh -c "cinder-manage db sync" cinder

Set up log rotation for the cinder services
$ cat >> /etc/logrotate.d/cinder << EOF
/var/log/cinder/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF
Create the cinder upstart scripts
# Create the cinder-api startup script
$ cat > /etc/init/cinder-api.conf << EOF
description "Cinder API"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    mkdir -p /var/run/cinder
    chown cinder:root /var/run/cinder/
    mkdir -p /var/lock/cinder
    chown cinder:root /var/lock/cinder/
end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-api -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log
EOF

# Create the cinder-scheduler startup script
$ cat > /etc/init/cinder-scheduler.conf << EOF
description "Cinder Scheduler"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    mkdir -p /var/run/cinder
    chown cinder:root /var/run/cinder/
    mkdir -p /var/lock/cinder
    chown cinder:root /var/lock/cinder/
end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-scheduler -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/scheduler.log
EOF

Start cinder
$ start cinder-api

$ start cinder-scheduler

# Verify cinder is running
$ ps aux | grep cinder

$ cat >> ~/adminrc << EOF
export OS_VOLUME_API_VERSION=2
EOF


$ cinder service-list

+------------------+------------+------+---------+-------+------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  |    None    |       None      |
+------------------+------------+------+---------+-------+------------+-----------------+


# If no cinder processes are found, run them in the foreground to see the error
$ sudo -u cinder cinder-api --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/api.log

$ sudo -u cinder cinder-scheduler --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/scheduler.log


--storage node

Set environment variables on the storage host
$ cat >> .bashrc << EOF
MY_IP=172.20.3.54
MY_PRIVATE_IP=172.20.3.54
CONTROLLER_IP=172.20.3.49
EOF
$ source .bashrc

Check whether the /var/cache/cinder directory exists; if not, create it manually
$ mkdir /var/cache/cinder
$ chown cinder:cinder /var/cache/cinder/
$ chmod 700 /var/cache/cinder
===============================Divider===========================
mkdir /var/cache/cinder
chown cinder:cinder /var/cache/cinder/
chmod 700 /var/cache/cinder

Install the dependencies required before installing cinder
$ apt-get install -y python-dev libmysqlclient-dev libffi-dev libssl-dev python-pip git libpq-dev python-libvirt libxml2-dev libxslt1-dev

$ pip install pip==7.1.2

$ pip install python-openstackclient==1.0.5
$ pip install repoze.lru pbr mysql-python

Install cinder
$ apt-get install -y open-iscsi tgt lvm2

$ git clone https://github.com/openstack/cinder.git -b stable/kilo

$ cd cinder

$ pip install -r requirements.txt

$ python setup.py install

$ cp -R etc/cinder/* /etc/cinder
===============================Divider===========================

git clone https://github.com/openstack/cinder.git -b stable/kilo
cd cinder
pip install -r requirements.txt
python setup.py install
cp -R etc/cinder/* /etc/cinder

Since we use LVM as the storage backend for cinder, some preparation is needed first
# Use the /dev/sdb disk as a Physical Volume:

$ sudo pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

# To hand a previously used device back to LVM, you can reformat it first
$ sudo fdisk /dev/sdb
$ sudo mkfs -t ext4 /dev/sdb

# Next, create a volume group (VG) so that multiple disks can be used as one logical store:
$ sudo vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

# By default, LVM scans the directories under /dev for devices. If the deployment also uses LVM volumes locally, the scanning tool examines those volumes and tries to cache them, which causes all kinds of problems, so edit /etc/lvm/lvm.conf to expose only the VG's disk.

# Here we allow only /dev/sdb

$ vim /etc/lvm/lvm.conf

# Entries starting with a mean accept, r means reject.
filter = [ 'a/sdb/', 'r/.*/']
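A quick way to confirm the LVM layout (an optional check):

$ sudo pvs
$ sudo vgs cinder-volumes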

Edit cinder.conf
$ cat > /etc/cinder/cinder.conf << EOF
[DEFAULT]
verbose = True
rpc_backend = rabbit
api_paste_config = /etc/cinder/api-paste.ini
auth_strategy = keystone
my_ip = $MY_PRIVATE_IP
enabled_backends = lvm
glance_host = controller

[DATABASE]
connection = mysql://cinder:cinder@controller/cinder?charset=utf8

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
auth_plugin = password
signing_dir = /var/cache/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = guest
rabbit_password = password

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lock/cinder/tmp
EOF

Change the owner of the files under /etc/cinder to cinder
$ chown cinder:cinder /etc/cinder/*.{conf,json,ini}

Set up log rotation for the cinder services
$ cat >> /etc/logrotate.d/cinder << EOF
/var/log/cinder/*.log {
    daily
    missingok
    rotate 7
    compress
    notifempty
    nocreate
}
EOF

Create the cinder upstart script
# Create the cinder-volume startup script
$ cat > /etc/init/cinder-volume.conf << EOF
description "Cinder Volume"

start on runlevel [2345]
stop on runlevel [!2345]


chdir /var/run

pre-start script
        mkdir -p /var/run/cinder
        chown cinder:root /var/run/cinder/

        mkdir -p /var/lock/cinder
        chown cinder:root /var/lock/cinder/

end script

exec start-stop-daemon --start --chuid cinder --exec /usr/local/bin/cinder-volume -- --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log
EOF

Start cinder
$ cat >> /etc/tgt/conf.d/cinder_tgt.conf << EOF
include /var/lib/cinder/volumes/*
EOF

$ service tgt restart
$ service open-iscsi restart
$ start cinder-volume

# Verify cinder-volume is running
$ ps aux | grep cinder

# If it is not, run it in the foreground to see the error
$ sudo -u cinder cinder-volume --config-file=/etc/cinder/cinder.conf --log-file=/var/log/cinder/volume.log
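Once cinder-volume is up, you can try creating a test volume from the controller (an optional smoke test, using the adminrc with OS_VOLUME_API_VERSION=2 set earlier):

$ cinder create --name test-vol 1
$ cinder list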

==Horizon==

# We install horizon on the controller host
$ sudo apt-get install -y python-setuptools python-virtualenv python-dev gettext git gcc libpq-dev python-pip python-tox libffi-dev python-memcache memcached

$ git clone https://github.com/openstack/horizon.git /opt/horizon -b stable/kilo

$ sudo chown -R horizon:horizon /opt/horizon

$ cd /opt/horizon

$ pip install -r requirements.txt

$ python setup.py install

$ cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py
===============================Divider===========================


sudo apt-get install -y python-setuptools python-virtualenv python-dev gettext git gcc libpq-dev python-pip python-tox libffi-dev python-memcache memcached
git clone https://github.com/openstack/horizon.git /opt/horizon -b stable/kilo
sudo chown -R horizon:horizon /opt/horizon
cd /opt/horizon
pip install -r requirements.txt
python setup.py install
cp openstack_dashboard/local/local_settings.py.example openstack_dashboard/local/local_settings.py
Modify openstack_dashboard/local/local_settings.py
# Add the following parameters
COMPRESS_OFFLINE = True
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

# Modify the following parameters
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
# Comment out the original CACHES
#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
#    }
#}

# When the edits are done, run the following
$ ./manage.py collectstatic
$ ./manage.py compress
Set up a WSGI-capable web server and its configuration
# Install the required packages
$ sudo apt-get install -y apache2 libapache2-mod-wsgi

# Generate a WSGI configuration file for Apache2 at /etc/apache2/sites-available/horizon.conf
$ ./manage.py make_web_conf --apache | sudo tee /etc/apache2/sites-available/horizon.conf

# Edit /etc/apache2/sites-available/horizon.conf under Apache2's sites-available to read
<VirtualHost *:80>

    DocumentRoot /opt/horizon/

    LogLevel warn
    ErrorLog /var/log/apache2/openstack_dashboard-error.log
    CustomLog /var/log/apache2/openstack_dashboard-access.log combined

    WSGIDaemonProcess horizon user=horizon group=horizon processes=3 threads=10 home=/opt/horizon display-name=%{GROUP}
    WSGIApplicationGroup %{GLOBAL}

    SetEnv APACHE_RUN_USER horizon
    SetEnv APACHE_RUN_GROUP horizon
    WSGIProcessGroup horizon
    WSGIScriptAlias / /opt/horizon/openstack_dashboard/wsgi/django.wsgi

    <Location "/">
        Require all granted
    </Location>

    Alias /static /opt/horizon/static
    <Location "/static">
        SetHandler None
    </Location>

</VirtualHost>

$ sudo a2ensite horizon
$ sudo a2dissite 000-default
$ sudo service apache2 restart
$ chown -R horizon:horizon /opt/horizon/

# Browse to http://controller; if the OpenStack login page appears, horizon was installed successfully.

==Novnc==

# We install noVNC on the controller host
$ git clone git://github.com/kanaka/noVNC
$ mkdir /usr/share/novnc

$ cp -r noVNC/* /usr/share/novnc
$ apt-get install libjs-jquery libjs-sphinxdoc libjs-swfobject libjs-underscore


Create the nova-novncproxy upstart script
$ cat >> /etc/init/nova-novncproxy.conf << EOF
description "Nova novnc proxy worker"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
    mkdir -p /var/run/nova
    chown nova:root /var/run/nova/
    mkdir -p /var/lock/nova
    chown nova:root /var/lock/nova/
    modprobe nbd
end script

exec start-stop-daemon --start --chuid nova --exec /usr/local/bin/nova-novncproxy -- --config-file=/etc/nova/nova.conf
EOF

Start the nova-novncproxy service
$ start nova-novncproxy

# Verify nova-novncproxy is running
$ ps aux | grep nova-novncproxy

# If nova-novncproxy is not running, run it in the foreground to see the error
$ sudo -u nova nova-novncproxy --config-file=/etc/nova/nova.conf
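For an end-to-end check (a hedged example, assuming the cirros image and private network created in the earlier examples exist), boot a test instance and request its noVNC console URL:

$ source ~/adminrc
$ nova boot --flavor m1.tiny --image cirros --nic net-id=$(neutron net-show -F id -f value private) testvm
$ nova get-vnc-console testvm novnc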

==References==

Install OpenStack from source

OpenStack Ubuntu Manual Installation
