OpenStack Cluster

Environment:

3 nodes (each running the nova services); 3 RabbitMQ servers forming a cluster; 2 KVM hypervisors and 1 Xen hypervisor.

Installation steps:

1. RabbitMQ Cluster install:
On each of the 3 nodes: aptitude -y install rabbitmq-server python-pika

2. Setup:

Set Cookie
Erlang nodes use a cookie to determine whether they are allowed to communicate with each other - for two nodes to be able to communicate they must have the same cookie.
The cookie is just a string of alphanumeric characters. It can be as long or short as you like.
Erlang automatically creates a random cookie file when the RabbitMQ server first starts up. On Unix systems it is typically located at /var/lib/rabbitmq/.erlang.cookie.

root@rabbit1:~# ls -l /var/lib/rabbitmq/.erlang.cookie 
-r-------- 1 rabbitmq rabbitmq 15 May  9 18:15 /var/lib/rabbitmq/.erlang.cookie
root@rabbit1:~# echo "WIWYNNRABBITMQ" > /var/lib/rabbitmq/.erlang.cookie 

root@rabbit1:~# reboot  # make the new cookie take effect
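
The same cookie must be present on rabbit2 and rabbit3 as well. A minimal sketch, assuming root SSH access between the nodes (alternatively, write the same string into the file by hand on each node):

root@rabbit1:~# scp /var/lib/rabbitmq/.erlang.cookie rabbit2:/var/lib/rabbitmq/.erlang.cookie
root@rabbit1:~# scp /var/lib/rabbitmq/.erlang.cookie rabbit3:/var/lib/rabbitmq/.erlang.cookie

Reboot (or at least restart rabbitmq-server) on those nodes as well so the new cookie takes effect.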


3. Starting independent nodes:

Clusters are set up by re-configuring existing RabbitMQ nodes into a cluster configuration. Hence the first step is to start RabbitMQ on all nodes in the normal way:

rabbit1$ rabbitmq-server -detached
rabbit2$ rabbitmq-server -detached
rabbit3$ rabbitmq-server -detached


This creates three independent RabbitMQ brokers, one on each node, as confirmed by the cluster_status command:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]}]},{running_nodes,[rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3]}]},{running_nodes,[rabbit@rabbit3]}]
...done.


The node name of a RabbitMQ broker started from the rabbitmq-server shell script is rabbit@shorthostname, where the short node name is lower-case (as in rabbit@rabbit1, above). If you use the rabbitmq-server.bat batch file on Windows, the short node name is upper-case (as in rabbit@RABBIT1). When you type node names, case matters, and these strings must match exactly.
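
If the short hostname is ambiguous on your machines, the node name can be pinned explicitly through the RABBITMQ_NODENAME environment variable before starting the server (a sketch; adjust the name to your own host):

rabbit1$ RABBITMQ_NODENAME=rabbit@rabbit1 rabbitmq-server -detached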

4. Creating the cluster:

In order to link up our three nodes in a cluster, we tell two of the nodes, say rabbit@rabbit2 and rabbit@rabbit3, to join the cluster of the third, say rabbit@rabbit1.
We first join rabbit@rabbit2 as a ram node to the cluster of rabbit@rabbit1. To do that, on rabbit@rabbit2 we stop the RabbitMQ application, reset the node, join the rabbit@rabbit1 cluster, and restart the RabbitMQ application.

rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl reset
Resetting node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl cluster rabbit@rabbit1
Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.
We can see that the two nodes are joined in a cluster by running the cluster_status command on either of the nodes:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
...done.


Now we join rabbit@rabbit3 as a disc node to the same cluster. The steps are identical to the ones above, except that we also list rabbit@rabbit3 itself in the cluster command, which turns it into a disc rather than a ram node.


rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl cluster rabbit@rabbit1 rabbit@rabbit3
Clustering node rabbit@rabbit3 with [rabbit@rabbit1, rabbit@rabbit3] ...done.
rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.


When joining a cluster it is ok to specify nodes which are currently down; it is sufficient for one node to be up for the command to succeed.
We can see that the three nodes are joined in a cluster by running the cluster_status command on any of the nodes:

rabbit1$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit3]},{ram,[rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit3,rabbit@rabbit2,rabbit@rabbit1]}]
...done.
rabbit2$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit2 ...
[{nodes,[{disc,[rabbit@rabbit1,rabbit@rabbit3]},{ram,[rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit3,rabbit@rabbit1,rabbit@rabbit2]}]
...done.
rabbit3$ rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit3 ...
[{nodes,[{disc,[rabbit@rabbit3,rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
...done.


By following the above steps we can add new nodes to the cluster at any time, while the cluster is running.
Note: for the rabbitmqctl cluster command to succeed, the target nodes need to be active. It is possible to cluster with offline nodes; for this purpose, use the rabbitmqctl force_cluster command.
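
For example, to join rabbit@rabbit3 to a cluster whose nodes are currently offline (a sketch; the command otherwise behaves like rabbitmqctl cluster):

rabbit3$ rabbitmqctl force_cluster rabbit@rabbit1
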
Reference: http://www.rabbitmq.com/clustering.html

5. PostgreSQL (9.1) install:

Server Side (nova controller):

aptitude install postgresql-9.1

Edit /etc/postgresql/9.1/main/postgresql.conf

...
# - Connection Settings -

listen_addresses = '*'
...


Edit /etc/postgresql/9.1/main/pg_hba.conf

...
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             192.168.0.0/16          md5
host    all             all             10.1.0.0/16             md5
...


Update the PostgreSQL privileges

Set a password for the postgres role; you will be prompted for a password:

# type 'password' after Password:
sudo -u postgres psql template1
template1=#\password
Enter Password:
Enter again:
template1=#\q

service postgresql restart
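
After the restart, a quick check that PostgreSQL is now listening on all interfaces rather than only on localhost:

netstat -lnt | grep 5432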

sudo -u postgres createdb nova
#sudo -u postgres createdb glance

adduser nova

# type 'password' after role:
sudo -u postgres createuser -PSDR nova
Enter password for new role: 
Enter it again: 
sudo -u postgres psql template1
template1=#GRANT ALL PRIVILEGES ON DATABASE nova TO nova;
template1=#\q
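
A quick way to confirm that the role, the database, and the pg_hba.conf rules line up is to connect over TCP as the nova user (assuming the controller is reachable as rabbit1, matching the sql_connection setting below):

psql -h rabbit1 -U nova -d nova
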
Reference: http://docs.openstack.org/trunk/openstack-compute/admin/content/setting-up-sql-database-postgresql.html

6. Server Side (nova controller), Client Side (nova-compute) install:

aptitude install nova-api nova-console nova-consoleauth nova-compute nova-network nova-scheduler nova-objectstore nova-vncproxy

aptitude install python-psycopg2

Update /etc/nova/nova.conf on all 3 nodes:


...
--sql_connection=postgresql://nova:password@rabbit1/nova
...
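
nova also has to be pointed at the message broker; by default it connects to localhost. A minimal sketch, assuming the nodes should use the broker on rabbit1:

--rabbit_host=rabbit1
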
7. Sync Database:
nova-manage db sync
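
If the sync succeeded, the nova database now contains the schema; a quick check that tables were created:

sudo -u postgres psql -d nova -c '\dt'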

8. Restart Services:

restart nova-api
restart nova-objectstore
restart nova-compute
restart nova-scheduler
restart nova-network
restart nova-console
restart nova-consoleauth
restart nova-vncproxy
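
Once the services are back up, nova-manage can report their state (:-) marks a live service, XXX one that has stopped checking in):

nova-manage service list
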
9. Install Xen on 1 node:
Hypervisor: Xen
aptitude -y install nova-compute-xen
Issues:
If nova-compute does not start on the Xen node:

1. virConnectGetVersion() failed
Check /etc/nova/nova-compute.conf:
--libvirt_type=xen
Check /etc/xen/xend-config.sxp:
...
(xend-unix-server yes)
...
(xend-unix-path /var/lib/xend/xend-socket)
Restart xend:
/etc/init.d/xend restart
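
virConnectGetVersion() is what libvirt invokes when a client asks for the hypervisor version, so after restarting xend the connection can be re-tested directly (a sketch):

virsh -c xen:/// version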
2. xend is not up

If xend is not up, Xen may not be enabled/loaded:
1. Check /sys/hypervisor/type (it prints xen when the system is actually running under Xen):
cat /sys/hypervisor/type
2. Check /boot/grub/grub.cfg for the Xen boot entry:
...
### BEGIN /etc/grub.d/20_linux_xen ###
submenu "Xen 4.1-amd64" {
...
3. Set the boot entry to Xen 4.1-amd64 and reboot:
grub-set-default "Xen 4.1-amd64" 
reboot
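
After the reboot, the check from above should confirm the hypervisor is active:

cat /sys/hypervisor/type
xen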
