There is no documentation on how the mnesia database does clustering, so for now I'll dig into the source code. The Chinese mnesia manual is also worth consulting; it can be downloaded from erlang-china.org.
3.13 17:00: the rabbitmqctl.bat command calls functions in rabbit_control.erl, and the cluster command does indeed operate on the mnesia cluster directly:
action(cluster, Node, ClusterNodeSs, Inform) ->
    ClusterNodes = lists:map(fun list_to_atom/1, ClusterNodeSs),
    Inform("Clustering node ~p with ~p", [Node, ClusterNodes]),
    rpc_call(Node, rabbit_mnesia, cluster, [ClusterNodes]);
Messaging that just works. (javaeye's table rendering is broken and keeps changing, so consult the tables on the official site instead.)
Clustering overview
A RabbitMQ broker is a logical grouping of one or several Erlang nodes, each running the RabbitMQ application and sharing users, virtual hosts, queues, exchanges, etc. Sometimes we refer to the collection of nodes as a cluster.
All data/state required for the operation of a RabbitMQ broker is replicated across all nodes, for reliability and scaling, with full ACID properties. The exception to this is message queues, which currently reside only on the node that created them, though they are visible and reachable from all nodes. Future releases of RabbitMQ will introduce migration and replication of message queues.
The easiest way to set up a cluster is by auto configuration using a default cluster config file. See the clustering transcripts for an example.
The composition of a cluster can be altered dynamically. All RabbitMQ brokers start out as running on a single node. These nodes can be joined into clusters, and subsequently turned back into individual brokers again.
RabbitMQ brokers tolerate the failure of individual nodes. Nodes can be started and stopped at will.
The list of currently active cluster connection points is returned in the known_hosts field of AMQP's connection.open_ok method, as a comma-separated list of addresses where each address is an IP address or a DNS name, optionally followed by a colon and a port number.
Nodes in a cluster perform some basic load balancing by responding to client connection attempts with AMQP's connection.redirect method as appropriate, unless the client suppressed redirects by setting the insist flag in the connection.open method.
A node can be a RAM node or a disk node. RAM nodes keep their state only in memory (with the exception of the persistent contents of durable queues, which are still stored safely on disc). Disk nodes keep state in memory and on disk. As RAM nodes don't have to write to disk as much as disk nodes, they can perform better. Because state is replicated across all nodes in the cluster, it is sufficient to have just one disk node within a cluster to store the state of the cluster safely. Beware, however, that RabbitMQ will not stop you from creating a cluster with only RAM nodes. Should you do this, and suffer a power failure to the entire cluster, the entire state of the cluster, including all messages, will be lost.
Clustering transcript
The following is a transcript of setting up and manipulating a RabbitMQ cluster across three machines - rabbit1, rabbit2, rabbit3 - with two of the machines replicating data on ram and disk, and the other replicating data in ram only.
We assume that the user is logged into all three machines, that RabbitMQ has been installed on the machines, and that the rabbitmq-server and rabbitmqctl scripts are in the user's PATH.
Initial setup
Erlang nodes use a cookie to determine whether they are allowed to communicate with each other - for two nodes to be able to communicate they must have the same cookie.
The cookie is just a string of alphanumeric characters. It can be as long or short as you like.
Erlang will automatically create a random cookie file when the RabbitMQ server starts up. This will typically be located in /var/lib/rabbitmq/.erlang.cookie on Unix systems and C:\Documents and Settings\Current User\Application Data\RabbitMQ\.erlang.cookie on Windows systems. The easiest way to proceed is to allow one node to create the file, and then copy it to all the other nodes in the cluster.
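A sketch of the copy-the-file approach (the hostnames and the Unix path are illustrative; adjust for your installation):

```shell
# On rabbit1, after the first startup has created the cookie,
# copy it to the other nodes and lock down its permissions
# (Erlang refuses to use a cookie file that is readable by others).
scp /var/lib/rabbitmq/.erlang.cookie rabbit2:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie rabbit3:/var/lib/rabbitmq/.erlang.cookie
ssh rabbit2 chmod 400 /var/lib/rabbitmq/.erlang.cookie
ssh rabbit3 chmod 400 /var/lib/rabbitmq/.erlang.cookie
```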
As an alternative, you can insert the option "-setcookie cookie" in the erl call in the rabbitmq-server and rabbitmqctl scripts.
Starting independent nodes
Clusters are set up by re-configuring existing RabbitMQ nodes into a cluster configuration. Hence the first step is to start RabbitMQ on all nodes in the normal way:
This creates three independent RabbitMQ brokers, one on each node, as confirmed by the status command:
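The transcript for this step is missing from this copy; based on the commands used later in this document, it would look roughly like this (status output abridged):

```shell
# Start an independent broker on each machine.
rabbit1$ rabbitmq-server -detached
rabbit2$ rabbitmq-server -detached
rabbit3$ rabbitmq-server -detached

# Each node is, at this point, a single-node cluster of itself.
rabbit1$ rabbitmqctl status
Status of node rabbit@rabbit1 ...
[...,
 {nodes,[rabbit@rabbit1]},
 {running_nodes,[rabbit@rabbit1]}]
done.
```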
Creating the cluster
In order to link up our three nodes in a cluster, we tell two of the nodes, say rabbit@rabbit2 and rabbit@rabbit3, to join the cluster of the third, say rabbit@rabbit1.
We first join rabbit@rabbit2 as a ram node in a cluster with rabbit@rabbit1. To do that, on rabbit@rabbit2 we stop the RabbitMQ application, reset the node, join the rabbit@rabbit1 cluster, and restart the RabbitMQ application.
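The commands for that step are missing from this copy; the old rabbitmqctl cluster subcommand syntax is assumed here, where a node that is not named in the list joins as a ram node:

```shell
rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl reset
Resetting node rabbit@rabbit2 ...done.
# Listing only rabbit@rabbit1 (and not rabbit@rabbit2 itself)
# makes rabbit@rabbit2 a ram node.
rabbit2$ rabbitmqctl cluster rabbit@rabbit1
Clustering node rabbit@rabbit2 with [rabbit@rabbit1] ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.
```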
We can see that the two nodes are joined in a cluster by running the status command on either of the nodes:
Now we join rabbit@rabbit3 as a disk node to the same cluster. The steps are identical to the ones above, except that we list rabbit@rabbit3 itself as a node in the cluster command in order to turn it into a disk rather than ram node.
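Again the transcript is missing; under the same assumed cluster syntax, including rabbit@rabbit3 itself in the node list is what makes it a disk node:

```shell
rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
# Naming rabbit@rabbit3 in its own node list turns it into a disk node.
rabbit3$ rabbitmqctl cluster rabbit@rabbit1 rabbit@rabbit3
Clustering node rabbit@rabbit3 with [rabbit@rabbit1,rabbit@rabbit3] ...done.
rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.
```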
When joining a cluster it is ok to specify nodes which are currently down; it is sufficient for one node to be up for the command to succeed.
We can see that the three nodes are joined in a cluster by running the status command on any of the nodes:
By following the above steps we can add new nodes to the cluster at any time, while the cluster is running.
Changing node types
We can change the type of a node from ram to disk and vice versa. Say we wanted to reverse the types of rabbit@rabbit2 and rabbit@rabbit3, turning the former from a ram node into a disk node and the latter from a disk node into a ram node. To do that we simply stop the RabbitMQ application, change the type with an appropriate cluster command, and restart the application.
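A sketch of those steps, under the same assumed cluster syntax (the node lists follow from the discussion of why rabbit@rabbit3 names both disk nodes):

```shell
# rabbit@rabbit2 becomes a disk node by naming itself in the list.
rabbit2$ rabbitmqctl stop_app
rabbit2$ rabbitmqctl cluster rabbit@rabbit1 rabbit@rabbit2
rabbit2$ rabbitmqctl start_app

# rabbit@rabbit3 becomes a ram node by omitting itself from the list.
rabbit3$ rabbitmqctl stop_app
rabbit3$ rabbitmqctl cluster rabbit@rabbit1 rabbit@rabbit2
rabbit3$ rabbitmqctl start_app
```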
The significance of specifying both rabbit@rabbit1 and rabbit@rabbit2 as the cluster nodes for rabbit@rabbit3 is that in case of failure of either of them, rabbit@rabbit3 can still connect to the cluster when it starts, and operate normally. This is only important for ram nodes; disk nodes automatically keep track of the cluster configuration.
Restarting cluster nodes
Nodes that have been joined to a cluster can be stopped at any time. It is also ok for them to crash. In both cases the rest of the cluster continues operating unaffected, and the nodes automatically "catch up" with the other cluster nodes when they start up again.
We shut down the nodes rabbit@rabbit1 and rabbit@rabbit3 and check on the cluster status at each step:
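A sketch of the shutdown sequence (status output abridged; the values follow the pattern shown elsewhere in this document):

```shell
rabbit1$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit1 ...done.
rabbit2$ rabbitmqctl status
# running_nodes shrinks to [rabbit@rabbit3,rabbit@rabbit2];
# the nodes list is unchanged.
rabbit3$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit3 ...done.
rabbit2$ rabbitmqctl status
# running_nodes is now just [rabbit@rabbit2].
```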
Now we start the nodes again, checking on the cluster status as we go along:
There are some important caveats:
- All disk nodes must be running for certain operations, most notably leaving a cluster, to succeed.
- At least one disk node should be running at all times.
- When all nodes in a cluster have been shut down, restarting any node will suspend for up to 30 seconds and then fail if the last disk node that was shut down has not been restarted yet. Since the nodes do not know what happened to that last node, they have to assume that it holds a more up-to-date version of the broker state. Hence, in order to preserve data integrity, they cannot resume operation until that node is restarted.
Nodes need to be removed explicitly from a cluster when they are no longer meant to be part of it. This is particularly important in case of disk nodes since, as noted above, certain operations require all disk nodes to be up.
We first remove rabbit@rabbit3 from the cluster, returning it to independent operation. To do that, on rabbit@rabbit3 we stop the RabbitMQ application, reset the node, and restart the RabbitMQ application.
rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl start_app
Starting node rabbit@rabbit3 ...done.
Note that it would have been equally valid to list rabbit@rabbit3 as a node.
Running the status command on the nodes confirms that rabbit@rabbit3 is no longer part of the cluster and operates independently:
rabbit1$ rabbitmqctl status
Status of node rabbit@rabbit1 ...
[...,
{nodes,[rabbit@rabbit2,rabbit@rabbit1]},
{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
done.
rabbit2$ rabbitmqctl status
Status of node rabbit@rabbit2 ...
[...,
{nodes,[rabbit@rabbit2,rabbit@rabbit1]},
{running_nodes,[rabbit@rabbit1,rabbit@rabbit2]}]
done.
rabbit3$ rabbitmqctl status
Status of node rabbit@rabbit3 ...
[...,
{nodes,[rabbit@rabbit3]},
{running_nodes,[rabbit@rabbit3]}]
done.
Now we remove rabbit@rabbit1 from the cluster. The steps are identical to the ones above.
rabbit1$ rabbitmqctl stop_app
Stopping node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl reset
Resetting node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl start_app
Starting node rabbit@rabbit1 ...done.
The status command now shows all three nodes operating as independent RabbitMQ brokers:
rabbit1$ rabbitmqctl status
Status of node rabbit@rabbit1 ...
[...,
{nodes,[rabbit@rabbit1]},
{running_nodes,[rabbit@rabbit1]}]
done.
rabbit2$ rabbitmqctl status
Status of node rabbit@rabbit2 ...
[...,
{nodes,[rabbit@rabbit2]},
{running_nodes,[rabbit@rabbit2]}]
done.
rabbit3$ rabbitmqctl status
Status of node rabbit@rabbit3 ...
[...,
{nodes,[rabbit@rabbit3]},
{running_nodes,[rabbit@rabbit3]}]
done.
Note that rabbit@rabbit2 retains the residual state of the cluster, whereas rabbit@rabbit1 and rabbit@rabbit3 are freshly initialised RabbitMQ brokers. If we want to re-initialise rabbit@rabbit2 we follow the same steps as for the other nodes:
rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl force_reset
Resetting node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl start_app
Starting node rabbit@rabbit2 ...done.
Note that we used force_reset here. The reason is that removing a node from a cluster updates only the node-local configuration of the cluster; calling reset makes the node connect to any of the other nodes that it believes are in the cluster, to perform some house-keeping that is necessary when leaving a cluster. At this point, however, there are no other nodes in the cluster, but rabbit@rabbit2 doesn't know this. Calling reset would therefore fail, since it can't connect to rabbit@rabbit1 or rabbit@rabbit3; hence the use of force_reset, in which rabbit@rabbit2 does not attempt to contact any other nodes in the cluster. This situation only arises when resetting the last remaining node of a cluster.
Auto-configuration of a cluster
Instead of configuring clusters "on the fly" using the cluster command, clusters can also be set up via a default cluster configuration file, the location of which is determined by the startup scripts; see the installation guide for details. The file should contain a list of cluster nodes.
Listing cluster nodes in that file has the same effect as using the cluster command. However, the latter takes precedence over the former, i.e. the default cluster configuration file is ignored subsequent to any successful invocation of the cluster command, until the node is reset.
A common use of the default cluster configuration file is to automatically configure nodes to join a common cluster. For this purpose the same configuration file can be installed on all nodes, containing a list of potential disk nodes for the cluster.
Say we want to join our three separate nodes of our running example back into a single cluster, with rabbit@rabbit1 and rabbit@rabbit2 being the disk nodes of the cluster. First we reset and stop all nodes - NB: this step would not be necessary if this was a fresh installation of RabbitMQ.
rabbit1$ rabbitmqctl stop_app
Stopping node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl reset
Resetting node rabbit@rabbit1 ...done.
rabbit1$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit1 ...done.
rabbit2$ rabbitmqctl stop_app
Stopping node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl reset
Resetting node rabbit@rabbit2 ...done.
rabbit2$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit2 ...done.
rabbit3$ rabbitmqctl stop_app
Stopping node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl reset
Resetting node rabbit@rabbit3 ...done.
rabbit3$ rabbitmqctl stop
Stopping and halting node rabbit@rabbit3 ...done.
Now we create a configuration file containing the line
[rabbit@rabbit1, rabbit@rabbit2].
We copy this file onto all machines and install it in the location as defined in the start up files (see the installation guide). For example, on a Unix system the file would typically have the path /etc/rabbitmq/rabbitmq_cluster.config. Now we simply start the nodes.
rabbit1$ rabbitmq-server -detached
rabbit2$ rabbitmq-server -detached
rabbit3$ rabbitmq-server -detached
We can see that the three nodes are joined in a cluster by running the status command on any of the nodes:
rabbit1$ rabbitmqctl status
Status of node rabbit@rabbit1 ...
[...,
{nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]},
{running_nodes,[rabbit@rabbit3,rabbit@rabbit2,rabbit@rabbit1]}]
done.
rabbit2$ rabbitmqctl status
Status of node rabbit@rabbit2 ...
[...,
{nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]},
{running_nodes,[rabbit@rabbit3,rabbit@rabbit1,rabbit@rabbit2]}]
done.
rabbit3$ rabbitmqctl status
Status of node rabbit@rabbit3 ...
[...,
{nodes,[rabbit@rabbit1,rabbit@rabbit2,rabbit@rabbit3]},
{running_nodes,[rabbit@rabbit2,rabbit@rabbit1,rabbit@rabbit3]}]
done.
A cluster on a single machine
Under some circumstances it can be useful to run a cluster of RabbitMQ nodes on a single machine. In particular, this is necessary in order to get the full benefit of the CPUs on a multi-core machine. The two main requirements for running more than one node on a single machine are that each node should have a unique name and bind to a unique port / IP address combination.
The easiest way to start a cluster on a single machine is to use the script rabbitmq-multi (rabbitmq-multi.bat on Windows). You can invoke this as:
$ rabbitmq-multi start_all count
This will start count nodes with unique names, listening on all IP addresses and on sequential ports starting from 5672. You can then stop all nodes as follows:
$ rabbitmq-multi stop_all
Please note that you still need to put the nodes into a cluster by auto-configuration or manually arranging your nodes into a cluster. This may be as simple as creating a cluster configuration file containing this:
[rabbit@machine].
You can also start multiple nodes on the same host manually by repeated invocation of rabbitmq-server ( rabbitmq-server.bat on Windows). You must ensure that for each invocation you set the environment variables RABBITMQ_NODENAME, RABBITMQ_NODE_IP_ADDRESS and RABBITMQ_NODE_PORT to suitable values ("0.0.0.0" means "all IP addresses").
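For example (the second node's name and port are illustrative values), starting two nodes on one host by hand might look like:

```shell
# First node, with the default name and port.
RABBITMQ_NODENAME=rabbit RABBITMQ_NODE_IP_ADDRESS=0.0.0.0 \
  RABBITMQ_NODE_PORT=5672 rabbitmq-server -detached

# Second node: a unique name and port on the same host.
RABBITMQ_NODENAME=hare RABBITMQ_NODE_IP_ADDRESS=0.0.0.0 \
  RABBITMQ_NODE_PORT=5673 rabbitmq-server -detached
```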
Comments
{atomic,ok}
running db nodes = [zdx2@POP3ADMIN,zdx1@POP3ADMIN]
stopped db nodes = [zdx3@POP3ADMIN]
{db_nodes,[zdx2@POP3ADMIN,zdx1@POP3ADMIN]},
running db nodes = [zdx2@POP3ADMIN,zdx1@POP3ADMIN]
stopped db nodes = [zdx3@POP3ADMIN]
del_table_copy(Tab, Node) -> {aborted, R} | {atomic, ok}
Deletes the replica of table Tab at node Node. When the last replica is deleted with this function, the table disappears entirely.
This function may also be used to delete a replica of the table named schema. Then the mnesia node will be removed. Note: Mnesia must be stopped on the node first.
http://stackoverflow.com/questions/819928/how-do-i-remove-an-extra-node
running db nodes = [zdx1@POP3ADMIN,zdx2@POP3ADMIN]
stopped db nodes = [zdx3@POP3ADMIN]
{db_nodes,[zdx3@POP3ADMIN,zdx2@POP3ADMIN,zdx1@POP3ADMIN]},
Compare:
{db_nodes,[zdx2@POP3ADMIN,zdx1@POP3ADMIN]},
running db nodes = [zdx1@POP3ADMIN,zdx2@POP3ADMIN]
stopped db nodes = []
stopped db nodes = []   (this may still show something here)