ActiveMQ in a master-slave setup
This guide goes over the process of configuring two ActiveMQ instances in a master-slave setup. This type of setup is typically done to improve performance (by distributing load) and to provide high availability (by adding a failover instance), both of which are important when running Martini in a production environment.
Martini uses the configured broker when indexing data with Tracker and Monitor; in fact, a message is created, published, received, and resolved for every transaction. Improving ActiveMQ's performance should therefore improve Martini's overall performance as well, especially if your organization relies heavily on search indices.
To implement this setup, you must have two instances of ActiveMQ running: one will be the master instance, and the other the slave. It is recommended to host these instances on separate machines, but if this arrangement is not an option, it is still possible to deploy them on the same machine. An NFS server will also be required in order to share data between the instances.
Configuration, authentication and authorization
Before diving into this guide, it is recommended to read the previous pages on general remote ActiveMQ instance configuration, as well as setting up remote ActiveMQ instances with authentication and authorization.
Configuring the NFS server & NFS clients
A network file system (NFS) is a setup that involves a server and a client; it enables you to mount remote directories in order to share and access files directly within a network.
Below are the steps required to configure the NFS server and NFS clients for CentOS 7. Of course, it is possible to use a different operating system as long as you are able to correctly configure the NFS server and its clients.
Configuring the NFS server
- Determine which instance or machine will act as the NFS server.
- Install the NFS packages using:
yum install nfs-utils
The nfs-utils package contains the tools for both the server and the client, which are necessary for the kernel's NFS capabilities.
- Determine and create the directory which will be shared for the NFS setup.
TORO recommends creating and sharing the /datastore/activemq-data directory, but it is also possible to choose a different directory that better satisfies your preferences and/or needs:
mkdir -p /datastore/activemq-data
- Set the NFS directory's permissions.
Once you have chosen and created the NFS directory on the NFS server, set the permissions of the folder using:
chmod -R 777 /datastore/activemq-data && chown $user:$group /datastore/activemq-data
- Start and enable services.
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
- Share the NFS directory to enable client access.
First, open the /etc/exports file with your preferred text editor. Next, input the following:
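The original entry is not reproduced here; a typical /etc/exports entry for this setup, assuming a hypothetical client network of 192.168.0.0/24, might look like:
/datastore/activemq-data 192.168.0.0/24(rw,sync,no_root_squash)
Here, rw allows read-write access, sync forces writes to be committed before being acknowledged, and no_root_squash lets the client's root user act as root on the share. Substitute the client IP or network range that applies to your environment.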
A connection must be establishable
The server and client must be able to ping each other.
- Start the NFS service using the command:
systemctl restart nfs-server
- Configure the firewall for CentOS 7.
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --permanent --zone=public --add-service=rpc-bind
firewall-cmd --reload
Configuring the NFS client
- Install the NFS packages using:
yum install nfs-utils
- Create the NFS directory mount point(s).
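For example, to mirror the path shared by the server (assuming you kept the recommended directory):
mkdir -p /datastore/activemq-data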
- Mount the directory.
mount -t nfs <nfs-server-ip>:/datastore/activemq-data /datastore/activemq-data
- Verify the NFS mount.
You can verify with a command like df -h; in the resulting list of mounted file systems, the NFS share configured earlier should appear.
Setting up a permanent NFS mount
By default, you will have to remount all your NFS directories after every reboot. To make sure they are available upon reboot, you may follow the steps below:
Open the /etc/fstab file using the text editor of your choice.
Add the following line to have the mount points automatically configured after reboot:
<nfs-server-ip>:/datastore/activemq-data /datastore/activemq-data nfs defaults 0 0
Configuring Martini
To tell Martini that there will be two instances of ActiveMQ, edit the broker URL property in the application.properties file. This property contains the list of broker connection URLs, separated by commas if there is more than one broker, which is the case for this setup.
You can select an embedded ActiveMQ instance as the fail-over option by doing something like:
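A sketch of such a value, assuming the remote master broker is reachable at the placeholder address <master-ip>, might be:
failover:(tcp://<master-ip>:61616,tcp://0.0.0.0:61616)?randomize=false
The failover: prefix tells the JMS client to switch to the next URL in the list when the current broker becomes unreachable, and randomize=false makes it try the URLs in order instead of picking one at random.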
In this example, tcp://0.0.0.0:61616 is the URL for the embedded ActiveMQ instance.
Duplicate JMS client IDs
No two instances should use the same jms.client-id, or the connection will fail.
Remote ActiveMQ instances can be configured via XML or via the command line. TORO and ActiveMQ both recommend configuring your instance via XML, as this type of configuration allows for advanced configuration options and flexibility. This is also the type of configuration the following sections will be referencing, as we discuss which components of the ActiveMQ instance should be configured for a master-slave setup.
broker bean element
The broker bean element is used to configure ActiveMQ instance-wide properties. TORO recommends the following setup for your remote ActiveMQ instance:
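The exact snippet is not reproduced here; a sketch of a broker element consistent with this guide, assuming the NFS-backed data directory created earlier and a placeholder broker name, might look like:

<!-- Instance-wide broker settings; dataDirectory points at the shared NFS mount. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="martini-broker"
        dataDirectory="/datastore/activemq-data"
        useJmx="true"
        persistent="true">
    <!-- The destinationPolicy, managementContext, persistenceAdapter, and
         transportConnectors elements discussed below go here. -->
</broker>

Both the master and the slave must point dataDirectory at the same shared directory so that the slave can take over via the file lock when the master fails.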
In fact, a configuration along these lines is used for the remote ActiveMQ instances deployed for Martini instances on TORO Cloud.
Configuring destination policies
According to the ActiveMQ documentation on per destination policies:
"Multiple different policies can be applied per destination (queue or topic), or using wildcard notation to apply to a hierarchy of queues or topics, making it possible, therefore, to configure how different regions of the JMS destination space are handled."
When using XML configuration, destination policies are defined using the <destinationPolicy> tag, which goes under the <broker> XML element. For example, the following configuration sets the behavior of topics:
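The original block is not reproduced here; a sketch modeled on the stock ActiveMQ configuration, which caps pending messages on all topics, might look like:

<destinationPolicy>
    <policyMap>
        <policyEntries>
            <!-- topic=">" matches every topic in the destination hierarchy. -->
            <policyEntry topic=">">
                <!-- Limit pending messages per slow consumer to protect the broker. -->
                <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                </pendingMessageLimitStrategy>
            </policyEntry>
        </policyEntries>
    </policyMap>
</destinationPolicy>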
Changes are immediately applied once saved. Check your logs for confirmation.
Configuring the management context
This is not necessary, but if you'd like to be able to control the behavior of the broker via JMX MBeans, then you must configure the ActiveMQ instance's management context. Doing so will allow you to connect to, monitor, and control the instance remotely via JConsole.
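A minimal sketch, assuming the conventional JMX port 1099:

<managementContext>
    <!-- createConnector="true" makes the broker start its own JMX connector. -->
    <managementContext createConnector="true" connectorPort="1099"/>
</managementContext>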
With this, you will be able to connect to the service via JConsole using a URL of the form service:jmx:rmi:///jndi/rmi://<broker-host>:1099/jmxrmi (matching the port configured above).
Configuring the KahaDB directory
According to the ActiveMQ website, "KahaDB is a file-based persistence database that is local to the message broker that is using it." You must set KahaDB's directory to /datastore/activemq-data, which was created earlier, like so:
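A minimal sketch pointing KahaDB at the shared NFS directory:

<persistenceAdapter>
    <!-- Both master and slave point here; the first broker to acquire the
         file lock becomes the master, while the other waits as the slave. -->
    <kahaDB directory="/datastore/activemq-data"/>
</persistenceAdapter>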
Configuring transport connectors
Since this master-slave setup features a failover, you must configure the broker's transport connectors. Transport connectors define the network endpoints that clients and other brokers use to talk to the broker over a network. The configuration below shows the transport connectors for the master broker instance:
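The original snippet is not reproduced here; a sketch exposing the default ports listed below might look like:

<transportConnectors>
    <!-- OpenWire over TCP; this is the URL Martini's failover list points at. -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000"/>
    <!-- AMQP, in case non-JMS clients also need to reach the broker. -->
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000"/>
</transportConnectors>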
- 61616 is the default TCP port
- 8161 is the default web console port
- 5672 is the default AMQP port
After successfully following through this guide, you should now have an ActiveMQ master-slave setup for Martini. Your activemq.xml file should look roughly like this:
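The full file is not reproduced here; a condensed sketch that combines the elements discussed above (with placeholder names, and omitting vendor defaults) might look like:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://activemq.apache.org/schema/core
                           http://activemq.apache.org/schema/core/activemq-core.xsd">

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="martini-broker"
            dataDirectory="/datastore/activemq-data"
            useJmx="true">

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry topic=">">
                        <pendingMessageLimitStrategy>
                            <constantPendingMessageLimitStrategy limit="1000"/>
                        </pendingMessageLimitStrategy>
                    </policyEntry>
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext createConnector="true" connectorPort="1099"/>
        </managementContext>

        <persistenceAdapter>
            <kahaDB directory="/datastore/activemq-data"/>
        </persistenceAdapter>

        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000"/>
        </transportConnectors>
    </broker>

    <!-- The web console (port 8161) is typically configured via Jetty; see below. -->
    <import resource="jetty.xml"/>
</beans>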
The activemq.xml file will often either contain Jetty settings or import them from another file. For example:
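In the stock ActiveMQ distribution, this is done with an import near the end of the file:

<import resource="jetty.xml"/>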
MCollective doesn't use this; if you're not using the web console to manage ActiveMQ, leaving it enabled may be a security risk.