Using an Oracle database as storage for the Apache Artemis message broker -- part 1
Most users of the Artemis message broker will use the standard, file-based message store. For fault-tolerance, however, it's necessary to replicate this store in some way. We could use the built-in file replication machinery in Artemis, or we could use a network file store that is itself replicated. The former -- the built-in replication -- has the problem of requiring an extensive installation: at least six message brokers, or an external active/passive coordinator based on Apache ZooKeeper. The latter approach -- a replicated network file system -- requires its own infrastructure, which will probably be complex, or expensive, or both.
An alternative approach is to use a relational database as the message data store. For fault-tolerance this also needs to be replicated or, at least, have a regular mirroring policy. I would not recommend that anybody go out and buy a relational database with replication just to provide a message store for Artemis; but many organizations already have such a database. These organizations also tend to have a lot of in-house expertise that will be useful in setting up and tuning the database. So using a relational database is, in some circumstances, an attractive proposition.
This article explains in outline how to set up the Artemis broker to use an Oracle database as its store. Artemis will work with other databases, but Oracle is the one I have some experience with. I should point out that unless your database is itself fault-tolerant, no advantage whatsoever attaches to using it in preference to a local file store. In fact, there are many disadvantages -- tuning the database to get good throughput is not a trivial operation (and it's not one I'll be going into in this introductory article).
Basic Oracle set-up
Initially we require at the Oracle level only a user who has rights to perform basic SQL operations (select, create table, create index, and so on). I assume in this article that the user is authenticated by username and password but, of course, Oracle provides other mechanisms.
Artemis does not require a private Oracle table space, or even a private user. However, using a specific user with its own table space does make administration easier. And, of course, the user needs to have sufficient table space somewhere to store the broker's data, which can be extensive.
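A set-up along these lines might be sketched as follows, run as a DBA. The user name, password, tablespace name, and datafile size here are purely illustrative -- Artemis itself mandates none of them:

```sql
-- Illustrative names: an ARTEMIS user with its own ARTEMIS_DATA tablespace
CREATE TABLESPACE artemis_data
  DATAFILE 'artemis_data.dbf' SIZE 512M AUTOEXTEND ON;

CREATE USER artemis IDENTIFIED BY changeit
  DEFAULT TABLESPACE artemis_data
  QUOTA UNLIMITED ON artemis_data;

-- Basic rights: log in, and create tables in its own schema.
-- Indexes on the user's own tables need no separate privilege,
-- only tablespace quota.
GRANT CREATE SESSION, CREATE TABLE TO artemis;
```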
Note:
Artemis will participate in XA transactions, but it won't coordinate transactions against a relational database. You don't need the database user to have rights on transaction-related tables.
It helps enormously, in the set-up and tuning stages, to have SQL access to the database. That way you'll be able to check that tables are being created and manipulated as expected, and you'll be able to reset the database schema during testing, should you need to start fresh.
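For example, a quick way to see what the broker has created is to query the data dictionary from the Artemis user's session:

```sql
-- List the tables the broker has created in this user's schema
SELECT table_name FROM user_tables ORDER BY table_name;
```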
Note:
Artemis won't easily support Oracle databases earlier than 12c, because it relies on auto-increment column support. You'd have to create the database schema manually, and define custom sequences and triggers to get the auto-increment behaviour that Artemis needs.
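The auto-increment support in question is the identity-column syntax introduced in Oracle 12c. Something along these lines is what the broker's generated schema relies on (the table and column names here are illustrative, not the broker's actual schema):

```sql
-- 12c+ identity column; pre-12c Oracle needs a sequence
-- plus an insert trigger to get the same behaviour
CREATE TABLE example_journal (
  id     NUMBER GENERATED BY DEFAULT AS IDENTITY,
  record BLOB
);
```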
Getting and installing the database driver
Drivers are available from Oracle's JDBC downloads page. Because later Artemis versions support only Java versions 11 and later, in principle you'll need one of the variants of ojdbc11.jar from Oracle. Oracle issues these for each database version -- backward compatibility is pretty good, but you shouldn't push your luck -- use the proper version for your database.

Installation in Artemis amounts to copying ojdbc11.jar to the lib/ directory of the Artemis instance.
Configuring Artemis
Here is a basic store configuration -- just add it to broker.xml
in the <core> section.
<store>
  <database-store>
    <jdbc-driver-class-name>oracle.jdbc.OracleDriver</jdbc-driver-class-name>
    <jdbc-connection-url>jdbc:oracle:thin:@my_oracle_host:1521/my_service?user=kevin&amp;password=kevin</jdbc-connection-url>
    <bindings-table-name>B</bindings-table-name>
    <message-table-name>M</message-table-name>
    <page-store-table-name>MP</page-store-table-name>
    <large-message-table-name>LM</large-message-table-name>
    <node-manager-store-table-name>NM</node-manager-store-table-name>
    <!--jdbc-user>kevin</jdbc-user>
    <jdbc-password>kevin</jdbc-password-->
  </database-store>
</store>
You can remove any existing configuration related to the file-based message store, as it won't be used. It won't hurt to leave it in place, but its presence may be confusing.
Notice that I have specified my credentials on the database connection URL, even though there are specific entries in the configuration for credentials. The Artemis broker will accept credentials in both places, but the diagnostic tools won't. Tools like artemis data exp, which export the message store in XML format, do work with a database as the message store -- but only if the credentials are specified on the URL.
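For example, an XML export of a database-backed store might look like this. The tool reads broker.xml from the instance, including the JDBC URL with its embedded credentials, so no further options should be needed (the output filename is, of course, up to you):

```
# Run from the Artemis instance's bin/ directory, with the broker stopped
./artemis data exp > store-export.xml
```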
I have chosen short names for the tables. It's important to bear in mind that some of these names are merely prefixes -- a 12-digit number will be appended to them. Oracle installations often have restrictions on the lengths of identifier names.
Creating the database schema
In the first instance, there should be no need to create the database schema administratively. The broker will do it automatically the first time it starts up with JDBC persistence enabled. When moving into full production the schema will probably need to be tuned -- I will describe the schema in more detail in later articles in this series.
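During testing, "resetting" the schema can be as simple as dropping the broker's tables while the broker is stopped -- it will recreate them on the next start. Assuming the short table names configured above:

```sql
-- The broker must be stopped first; it recreates these on restart
DROP TABLE b;
DROP TABLE m;
DROP TABLE lm;
DROP TABLE nm;

-- Page-store tables use 'MP' only as a prefix, with a numeric
-- suffix appended, so find and drop those individually:
SELECT table_name FROM user_tables WHERE table_name LIKE 'MP%';
```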
Testing
If you've removed all file-based store configuration from broker.xml, you'll know that the installation basically works when you can send and receive messages using the broker's messaging clients. After all, there isn't anywhere else to store the data.
Closing remarks
Configuring Artemis to use an Oracle database for storage is straightforward -- at least in the early stages. You'll quickly find, however, that the broker will not be as fast as with local file storage, and probably not even as fast as with networked file storage. Throughput can be improved by careful tuning, and using a database usually makes it much easier to set up a fault-tolerant broker cluster than any kind of file-based store -- so long as the database infrastructure is already in place.
In the second article in this series, I'll explain the database schema, and how it is managed by the broker, in more detail.