Simplifying MFT Server Clustering and High Availability Through Global Datastores
Overview
The latest release of JSCAPE MFT Server (9.3) introduces a new way of storing all configuration and message data, one that makes it easier to cluster servers, scale out, and implement high availability. Let's talk about it.
Introducing JSCAPE MFT Server's global datastore
In previous versions of JSCAPE MFT Server, server configuration data was always stored in files. While this method worked quite well, we later realized there was a better way. We saw that many of the data types JSCAPE MFT Server was handling, which included AS2 messages, OFTP messages, Ad-hoc messages, Users, and Groups, already supported database serialization.
And so we thought, "Why not create a global datastore, in the form of a relational database, and let that datastore handle all server configuration and message data?" That means, in addition to those messages and data we mentioned earlier, this database could also hold information for Triggers, Trading Partners, and several others. As we shall see shortly, this change would provide some great benefits.
So, beginning with version 9.3, your MFT Server installation will come with a global datastore. This database can be deployed either on the same machine as the MFT Server installation or in a centralized location where it can easily be shared among multiple instances of MFT Server. The latter configuration lets you simplify failover, clustering, and high availability (HA).
A new approach to failover synchronization
Failover synchronization is integral to high availability clusters. Even just two file transfer servers in an active-passive configuration require failover synchronization. Synchronization ensures that the two servers always have exactly the same configurations. That way, when the active (a.k.a. primary) server goes down, the passive (a.k.a. failover) server can seamlessly take its place.
Related post: Active-Active vs Active-Passive High Availability Cluster
Prior to 9.3, failover synchronization was implemented by pointing the primary server to the failover. Synchronization was then carried out either automatically or through a manual process. You can read more about these two processes, as well as the old method of failover synchronization, in the post How to Setup High Availability File Transfer Servers.
Regardless of which process was used, synchronization was basically a one-way street. That is, configurations were designed to be copied from the primary to the failover and never the other way around. While this arrangement might be ideal for active-passive configurations, it doesn't have to be that way all the time. In an active-active server cluster, for instance, you might want changes made on any of the nodes to cascade to all other nodes in the cluster.
A global datastore makes this particular requirement easy to implement. With the global datastore in a centralized database, you simply point each node to the same database. That way, configuration settings are shared by all nodes, and changes made on any single node can easily cascade to all other nodes in the cluster.
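To make that concrete, here's a minimal sketch of the idea. It doesn't use JSCAPE's actual schema; the table and column names are made up for the demo. Two JDBC connections stand in for two cluster nodes pointing at the same datastore, and a setting written through "node A" is immediately visible to "node B" with no node-to-node copying. An in-memory H2 URL keeps the example self-contained (assuming the H2 driver is on the classpath); in a real cluster this would be the JDBC URL of your centralized database.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SharedDatastoreDemo {
    // Both "nodes" point at exactly the same datastore URL. An in-memory H2
    // database is used purely to keep the demo self-contained.
    private static final String DATASTORE_URL =
            "jdbc:h2:mem:mftconfig;DB_CLOSE_DELAY=-1";

    public static void main(String[] args) throws Exception {
        try (Connection nodeA = DriverManager.getConnection(DATASTORE_URL);
             Connection nodeB = DriverManager.getConnection(DATASTORE_URL)) {

            // Node A writes a (hypothetical) configuration setting.
            try (Statement stmt = nodeA.createStatement()) {
                stmt.execute("CREATE TABLE config (name VARCHAR(64), value VARCHAR(256))");
                stmt.execute("INSERT INTO config VALUES ('max_connections', '500')");
            }

            // Node B reads the same setting without any node-to-node
            // synchronization, because both nodes share the one datastore.
            try (Statement stmt = nodeB.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT value FROM config WHERE name = 'max_connections'")) {
                if (rs.next()) {
                    System.out.println("Node B sees max_connections = " + rs.getString(1));
                }
            }
        }
    }
}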
Easily scale out your cluster
Scaling out, or adding a new node to a cluster, is now also much easier with a global datastore. In the past, you had to take note of which server was the last server in the cluster and then have that server point to the newly added node. This was to ensure that the newly added node would be part of the cascade process and consequently be able to reflect whatever changes were made on the first node.
To elaborate - all nodes had to be daisy-chained such that the 1st node would point to the 2nd, the 2nd to the 3rd, the 3rd to the 4th, and so on. It's not difficult to see that this could be quite tedious and prone to error. If, say, the 3rd node failed to point to the 4th node, then all remaining nodes in the cluster would not be able to reflect the changes made on nodes 1 to 3.
With a centralized database for storing configuration data, all you have to do is point all new nodes to the same database and voila. Easy as pie.
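Put in code form, the difference looks roughly like this. The hostnames and datastore URL below are purely illustrative and this isn't a JSCAPE API; the point is simply that the old approach needs a per-node chain, while the new approach hands every node, including any node you add later, the same JDBC URL.

import java.util.List;

public class ScaleOutSketch {
    public static void main(String[] args) {
        // Hypothetical cluster members; the hostnames are illustrative only.
        List<String> nodes = List.of("mft-node-1", "mft-node-2", "mft-node-3", "mft-node-4");

        // Old approach: daisy-chain each node to the next one.
        // Miss a single link (say node 3 -> node 4) and changes stop cascading there.
        for (int i = 0; i < nodes.size() - 1; i++) {
            System.out.println(nodes.get(i) + " must point to " + nodes.get(i + 1));
        }

        // New approach: every node, including any node added later,
        // simply points to the one shared datastore.
        String sharedDatastoreUrl = "jdbc:postgresql://db.example.com:5432/mftserver"; // example URL only
        for (String node : nodes) {
            System.out.println(node + " -> " + sharedDatastoreUrl);
        }
    }
}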
Where to set up the global datastore
If you're using version 9.3 or above, your installation of JSCAPE MFT Server will already be pointing to a local H2 database by default. You may optionally point it to any ANSI-compliant relational database. This is what you'll want to do if you want a centralized database for HA. To do that, just launch your server manager and then go to Server > Settings > Datastore.
Specify the JDBC URL of the target database and then enter the required username and password. After that, you may click the Test Parameters button to make sure you've successfully established a connection with the database. If all goes well, click Apply.
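If you'd like to sanity-check the database's reachability outside the server manager (for example, from the machine that will host a new node), a plain JDBC probe like the sketch below does roughly what the Test Parameters button does: it simply tries to open a connection with the URL and credentials you plan to enter. The PostgreSQL URL and credentials shown are only placeholders for an ANSI SQL-compliant target; substitute whatever database you're actually using and make sure its JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DatastoreConnectionCheck {
    public static void main(String[] args) {
        // Replace these with the values you plan to enter under
        // Server > Settings > Datastore. The URL shown is only an example.
        String jdbcUrl  = "jdbc:postgresql://db.example.com:5432/mftserver";
        String username = "mftadmin";
        String password = "changeit";

        try (Connection conn = DriverManager.getConnection(jdbcUrl, username, password)) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductName());
            System.out.println("Connection is valid: " + conn.isValid(5));
        } catch (SQLException e) {
            System.err.println("Connection failed: " + e.getMessage());
        }
    }
}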
Request a risk-free JSCAPE trial
You've just learned about a new way of storing all configuration data, one that makes it easier to scale out or set up clustering and high availability.
Test it in your own environment when you request a risk-free trial here.