Unlike ModeShape 3 and 4, ModeShape 5's primary goal is data consistency, at the possible cost of performance and scalability.
ModeShape 3 and 4 used Infinispan mainly for its clustering capabilities, hoping to leverage the different clustered topologies that Infinispan supports. However, as it turned out, many of these topologies were unsuitable for ModeShape, either because of their nature (i.e. eventually consistent) or because of various Infinispan persistence consistency issues (see the ModeShape 4 clustering documentation).
Since ModeShape is primarily a JCR implementation, it adheres to and implements the JCR specification, which means it must be strongly consistent. This is straightforward in a local, non-clustered configuration, but when running in a cluster ModeShape has to behave as if it were strictly serializable. ModeShape 5 implements this using a global locking mechanism, which ensures data integrity at the cost of some performance.
In local mode, ModeShape is not clustered at all. This is the default, so if you don't explicitly configure ModeShape to cluster, each process will happily operate without communicating or sharing any content. Updates made by one process will not be visible to any of the other processes.
Note that in the local, non-clustered topology, data must be persisted to disk or some other external store. Otherwise, if the ModeShape process terminates, all data will be lost.
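As a sketch of what such persistence might look like, here is a minimal repository configuration fragment using database persistence. The exact property names (`storage`, `persistence`, `type`, etc.) are assumptions based on ModeShape's JSON configuration format and may differ between versions; consult the configuration reference for your release.

```json
{
    "name" : "Persistent Repository",
    "storage" : {
        "persistence" : {
            "type" : "db",
            "connectionUrl" : "jdbc:h2:file:./target/content/db",
            "driver" : "org.h2.Driver"
        }
    }
}
```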
This is the only clustering model which will be supported by ModeShape 5.
A cluster in this model can have any number of members, each with its own in-memory cache, but all using a shared database for persisting and reading content. Binary stores and indexes can be configured to be either local to each member or shared across all members, depending on the chosen implementation.
Updates in the cluster are sent to each member in the form of JGroups messages representing the events that caused the data to mutate. Each cluster member updates its own local state in response to these events.
This works great for small- to medium-sized repositories, even when the available memory on each process is not large enough to hold all of the nodes and binary values at one time.
Prior to ModeShape 5.2, network partitions can cause data corruption.
The reason data corruption can occur during partitions is that, when running in a cluster, ModeShape's exclusive locking mechanism (i.e. only one cluster node and thread can modify a given JCR node at any time) relies on JGroups. When a network partition occurs, JGroups still functions "normally" within each partition. This means that cluster nodes from different network partitions could hold the same lock at the same time, essentially overwriting each other's data.
To prevent this from happening, from 5.2 onward ModeShape allows the clustered locking mechanism to be configured to use either JGroups or exclusive DB row-level locking (essentially via SELECT FOR UPDATE statements).
One can configure the locking option like so:
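A sketch of such a configuration, as part of the repository's `clustering` section, might look like the following. The `locking` attribute name and the surrounding property names are assumptions based on ModeShape's JSON configuration conventions; check the configuration reference for your version.

```json
"clustering" : {
    "clusterName" : "modeshape-cluster",
    "configuration" : "config/jgroups-config.xml",
    "locking" : "db"
}
```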
In either case, the valid locking options are "jgroups" or "db".
Using the DB (which is shared by all cluster nodes regardless of which partition they are in) to lock the JCR nodes ensures the exclusivity constraint. In addition, before each DB write (performed via regular JCR session.save calls), ModeShape loads the latest version of each node involved in the write operation, which ensures data integrity.
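The row-level locking idea can be illustrated with a generic SQL sketch. The table and column names below are hypothetical, not ModeShape's actual schema; the point is only that the second transaction attempting the same SELECT FOR UPDATE blocks until the first commits, regardless of which network partition it is in.

```sql
BEGIN;
-- Acquire an exclusive row-level lock on the row backing the JCR node;
-- any other transaction issuing the same statement blocks until COMMIT.
SELECT CONTENT FROM REPOSITORY_CONTENT WHERE ID = 'node-id' FOR UPDATE;
-- ... re-read the latest node state, apply the session's changes ...
UPDATE REPOSITORY_CONTENT SET CONTENT = '...' WHERE ID = 'node-id';
COMMIT;
```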
Starting with 5.4, ModeShape will use DB locking by default.