In general, an HA or clustered singleton is a service that exists on multiple nodes in a cluster, but is active on just a single node at any given time. If the node providing the service fails or is shut down, a new singleton provider is chosen and started. Thus, other than a brief interval when one provider has stopped and another has yet to start, the service is always running on one node.
WildFly 10 introduces a “singleton” subsystem, which defines a set of policies specifying how an HA singleton should behave. A singleton policy can be used to instrument singleton deployments or to create singleton MSC services.
The default subsystem configuration from WildFly’s ha and full-ha profiles looks like this:
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
    <singleton-policies default="default">
        <singleton-policy name="default" cache-container="server">
            <simple-election-policy/>
        </singleton-policy>
    </singleton-policies>
</subsystem>
A singleton policy defines:
A unique name
A cache container and cache with which to register singleton provider candidates
An election policy
A quorum (optional)
One can add a new singleton policy via the following management operation:
/subsystem=singleton/singleton-policy=foo:add(cache-container=server)
The cache-container and cache attributes of a singleton policy must reference a valid cache from the Infinispan subsystem. If no specific cache is defined, the default cache of the cache container is assumed. This cache is used as a registry of which nodes can provide a given service and will typically use a replicated-cache configuration.
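For example, the backing cache can be named explicitly when adding the policy. (The policy name foo and cache name default below are illustrative; the cache must already exist in the referenced cache container.)

```
/subsystem=singleton/singleton-policy=foo:add(cache-container=server, cache=default)
```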
WildFly 10 includes two singleton election policy implementations:
simple
Elects the provider (a.k.a. master) of a singleton service based on a specified position in a circular linked list of eligible nodes, sorted by descending age. Position 0, the default value, refers to the oldest node, 1 to the second oldest, and so on; position -1 refers to the youngest node, -2 to the second youngest, etc.
e.g.
/subsystem=singleton/singleton-policy=foo/election-policy=simple:add(position=-1)
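To make the position semantics concrete, the selection can be modeled as a circular index into the age-sorted member list. This is a sketch of the semantics, not WildFly’s actual implementation; the member names and the `elect` helper are illustrative.

```java
import java.util.List;

public class SimpleElectionDemo {
    // Mimics the simple election policy's position semantics:
    // members are sorted oldest-first; negative positions count back from the end.
    static String elect(List<String> membersOldestFirst, int position) {
        int size = membersOldestFirst.size();
        // Normalize the position into [0, size) as on a circular list
        int index = ((position % size) + size) % size;
        return membersOldestFirst.get(index);
    }

    public static void main(String[] args) {
        List<String> members = List.of("nodeA", "nodeB", "nodeC"); // nodeA joined first
        System.out.println(elect(members, 0));   // oldest: nodeA
        System.out.println(elect(members, 1));   // second oldest: nodeB
        System.out.println(elect(members, -1));  // youngest: nodeC
    }
}
```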
random
Elects a random member to be the provider of a singleton service
e.g.
/subsystem=singleton/singleton-policy=foo/election-policy=random:add()
Additionally, any singleton election policy may indicate a preference for one or more members of a cluster. Preferences may be defined either via node name or via outbound socket binding name. Node preferences always take precedence over the results of an election policy.
e.g.
/subsystem=singleton/singleton-policy=foo/election-policy=simple:list-add(name=name-preferences, value=nodeA)
/subsystem=singleton/singleton-policy=bar/election-policy=random:list-add(name=socket-binding-preferences, value=nodeA)
Network partitions are particularly problematic for singleton services, since they can result in multiple providers of the same service running concurrently. To defend against this scenario, a singleton policy may define a quorum that requires a minimum number of nodes to be present before a singleton provider election can take place. A typical deployment scenario uses a quorum of N/2 + 1, where N is the anticipated cluster size. This value can be updated at runtime, and will immediately affect any active singleton services.
e.g.
/subsystem=singleton/singleton-policy=foo:write-attribute(name=quorum, value=3)
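The N/2 + 1 rule of thumb uses integer division, so odd and even cluster sizes can yield the same quorum. A small illustrative helper (not part of any WildFly API):

```java
public class QuorumDemo {
    // Majority quorum for an anticipated cluster size N: N/2 + 1 (integer division)
    static int majorityQuorum(int clusterSize) {
        return clusterSize / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(majorityQuorum(3)); // 2
        System.out.println(majorityQuorum(4)); // 3
        System.out.println(majorityQuorum(5)); // 3: an election can proceed with up to two nodes absent
    }
}
```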
The singleton subsystem can be used in a non-HA profile, so long as the cache that it references uses a local-cache configuration. In this manner, an application leveraging singleton functionality (via the singleton API or using a singleton deployment descriptor) will continue to function as if the server were the sole member of a cluster. For obvious reasons, the use of a quorum does not make sense in such a configuration.
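For illustration, a non-HA setup might pair the subsystem with a local cache along these lines; the Infinispan namespace version and the cache-container name are assumptions, and should be adapted to the profile in use.

```xml
<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
    <cache-container name="server" default-cache="default">
        <local-cache name="default"/>
    </cache-container>
</subsystem>
<subsystem xmlns="urn:jboss:domain:singleton:1.0">
    <singleton-policies default="default">
        <singleton-policy name="default" cache-container="server">
            <simple-election-policy/>
        </singleton-policy>
    </singleton-policies>
</subsystem>
```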
WildFly 10 resurrects the ability to start a given deployment on a single node in the cluster at any given time. If that node shuts down or fails, the application will automatically start on another node on which the given deployment exists. Long-time users of JBoss AS will recognize this functionality as being akin to the HASingletonDeployer, a.k.a. “deploy-hasingleton”, feature of AS6 and earlier.
A deployment indicates that it should be deployed as a singleton via a deployment descriptor. This can either be a standalone “/META-INF/singleton-deployment.xml” file or embedded within an existing jboss-all.xml descriptor. This descriptor may be applied to any deployment type, e.g. JAR, WAR, EAR, etc., with the exception of a subdeployment within an EAR.
e.g.
<singleton-deployment xmlns="urn:jboss:singleton-deployment:1.0" policy="foo"/>
The singleton deployment descriptor defines which singleton policy should be used to deploy the application. If undefined, the default singleton policy is used, as defined by the singleton subsystem.
Using a standalone descriptor is often preferable, since it may be overlaid onto an existing deployment archive.
e.g.
deployment-overlay add --name=singleton-policy-foo --content=/META-INF/singleton-deployment.xml=/path/to/singleton-deployment.xml --deployments=my-app.jar --redeploy-affected
WildFly allows any user MSC service to be installed as a singleton MSC service via a public API. Once installed, the service will only ever start on one node in the cluster at a time. If the node providing the service is shut down or fails, the service will automatically start on another node on which it was installed.
While singleton MSC services have been around since AS7, WildFly 10 adds the ability to leverage the singleton subsystem to create singleton MSC services from existing singleton policies.
The singleton subsystem exposes capabilities for each singleton policy it defines. These policies, represented via the org.wildfly.clustering.singleton.SingletonPolicy interface, can be referenced via the following name: “org.wildfly.clustering.singleton.policy”
e.g.
public class MyServiceActivator implements ServiceActivator {
    @Override
    public void activate(ServiceActivatorContext context) {
        ServiceName name = ServiceName.parse("my.service.name");
        Service<?> service = new MyService();
        try {
            SingletonPolicy policy = (SingletonPolicy) context.getServiceRegistry()
                    .getRequiredService(ServiceName.parse(SingletonPolicy.CAPABILITY_NAME))
                    .awaitValue();
            policy.createSingletonServiceBuilder(name, service)
                    .build(context.getServiceTarget())
                    .install();
        } catch (InterruptedException e) {
            throw new ServiceRegistryException(e);
        }
    }
}
Alternatively, you can build a singleton policy dynamically, which is particularly useful if you want to use a custom singleton election policy. Specifically, SingletonPolicy is a generalization of the org.wildfly.clustering.singleton.SingletonServiceBuilderFactory interface, which includes support for specifying an election policy and, optionally, a quorum.
e.g.
public class MyServiceActivator implements ServiceActivator {
    @Override
    public void activate(ServiceActivatorContext context) {
        String containerName = "server";
        SingletonElectionPolicy policy = new MySingletonElectionPolicy();
        int quorum = 3;
        ServiceName name = ServiceName.parse("my.service.name");
        Service<?> service = new MyService();
        try {
            SingletonServiceBuilderFactory factory = (SingletonServiceBuilderFactory) context.getServiceRegistry()
                    .getRequiredService(SingletonServiceName.BUILDER.getServiceName(containerName))
                    .awaitValue();
            factory.createSingletonServiceBuilder(name, service)
                    .electionPolicy(policy)
                    .quorum(quorum)
                    .build(context.getServiceTarget())
                    .install();
        } catch (InterruptedException e) {
            throw new ServiceRegistryException(e);
        }
    }
}