This document is DEPRECATED.

Please consider any information here as out of date. DO NOT use this document.

Instead, refer to

Please update your bookmarks accordingly.



The key affinity service solves the following problem: in a distributed Infinispan cluster, one sometimes needs to ensure that a value is placed on a specific node. Given a cluster address identifying that node, the service returns a key that will be hashed to that particular node.
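The idea behind the service can be illustrated with a plain-Java toy (this is not Infinispan code, just a sketch of the mechanism): candidate keys are generated at random until one hashes to the desired node, which is essentially what the service does in the background.

```java
import java.util.UUID;

public class AffinityDemo {
    // Toy ownership function: map a key to one of numNodes nodes by hash.
    static int ownerOf(String key, int numNodes) {
        return Math.floorMod(key.hashCode(), numNodes);
    }

    // Generate random keys until one is owned by targetNode.
    static String keyFor(int targetNode, int numNodes) {
        while (true) {
            String candidate = UUID.randomUUID().toString();
            if (ownerOf(candidate, numNodes) == targetNode) {
                return candidate;
            }
        }
    }

    public static void main(String[] args) {
        int numNodes = 4;
        String key = keyFor(2, numNodes);
        // The returned key is guaranteed to be owned by node 2.
        System.out.println(ownerOf(key, numNodes)); // prints 2
    }
}
```

The real service does this generation asynchronously on a supplied Executor and keeps a queue of pre-generated keys per node, so that callers do not pay the generation cost.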


The following code snippet shows how a reference to this service can be obtained and used.

The service is started at step 2; from this point on it uses the supplied Executor to generate and queue keys. At step 3 we obtain a key from the service, and at step 4 we use it, with the guarantee that it is distributed on the node identified by cacheManager.getAddress().
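The original snippet is not reproduced here; the following is a minimal sketch of the usage described above, based on the Infinispan key-affinity API (the factory method name, parameter order, and the RndKeyGenerator class reflect the API of the era this page describes and may differ in other versions):

```java
// 1. Obtain a cache and an executor for asynchronous key generation.
EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml"); // hypothetical config
Cache<Object, Object> cache = cacheManager.getCache();
ExecutorService executor = Executors.newSingleThreadExecutor();

// 2. Create the service; from this point on it generates and queues keys.
KeyAffinityService<Object> service = KeyAffinityServiceFactory.newKeyAffinityService(
        cache, executor, new RndKeyGenerator(), 100); // 100 = key buffer size per node

// 3. Obtain a key that maps to the local node...
Object localKey = service.getKeyForAddress(cacheManager.getAddress());

// 4. ...and use it: the value is stored on the node identified by cacheManager.getAddress().
cache.put(localKey, "value");

// Cleanup: the service must be stopped explicitly, and shutting
// down the executor is the caller's responsibility.
service.stop();
executor.shutdown();
```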


KeyAffinityService extends Lifecycle, which allows stopping and (re)starting it:
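A sketch of the lifecycle calls, assuming a KeyAffinityService instance named service:

```java
service.stop();   // stops asynchronous key generation and releases held resources
// ... later, the same instance can be restarted:
service.start();
```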

The service is instantiated through KeyAffinityServiceFactory. All the factory methods have an Executor parameter, which is used for asynchronous key generation (so that it does not happen in the caller's thread). It is the user's responsibility to handle the shutdown of this Executor.

The KeyAffinityService, once started, needs to be explicitly stopped. This stops the async key generation and releases other held resources.

The only situation in which the KeyAffinityService stops by itself is when the cache manager with which it was registered is shut down.

Topology changes

It is very important to note that, while the KeyAffinityService generates keys that map to the local node as the primary owner in distributed mode, this may no longer hold after a topology change, since key ownership can migrate. To ensure that keys remain mapped locally, application code must register a listener to be notified of topology changes, and re-test previously generated keys to see whether they are still mapped locally. If not, user code has two options:

  1. Create a new key, which will map to the local node under the new topology. Then copy values from the old key to the new key, delete the old key from the system and use the new key only.
  2. Migrate any transactions, sessions, etc that work on the key to the node which is the new primary owner of that key.
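A hedged sketch of such a listener, using Infinispan's annotation-based listener API (DistributionManager#getLocality and the previouslyGeneratedKeys() helper are assumptions for illustration; the exact locality-checking API varies across versions):

```java
@Listener
public class KeyMigrationListener {
   private final Cache<Object, Object> cache;

   public KeyMigrationListener(Cache<Object, Object> cache) {
      this.cache = cache;
   }

   @TopologyChanged
   public void onTopologyChange(TopologyChangedEvent<Object, Object> event) {
      if (event.isPre()) {
         return; // act only once the new topology is installed
      }
      DistributionManager dm = cache.getAdvancedCache().getDistributionManager();
      for (Object key : previouslyGeneratedKeys()) { // hypothetical application-side registry
         if (!dm.getLocality(key).isLocal()) {
            // Option 1: generate a fresh local key, copy the value over, delete the old key.
            // Option 2: migrate transactions/sessions working on this key to its new primary owner.
         }
      }
   }
}
```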