Deprecation

This document is DEPRECATED.

Please consider any information here as out of date. DO NOT use this document.

Instead, refer to http://infinispan.org/documentation.

Please update your bookmarks accordingly.


Introduction

The key affinity service solves the following problem: in a distributed Infinispan cluster, one sometimes wants to make sure that a value is placed on a certain node. Given a supplied cluster address identifying that node, the service returns a key that will hash to that particular node.

API

The following code snippet shows how a reference to this service can be obtained and used.
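A minimal sketch of such a snippet, assuming a distributed cache defined in a hypothetical infinispan.xml; package names follow older Infinispan releases and may differ in newer ones. The numbered comments match the steps described below:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.infinispan.Cache;
    import org.infinispan.affinity.KeyAffinityService;
    import org.infinispan.affinity.KeyAffinityServiceFactory;
    import org.infinispan.affinity.RndKeyGenerator;
    import org.infinispan.manager.DefaultCacheManager;
    import org.infinispan.manager.EmbeddedCacheManager;

    public class KeyAffinityExample {
       public static void main(String[] args) throws Exception {
          // 1. Obtain a cache manager and a cache (assumes "infinispan.xml" defines a distributed cache)
          EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
          Cache<Object, String> cache = cacheManager.getCache();
          ExecutorService executor = Executors.newSingleThreadExecutor();

          // 2. Create and start the key affinity service; from this point on the supplied
          //    Executor is used to generate and queue keys in the background
          KeyAffinityService<Object> keyAffinityService =
                KeyAffinityServiceFactory.newLocalKeyAffinityService(
                      cache, new RndKeyGenerator(), executor, 100);

          // 3. Obtain a key that maps to the local node
          Object localKey = keyAffinityService.getKeyForAddress(cacheManager.getAddress());

          // 4. Use the key: the entry is placed on the node identified by cacheManager.getAddress()
          cache.put(localKey, "yourValue");
       }
    }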

The service is started at step 2: from this point on it uses the supplied Executor to generate and queue keys. At step 3 we obtain a key from the service, and we use it at step 4 with the guarantee that it is distributed to the node identified by cacheManager.getAddress().

Lifecycle

KeyAffinityService extends Lifecycle, which allows stopping and (re)starting it:
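For reference, a sketch of that contract (the actual interface lives in the org.infinispan.lifecycle package):

    public interface Lifecycle {
       void start();
       void stop();
    }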

The service is instantiated through KeyAffinityServiceFactory. All the factory methods have an Executor parameter, which is used for asynchronous key generation (so that it does not happen in the caller's thread). It is the user's responsibility to handle the shutdown of this Executor.

The KeyAffinityService, once started, needs to be explicitly stopped. This stops the async key generation and releases other held resources.
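Continuing the sketch from the API section (keyAffinityService and executor as created there), the shutdown sequence might look like this:

    // Stop the service: background key generation halts and held resources are released
    keyAffinityService.stop();

    // The Executor passed to the factory is not owned by the service;
    // shutting it down remains the caller's responsibility
    executor.shutdown();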

The only situation in which the KeyAffinityService stops by itself is when the cache manager with which it was registered is shut down.

Topology changes

When a topology change takes place, the ownership of the keys returned by the KeyAffinityService might change. The key affinity service keeps track of these topology changes and doesn't return stale keys, i.e. keys that would currently map to a different node than the one specified. However, there is no guarantee that the node affinity hasn't changed by the time the key is used, e.g.:

- thread T1 reads a key k1 that maps to node A

- a topology change happens which makes k1 map to node B

- T1 uses k1 to add something to the cache. At this point k1 maps to node B, a different node than the one requested at the time of the read.

Whilst this is not ideal, it should be a supported behaviour for the application, as all the keys already in use might be moved over during a cluster change. The KeyAffinityService provides an access-proximity optimisation for stable clusters, which doesn't apply during topology changes.
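If an application wants to detect this situation, one option is to re-check ownership right before the write. A hedged sketch, assuming the DistributionManager.locate(key) API available in older Infinispan releases (java.util.List and org.infinispan.remoting.transport.Address are also needed; the check itself remains subject to the same race):

    // Re-check which nodes currently own k1 (hypothetical check; locate(key) returns the owners)
    List<Address> owners = cache.getAdvancedCache().getDistributionManager().locate(k1);
    if (!owners.contains(cacheManager.getAddress())) {
       // A topology change moved k1 away from the local node: either accept the new
       // placement or ask the KeyAffinityService for a fresh local key
       k1 = keyAffinityService.getKeyForAddress(cacheManager.getAddress());
    }
    cache.put(k1, value);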
