Interface Cache<K,V>

All Superinterfaces:
AsyncCache<K,V>, BasicCache<K,V>, BatchingCache, ConcurrentMap<K,V>, FilteringListenable<K,V>, Lifecycle, Listenable, Map<K,V>
All Known Subinterfaces:
AdvancedCache<K,V>, SecureCache<K,V>
All Known Implementing Classes:
AbstractDelegatingAdvancedCache, AbstractDelegatingCache, CacheImpl, DecoratedCache, EncoderCache, SecureCacheImpl, SimpleCacheImpl, StatsCollectingCache

public interface Cache<K,V> extends BasicCache<K,V>, BatchingCache, FilteringListenable<K,V>
The central interface of Infinispan. A Cache provides a highly concurrent, optionally distributed data structure with additional features such as:

  • JTA transaction compatibility
  • Eviction support for evicting entries from memory to prevent OutOfMemoryErrors
  • Persisting entries to a CacheLoader, either when they are evicted as an overflow, or all the time, to maintain persistent copies that would withstand server failure or restarts.

For convenience, Cache extends ConcurrentMap and implements all of its methods accordingly. Methods like keySet(), values() and entrySet() produce backing collections, meaning that updates made to them also update the original Cache instance. Certain methods on these collections can be expensive, however (prohibitively so when using a distributed cache). The size() and Map.containsValue(Object) methods can also be expensive when invoked. These methods are expensive because they take into account entries stored in a configured CacheLoader, as well as remote entries when using a distributed cache; frequent use of them in this manner is not recommended. These methods do take in-flight transactions into account, however key/value pairs read via an iterator are not placed into the transactional context, to prevent OutOfMemoryErrors. Note that the behavior of all of these methods can be controlled using a Flag, for example to skip consulting the loader. Please see each method on this interface for more details.

Also, like many ConcurrentMap implementations, Cache does not support the use of null keys or values.

Asynchronous operations

Cache also supports the use of "async" remote operations. Note that these methods only really make sense if you are using a clustered cache; when used in LOCAL mode, these "async" operations offer no benefit whatsoever. These methods, such as AsyncCache.putAsync(Object, Object), offer the best of both worlds between a fully synchronous and a fully asynchronous cache in that a CompletableFuture is returned. The CompletableFuture can then be ignored or thrown away for typical asynchronous behaviour, or queried for synchronous behaviour, which would block until any remote calls complete. Note that all remote calls are, as far as the transport is concerned, synchronous. This gives you the guarantee that remote calls succeed, while not blocking your application thread unnecessarily. For example, with a Cache<String, String>, usage such as the following could benefit from the async operations:
   CompletableFuture<String> f1 = cache.putAsync("key1", "value1");
   CompletableFuture<String> f2 = cache.putAsync("key2", "value2");
   CompletableFuture<String> f3 = cache.putAsync("key3", "value3");
   f1.get();
   f2.get();
   f3.get();
 
The net result is behavior similar to synchronous RPC calls in that at the end, you have guarantees that all calls completed successfully, but you have the added benefit that the three calls could happen in parallel. This is especially advantageous if the cache uses distribution and the three keys map to different cache instances in the cluster.
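
If you prefer not to block on each future individually, the same guarantee can be obtained by composing the futures, for instance with CompletableFuture.allOf (a minimal sketch, assuming a Cache<String, String> named cache):

   CompletableFuture<String> f1 = cache.putAsync("key1", "value1");
   CompletableFuture<String> f2 = cache.putAsync("key2", "value2");
   CompletableFuture<String> f3 = cache.putAsync("key3", "value3");
   // Wait for all three writes at once instead of blocking on each future in turn
   CompletableFuture.allOf(f1, f2, f3).join();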

Also, the use of async operations within a transaction returns your local value only, as expected. A CompletableFuture is still returned, though, for API consistency.

Constructing a Cache

An instance of the Cache is usually obtained by using a CacheContainer.
   EmbeddedCacheManager cm = new DefaultCacheManager(); // optionally pass in a default configuration
   Cache<String, String> c = cm.getCache();
 
See the CacheContainer interface for more details on providing specific configurations, using multiple caches in the same JVM, etc.
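
For instance, a named cache with a specific configuration can be defined programmatically roughly as follows (a sketch, assuming the org.infinispan.configuration.cache.ConfigurationBuilder API; the cache name "users" and the one-hour lifespan are illustrative only):

   EmbeddedCacheManager cm = new DefaultCacheManager();
   // Register a configuration under the name "users" before retrieving the cache
   cm.defineConfiguration("users", new ConfigurationBuilder()
         .expiration().lifespan(1, TimeUnit.HOURS)
         .build());
   Cache<String, String> users = cm.getCache("users");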

Please see the Infinispan documentation and/or the 5 Minute Usage Tutorial for more details.

Since:
4.0
Author:
Mircea.Markus@jboss.com, Manik Surtani, Galder Zamarreño
  • Method Details

    • putForExternalRead

      void putForExternalRead(K key, V value)
      Under special operating behavior, associates the value with the specified key.
      • Only goes through if the specified key does not exist; it is a no-op otherwise (similar to ConcurrentMap.putIfAbsent(Object, Object)).
      • Forces asynchronous mode for replication to prevent any blocking.
      • Invalidation does not take place.
      • Uses a 0ms lock timeout to prevent any blocking here either. If the lock cannot be acquired, this method is a no-op and swallows the timeout exception.
      • Ongoing transactions are suspended before this call, so failures here will not affect any ongoing transactions.
      • Errors and exceptions are 'silent' - they are logged at a much lower level than normal, and this method does not throw exceptions.
      This method is for caching data that has an external representation in storage, where concurrent modification and transactions are not a consideration, and failure to put the data in the cache should be treated as a 'suboptimal outcome' rather than a 'failing outcome'.

      An example of when this method is useful is when data is read from, for example, a legacy datastore, and is cached before returning the data to the caller. Subsequent calls would prefer to get the data from the cache and if the data doesn't exist in the cache, fetch again from the legacy datastore.

      See JBCACHE-848 for details around this feature.
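
      For instance, a read-through helper over such a datastore might look roughly like this (a sketch; loadFromLegacyStore is a hypothetical DAO call):

         V value = cache.get(key);
         if (value == null) {
            value = loadFromLegacyStore(key);        // hypothetical call to the external datastore
            cache.putForExternalRead(key, value);    // best-effort cache population; never blocks or throws
         }
         return value;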

      Parameters:
      key - key with which the specified value is to be associated.
      value - value to be associated with the specified key.
      Throws:
      IllegalStateException - if getStatus() would not return ComponentStatus.RUNNING.
    • putForExternalRead

      void putForExternalRead(K key, V value, long lifespan, TimeUnit unit)
      An overloaded form of putForExternalRead(K, V), which takes in lifespan parameters.
      Parameters:
      key - key to use
      value - value to store
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      unit - unit of measurement for the lifespan
      Since:
      7.0
    • putForExternalRead

      void putForExternalRead(K key, V value, long lifespan, TimeUnit lifespanUnit, long maxIdle, TimeUnit maxIdleUnit)
      An overloaded form of putForExternalRead(K, V), which takes in lifespan and maxIdle parameters.
      Parameters:
      key - key to use
      value - value to store
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdle - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleUnit - time unit for max idle time
      Since:
      7.0
    • evict

      void evict(K key)
      Evicts an entry from the memory of the cache. Note that the entry is not removed from any configured cache stores or any other caches in the cluster (if used in a clustered mode). Use BasicCache.remove(Object) to remove an entry from the entire cache system.

      This method is designed to evict an entry from memory to free up memory used by the application. This method uses a 0 lock acquisition timeout so it does not block in attempting to acquire locks. It behaves as a no-op if the lock on the entry cannot be acquired immediately.

      Important: this method should not be called from within a transaction scope.
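
      For example (a sketch):

         cache.evict("hotKey");    // drops the entry from this node's memory only; stores and other nodes are untouched
         cache.remove("hotKey");   // removes the entry from the entire cache system, including any stores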

      Parameters:
      key - key to evict
    • getCacheConfiguration

      Configuration getCacheConfiguration()
    • getCacheManager

      EmbeddedCacheManager getCacheManager()
      Retrieves the cache manager responsible for creating this cache instance.
      Returns:
      a cache manager
    • getAdvancedCache

      AdvancedCache<K,V> getAdvancedCache()
    • getStatus

      ComponentStatus getStatus()
    • size

      int size()
      Returns a count of all elements in this cache and cache loader across the entire cluster.

      Only a subset of entries is held in memory at a time when using a loader or remote entries, to prevent possible memory issues; however, loading those entries can still be very slow.

      If there are performance concerns, the Flag.SKIP_CACHE_LOAD flag should be used to avoid hitting the cache loader if it is not needed in the size calculation.

      Also, if you want the local contents only, you can use the Flag.CACHE_MODE_LOCAL flag so that other remote nodes are not queried for data. However, the loader will still be used unless the previously mentioned Flag.SKIP_CACHE_LOAD is also configured.

      If this method is used in a transactional context, note that it will not bring additional values into the transaction context, and thus entries that have not yet been read will behave as IsolationLevel.READ_COMMITTED irrespective of the configured isolation level. However, values that have previously been modified or read and that are already in the context will be adhered to, e.g. any write modification or any previous read when using IsolationLevel.REPEATABLE_READ.

      This method should only be used for debugging purposes such as to verify that the cache contains all the keys entered. Any other use involving execution of this method on a production system is not recommended.
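
      For example, a cheaper, node-local count can be obtained by decorating the cache with flags via AdvancedCache.withFlags (a sketch):

         int localInMemorySize = cache.getAdvancedCache()
               .withFlags(Flag.SKIP_CACHE_LOAD, Flag.CACHE_MODE_LOCAL)
               .size();   // counts only entries held in memory on this node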

      Specified by:
      size in interface Map<K,V>
      Returns:
      the number of key-value mappings in this cache and cache loader across the entire cluster.
    • keySet

      CacheSet<K> keySet()
      Returns a set view of the keys contained in this cache and cache loader across the entire cluster. Modifications and changes to the cache will be reflected in the set and vice versa. When this method is called nothing is actually queried, as the backing set is just returned; the various operations are only executed when invoked on the set itself.

      Unsupported Operations

      Care should be taken when invoking the Set.toArray(), Set.toArray(Object[]), Set.size(), Set.retainAll(Collection) and Set.iterator() methods, as they will traverse the entire contents of the cluster including a configured CacheLoader and remote entries. The first two methods in particular have a very high likelihood of causing an OutOfMemoryError because they store all the keys in the entire cluster in an array. Executing these methods on a production system is not recommended, as they can be quite expensive operations.

      Supported Flags

      Note that any flag configured for the cache will also be passed along to the backing set when it is created. Flags configured on the cache afterwards will not affect any existing backing sets.

      If there are performance concerns, the Flag.SKIP_CACHE_LOAD flag should be used to avoid hitting the cache store, which would otherwise cause all entries there to be read in (albeit in a batched form to prevent OutOfMemoryError).

      Also, if you want the local contents only, you can use the Flag.CACHE_MODE_LOCAL flag so that other remote nodes are not queried for data. However, the loader will still be used unless the previously mentioned Flag.SKIP_CACHE_LOAD is also configured.

      Iterator Use

      This set implements the CloseableIteratorSet interface, which creates a CloseableIterator instead of a regular one. This means the iterator must be explicitly closed, either through try-with-resources or by calling the close method directly. Technically the iterator will also close itself if you iterate over it fully, but it is safest to always close it explicitly.
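
      For example (a sketch, assuming a Cache<String, String>):

         try (CloseableIterator<String> it = cache.keySet().iterator()) {
            while (it.hasNext()) {
               String key = it.next();
               // process the key; entries are fetched in batches rather than all at once
            }
         }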

      Unsupported Operations

      Because null values cannot be added, the following methods are not supported and will throw UnsupportedOperationException if invoked: Set.add(Object), Set.addAll(java.util.Collection).
      Specified by:
      keySet in interface Map<K,V>
      Returns:
      a set view of the keys contained in this cache and cache loader across the entire cluster.
    • values

      CacheCollection<V> values()
      Returns a collection view of the values contained in this cache and cache loader across the entire cluster. Modifications and changes to the cache will be reflected in the collection and vice versa. When this method is called nothing is actually queried, as the backing collection is just returned; the various operations are only executed when invoked on the collection itself.

      Care should be taken when invoking the Collection.toArray(), Collection.toArray(Object[]), Collection.size(), Collection.retainAll(Collection) and Collection.iterator() methods, as they will traverse the entire contents of the cluster including a configured CacheLoader and remote entries. The first two methods in particular have a very high likelihood of causing an OutOfMemoryError because they store all the values in the entire cluster in an array. Executing these methods on a production system is not recommended, as they can be quite expensive operations.

      Supported Flags

      Note that any flag configured for the cache will also be passed along to the backing collection when it is created. Flags configured on the cache afterwards will not affect any existing backing collections.

      If there are performance concerns, the Flag.SKIP_CACHE_LOAD flag should be used to avoid hitting the cache store, which would otherwise cause all entries there to be read in (albeit in a batched form to prevent OutOfMemoryError).

      Also, if you want the local contents only, you can use the Flag.CACHE_MODE_LOCAL flag so that other remote nodes are not queried for data. However, the loader will still be used unless the previously mentioned Flag.SKIP_CACHE_LOAD is also configured.

      Iterator Use

      This collection implements the CloseableIteratorCollection interface, which creates a CloseableIterator instead of a regular one. This means the iterator must be explicitly closed, either through try-with-resources or by calling the close method directly. Technically the iterator will also close itself if you iterate over it fully, but it is safest to always close it explicitly.

      The iterator retrieved using CloseableIteratorCollection.iterator() supports the remove method, however the iterator retrieved from CacheStream.iterator() will not support remove.

      Unsupported Operations

      Because null values cannot be added, the following methods are not supported and will throw UnsupportedOperationException if invoked: Collection.add(Object), Collection.addAll(java.util.Collection).
      Specified by:
      values in interface Map<K,V>
      Returns:
      a collection view of the values contained in this cache and cache loader across the entire cluster.
    • entrySet

      CacheSet<Map.Entry<K,V>> entrySet()
      Returns a set view of the mappings contained in this cache and cache loader across the entire cluster. Modifications and changes to the cache will be reflected in the set and vice versa. When this method is called nothing is actually queried, as the backing set is just returned; the various operations are only executed when invoked on the set itself.

      Care should be taken when invoking the Set.toArray(), Set.toArray(Object[]), Set.size(), Set.retainAll(Collection) and Set.iterator() methods, as they will traverse the entire contents of the cluster including a configured CacheLoader and remote entries. The first two methods in particular have a very high likelihood of causing an OutOfMemoryError because they store all the entries in the entire cluster in an array. Executing these methods on a production system is not recommended, as they can be quite expensive operations.

      Supported Flags

      Note that any flag configured for the cache will also be passed along to the backing set when it is created. Flags configured on the cache afterwards will not affect any existing backing sets.

      If there are performance concerns, the Flag.SKIP_CACHE_LOAD flag should be used to avoid hitting the cache store, which would otherwise cause all entries there to be read in (albeit in a batched form to prevent OutOfMemoryError).

      Also, if you want the local contents only, you can use the Flag.CACHE_MODE_LOCAL flag so that other remote nodes are not queried for data. However, the loader will still be used unless the previously mentioned Flag.SKIP_CACHE_LOAD is also configured.

      Modifying or Adding Entries

      An entry's value can be modified using Map.Entry.setValue(Object), and doing so will update the cache as well. This backing set also allows addition of new Map.Entry instances via the Set.add(Object) or Set.addAll(java.util.Collection) methods.
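
      For example (a sketch, assuming a Cache<String, Integer>):

         for (Map.Entry<String, Integer> entry : cache.entrySet()) {
            if (entry.getValue() < 0) {
               entry.setValue(0);   // writes the replacement value back into the cache
            }
         }
         // iterating fully also closes the underlying CloseableIterator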

      Iterator Use

      This set implements the CloseableIteratorSet interface, which creates a CloseableIterator instead of a regular one. This means the iterator must be explicitly closed, either through try-with-resources or by calling the close method directly. Technically the iterator will also close itself if you iterate over it fully, but it is safest to always close it explicitly.
      Specified by:
      entrySet in interface Map<K,V>
      Returns:
      a set view of the mappings contained in this cache and cache loader across the entire cluster.
    • clear

      void clear()
      Removes all mappings from the cache.

      Note: This should never be invoked in production unless you can guarantee that no other invocations are run concurrently.

      If the cache is transactional, it will not interact with the transaction.

      Specified by:
      clear in interface Map<K,V>
    • stop

      void stop()
      Stops a cache. If the cache is clustered, this only stops the cache on the node where it is being invoked. If you need to stop the cache across a cluster, use the shutdown() method.
      Specified by:
      stop in interface Lifecycle
    • shutdown

      default void shutdown()
      Performs a controlled, clustered shutdown of the cache. This method differs from stop() only in clustered modes, and only when GlobalStateConfiguration.enabled() is true; otherwise it just behaves like stop().
    • computeIfAbsent

      V computeIfAbsent(K key, Function<? super K,? extends V> mappingFunction)

      When this method is used on a clustered cache, either replicated or distributed, the function will be serialized to owning nodes to perform the operation in the most performant way. However this means the function must have an appropriate Externalizer or be Serializable itself.

      For transactional caches, whenever the values of the cache are collections and the mapping function modifies the collection, the collection must be copied rather than modified directly; otherwise, rollback will not work correctly. This limitation may be removed in future releases if technically possible.
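
      For example, passing a lambda makes the compiler pick the SerializableFunction overloads below, so the function can be shipped to the owning nodes (a sketch, assuming a Cache<String, String> and a hypothetical loadFromDatabase helper):

         String value = cache.computeIfAbsent("user:42",
               key -> loadFromDatabase(key));   // only invoked, and only stored, if "user:42" is absent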

      Specified by:
      computeIfAbsent in interface ConcurrentMap<K,V>
      Specified by:
      computeIfAbsent in interface Map<K,V>
    • computeIfAbsent

      default V computeIfAbsent(K key, SerializableFunction<? super K,? extends V> mappingFunction)
      Overloaded computeIfAbsent(Object, Function) with Infinispan SerializableFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      mappingFunction - mapping function to be applied to the key
      Returns:
      computed value or null if nothing is computed or computation value is null
    • computeIfAbsent

      default V computeIfAbsent(K key, SerializableFunction<? super K,? extends V> mappingFunction, long lifespan, TimeUnit lifespanUnit)
      Overloaded BasicCache.computeIfAbsent(Object, Function, long, TimeUnit) with Infinispan SerializableFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      mappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      Returns:
      computed value or null if nothing is computed or computation value is null
    • computeIfAbsent

      default V computeIfAbsent(K key, SerializableFunction<? super K,? extends V> mappingFunction, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit)
      Overloaded BasicCache.computeIfAbsent(Object, Function, long, TimeUnit, long, TimeUnit) with Infinispan SerializableFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      mappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdleTime - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleTimeUnit - time unit for max idle time
      Returns:
      computed value or null if nothing is computed or computation value is null
    • computeIfAbsentAsync

      default CompletableFuture<V> computeIfAbsentAsync(K key, SerializableFunction<? super K,? extends V> mappingFunction)
      Overloaded AsyncCache.computeIfAbsentAsync(Object, Function) with Infinispan SerializableFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      mappingFunction - mapping function to be applied to the key
      Returns:
      computed value or null if nothing is computed or computation value is null
    • computeIfAbsentAsync

      default CompletableFuture<V> computeIfAbsentAsync(K key, SerializableFunction<? super K,? extends V> mappingFunction, long lifespan, TimeUnit lifespanUnit)
      Overloaded AsyncCache.computeIfAbsentAsync(Object, Function, long, TimeUnit) with Infinispan SerializableFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      mappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      Returns:
      computed value or null if nothing is computed or computation value is null
    • computeIfAbsentAsync

      default CompletableFuture<V> computeIfAbsentAsync(K key, SerializableFunction<? super K,? extends V> mappingFunction, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit)
      Overloaded AsyncCache.computeIfAbsentAsync(Object, Function, long, TimeUnit, long, TimeUnit) with Infinispan SerializableFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      mappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdleTime - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleTimeUnit - time unit for max idle time
      Returns:
      computed value or null if nothing is computed or computation value is null
    • computeIfPresent

      V computeIfPresent(K key, BiFunction<? super K,? super V,? extends V> remappingFunction)

      When this method is used on a clustered cache, either replicated or distributed, the bifunction will be serialized to owning nodes to perform the operation in the most performant way. However this means the bifunction must have an appropriate Externalizer or be Serializable itself.

      For transactional caches, whenever the values of the cache are collections and the mapping function modifies the collection, the collection must be copied rather than modified directly; otherwise, rollback will not work correctly. This limitation may be removed in future releases if technically possible.
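
      For example, with collection values the remapping function should copy before mutating (a sketch, assuming a transactional Cache<String, List<String>>):

         cache.computeIfPresent("tags", (key, list) -> {
            List<String> copy = new ArrayList<>(list);   // copy so that a transaction rollback can restore the original
            copy.add("new-tag");
            return copy;
         });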

      Specified by:
      computeIfPresent in interface ConcurrentMap<K,V>
      Specified by:
      computeIfPresent in interface Map<K,V>
    • computeIfPresent

      default V computeIfPresent(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction)
      Overloaded computeIfPresent(Object, BiFunction) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable.
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      Returns:
      computed value or null if nothing is computed or computation result is null
    • computeIfPresentAsync

      default CompletableFuture<V> computeIfPresentAsync(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction)
      Overloaded AsyncCache.computeIfPresentAsync(Object, BiFunction) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable.
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      Returns:
      computed value or null if nothing is computed or computation result is null
    • compute

      V compute(K key, BiFunction<? super K,? super V,? extends V> remappingFunction)

      When this method is used on a clustered cache, either replicated or distributed, the bifunction will be serialized to owning nodes to perform the operation in the most performant way. However this means the bifunction must have an appropriate Externalizer or be Serializable itself.

      For transactional caches, whenever the values of the cache are collections and the mapping function modifies the collection, the collection must be copied rather than modified directly; otherwise, rollback will not work correctly. This limitation may be removed in future releases if technically possible.

      Specified by:
      compute in interface ConcurrentMap<K,V>
      Specified by:
      compute in interface Map<K,V>
    • compute

      default V compute(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction)
      Overloaded compute(Object, BiFunction) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      Returns:
      computation result (can be null)
    • compute

      default V compute(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit)
      Overloaded BasicCache.compute(Object, BiFunction, long, TimeUnit) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      Returns:
      computation result (can be null)
    • compute

      default V compute(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit)
      Overloaded BasicCache.compute(Object, BiFunction, long, TimeUnit, long, TimeUnit) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdleTime - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleTimeUnit - time unit for max idle time
      Returns:
      computation result (can be null)
    • computeAsync

      default CompletableFuture<V> computeAsync(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction)
      Overloaded AsyncCache.computeAsync(Object, BiFunction) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      Returns:
      computation result (can be null)
    • computeAsync

      default CompletableFuture<V> computeAsync(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit)
      Overloaded AsyncCache.computeAsync(Object, BiFunction, long, TimeUnit) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      Returns:
      computation result (can be null)
    • computeAsync

      default CompletableFuture<V> computeAsync(K key, SerializableBiFunction<? super K,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit)
      Overloaded AsyncCache.computeAsync(Object, BiFunction, long, TimeUnit, long, TimeUnit) with Infinispan SerializableBiFunction. The compiler will pick this overload for lambda parameters, making them Serializable
      Parameters:
      key - the key to be computed
      remappingFunction - mapping function to be applied to the key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdleTime - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleTimeUnit - time unit for max idle time
      Returns:
      computation result (can be null)
    • merge

      V merge(K key, V value, BiFunction<? super V,? super V,? extends V> remappingFunction)

      When this method is used on a clustered cache, either replicated or distributed, the bifunction will be serialized to owning nodes to perform the operation in the most performant way. However this means the bifunction must have an appropriate Externalizer or be Serializable itself.

      For transactional caches, whenever the values of the cache are collections and the mapping function modifies the collection, the collection must be copied rather than modified directly; otherwise, rollback will not work correctly. This limitation may be removed in future releases if technically possible.
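
      For example, merge can maintain a counter; a lambda resolves to the SerializableBiFunction overloads below, which is what a clustered cache needs (a sketch, assuming a Cache<String, Integer>):

         cache.merge("page-hits", 1, (oldValue, increment) -> oldValue + increment);
         // stores 1 if "page-hits" is absent, otherwise adds 1 to the current count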

      Specified by:
      merge in interface ConcurrentMap<K,V>
      Specified by:
      merge in interface Map<K,V>
    • merge

      default V merge(K key, V value, SerializableBiFunction<? super V,? super V,? extends V> remappingFunction)
      Overloaded merge(Object, Object, BiFunction) with Infinispan SerializableBiFunction.

      The compiler will pick this overload for lambda parameters, making them Serializable.

      Parameters:
      key - key with which the resulting value is to be associated
      value - the non-null value to be merged with the existing value associated with the key or, if no existing value or a null value is associated with the key, to be associated with the key
      remappingFunction - the function to recompute a value if present
    • merge

      default V merge(K key, V value, SerializableBiFunction<? super V,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit)
      Parameters:
      key - key to use
      value - new value to merge with existing value
      remappingFunction - function to use to merge new and existing values into a merged value to store under key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      Returns:
      the merged value that was stored under key
      Since:
      9.4
    • merge

      default V merge(K key, V value, SerializableBiFunction<? super V,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit)
      Parameters:
      key - key to use
      value - new value to merge with existing value
      remappingFunction - function to use to merge new and existing values into a merged value to store under key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdleTime - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleTimeUnit - time unit for max idle time
      Returns:
      the merged value that was stored under key
      Since:
      9.4
    • mergeAsync

      default CompletableFuture<V> mergeAsync(K key, V value, SerializableBiFunction<? super V,? super V,? extends V> remappingFunction)
      Parameters:
      key - key to use
      value - new value to merge with existing value
      remappingFunction - function to use to merge new and existing values into a merged value to store under key
      Returns:
      the merged value that was stored under key
      Since:
      10.0
    • mergeAsync

      default CompletableFuture<V> mergeAsync(K key, V value, SerializableBiFunction<? super V,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit)
      Parameters:
      key - key to use
      value - new value to merge with existing value
      remappingFunction - function to use to merge new and existing values into a merged value to store under key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      Returns:
      the merged value that was stored under key
      Since:
      10.0
    • mergeAsync

      default CompletableFuture<V> mergeAsync(K key, V value, SerializableBiFunction<? super V,? super V,? extends V> remappingFunction, long lifespan, TimeUnit lifespanUnit, long maxIdleTime, TimeUnit maxIdleTimeUnit)
      Parameters:
      key - key to use
      value - new value to merge with existing value
      remappingFunction - function to use to merge new and existing values into a merged value to store under key
      lifespan - lifespan of the entry. Negative values are interpreted as unlimited lifespan.
      lifespanUnit - time unit for lifespan
      maxIdleTime - the maximum amount of time this key is allowed to be idle for before it is considered as expired
      maxIdleTimeUnit - time unit for max idle time
      Returns:
      the merged value that was stored under key
      Since:
      10.0