- All Implemented Interfaces:
public class SoftIndexFileStore extends java.lang.Object implements AdvancedLoadWriteStore

Local file-based cache store, optimized for write-through use with strong consistency guarantees (the ability to flush disk operations before returning from the store call).

DESIGN: There are three threads operating in the cache store:
- LogAppender: Requests to store entries are passed to the LogAppender thread via a queue; the requestor threads then wait until the LogAppender notifies them about a successful store. The LogAppender serializes the writes into an append-only file, writes the offset into the TemporaryTable and enqueues a request to update the index into the UpdateQueue. The append-only files have a limited size; when a file is full, a new file is started.
- IndexUpdater: Reads the UpdateQueue, applies the operation to a B-tree-like structure, the Index (exact description below), and then removes the entry from the TemporaryTable. When an Index entry is overwritten, the current entry offset is retrieved and the IndexUpdater increases the unused-space statistics in FileStats.
- Compactor: When the limit of unused space in some file is reached (according to FileStats), the Compactor starts reading this file sequentially, querying the TemporaryTable or Index for the current entry position and copying the unchanged entries into another file. For the entries that are still valid in the original file, a compare-and-set (file-offset based) request is enqueued into the UpdateQueue; therefore this operation cannot interfere with concurrent writes overwriting the entry. Multiple files can be merged into a single file during compaction.

Structures:
- TemporaryTable: Keeps the records about the current entry location until they are applied to the Index. Each read request goes to the TemporaryTable first; if the key is not found there, the Index is queried.
- UpdateQueue: A bounded queue (to prevent the TemporaryTable from growing too much) of either forced writes (used for regular stores) or compare-and-set writes (used by the Compactor).
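The LogAppender hand-off described above can be sketched as follows. This is a hypothetical, simplified model, not the actual Infinispan implementation: writer threads enqueue store requests on a bounded queue and block until the single appender thread has serialized the entry into the current append-only log and recorded its offset in a temporary table. All class and field names here are illustrative.

```java
import java.io.ByteArrayOutputStream;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class LogAppenderSketch {
    static final class StoreRequest {
        final String key;
        final byte[] value;
        final CountDownLatch done = new CountDownLatch(1);
        StoreRequest(String key, byte[] value) { this.key = key; this.value = value; }
    }

    // Bounded queue: back-pressure keeps the temporary table from growing without limit.
    private final BlockingQueue<StoreRequest> queue = new ArrayBlockingQueue<>(128);
    // TemporaryTable analogue: key -> offset of the latest record in the log.
    final Map<String, Integer> temporaryTable = new ConcurrentHashMap<>();
    // Stand-in for the append-only file.
    private final ByteArrayOutputStream log = new ByteArrayOutputStream();

    private final Thread appender = new Thread(() -> {
        try {
            while (true) {
                StoreRequest r = queue.take();
                int offset = log.size();            // record start offset in the log
                log.write(r.value, 0, r.value.length); // append-only write
                temporaryTable.put(r.key, offset);  // visible to readers before the index update
                r.done.countDown();                 // wake the waiting requestor
            }
        } catch (InterruptedException e) { /* shutdown */ }
    });

    public LogAppenderSketch() {
        appender.setDaemon(true);
        appender.start();
    }

    /** Blocks until the appender has persisted the entry, mimicking write-through semantics. */
    public int store(String key, byte[] value) throws InterruptedException {
        StoreRequest r = new StoreRequest(key, value);
        queue.put(r);
        r.done.await();
        return temporaryTable.get(key);
    }

    public static void main(String[] args) throws Exception {
        LogAppenderSketch s = new LogAppenderSketch();
        System.out.println(s.store("k1", new byte[]{1, 2, 3})); // offset 0
        System.out.println(s.store("k2", new byte[]{4, 5}));    // offset 3
    }
}
```

The real store additionally enqueues an index-update request per write; the essential point shown here is that the requestor does not return until the single appender thread has completed the disk operation.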
- FileStats: A simple (Concurrent)HashTable with the actual file size and the amount of unused space for each file.
- Index: A B+-tree of IndexNodes. The tree is dropped and built anew if the process crashes, so it does not need to flush disk operations. On disk it is kept as a single randomly accessed file, with the free-blocks list stored in memory. As the IndexUpdater may easily become a bottleneck under heavy load, the IndexUpdater thread, the UpdateQueue and the tree of IndexNodes may be multiplied several times: the Index is divided into Segments, and each segment owns keys according to the hashCode() of the key.

The number of entries in an IndexNode is limited by the size it occupies on disk. This size is limited by the configurable nodeSize (4096 bytes by default); only when the node contains a single (too long) pivot can it be longer. A key_prefix common to all keys in the IndexNode is stored in order to reduce space requirements. For implementation reasons the keys are limited to 32kB; this requirement may be circumvented later.

The pivots are not whole keys: each is the shortest part of a key that is greater than all left children (but lesser than or equal to all right children); let us call this a key_part. The key_parts are naturally sorted in the IndexNode. On disk a node has this format:

key_prefix_length (2 bytes), key_prefix, num_parts (2 bytes), ( key_part_length (2 bytes), key_part, left_child_index_node_offset (8 bytes) )+, right_child_index_node_offset (8 bytes)

In memory, a SoftReference is held for every child. When this reference is empty (but the offset in the file is set), any reader may load the reference using the double-checked locking pattern (synchronized over the reference itself). A node is never loaded by multiple threads in parallel, although loading may block other threads trying to read this node.

For each node in memory an RW-lock is held. When the IndexUpdater thread updates the Index (modifying some IndexNodes), it prepares a copy of these nodes (already stored into the index file). Then it write-locks only the uppermost node, overwrites the references to the new data and unlocks this node. After that the changed nodes are traversed from top down, write-locked, and their records in the index file are released. Reader threads crawl the tree from the top down, locking the parent node (for reading), locking the child node and then unlocking the parent node.
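The on-disk node layout described above can be sketched with java.nio.ByteBuffer. The encoder/decoder below is purely illustrative of the documented byte format; the class and method names are invented for this sketch and the real implementation differs (e.g. it compares raw key bytes rather than Strings).

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class IndexNodeFormat {
    /** Encodes one node; childOffsets has one more element than keyParts (left children plus the rightmost child). */
    public static byte[] encode(String keyPrefix, String[] keyParts, long[] childOffsets) {
        ByteBuffer buf = ByteBuffer.allocate(4096);          // the default nodeSize
        byte[] prefix = keyPrefix.getBytes(StandardCharsets.UTF_8);
        buf.putShort((short) prefix.length);                 // key_prefix_length (2 bytes)
        buf.put(prefix);                                     // key_prefix
        buf.putShort((short) keyParts.length);               // num_parts (2 bytes)
        for (int i = 0; i < keyParts.length; i++) {
            byte[] part = keyParts[i].getBytes(StandardCharsets.UTF_8);
            buf.putShort((short) part.length);               // key_part_length (2 bytes)
            buf.put(part);                                   // key_part
            buf.putLong(childOffsets[i]);                    // left_child_index_node_offset (8 bytes)
        }
        buf.putLong(childOffsets[keyParts.length]);          // right_child_index_node_offset (8 bytes)
        byte[] out = new byte[buf.position()];
        buf.flip();
        buf.get(out);
        return out;
    }

    /** Returns the child offset to follow for a key (with the common prefix already stripped). */
    public static long childFor(byte[] node, String keySuffix) {
        ByteBuffer buf = ByteBuffer.wrap(node);
        byte[] prefix = new byte[buf.getShort()];
        buf.get(prefix);                                     // skip key_prefix
        int numParts = buf.getShort();
        for (int i = 0; i < numParts; i++) {
            byte[] part = new byte[buf.getShort()];
            buf.get(part);
            long left = buf.getLong();
            // Pivot rule: keys strictly smaller than the key_part belong to the left child.
            if (keySuffix.compareTo(new String(part, StandardCharsets.UTF_8)) < 0) return left;
        }
        return buf.getLong();                                // rightmost child
    }

    public static void main(String[] args) {
        byte[] node = encode("user:", new String[]{"m"}, new long[]{100L, 200L});
        System.out.println(childFor(node, "alice")); // left child: 100
        System.out.println(childFor(node, "zed"));   // rightmost child: 200
    }
}
```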
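The double-checked child loading can be sketched like this. Note the simplifications: the real design synchronizes over the SoftReference itself, whereas this hypothetical ChildSlot class locks the slot holding the reference, and a String stands in for the deserialized IndexNode.

```java
import java.lang.ref.SoftReference;
import java.util.concurrent.atomic.AtomicInteger;

public class ChildSlot {
    private final long fileOffset;               // where the child node lives in the index file
    private volatile SoftReference<String> ref;  // stands in for SoftReference<IndexNode>
    static final AtomicInteger loads = new AtomicInteger(); // counts real loads, for the demo

    public ChildSlot(long fileOffset) {
        this.fileOffset = fileOffset;
        this.ref = new SoftReference<>(null);    // nothing cached yet; offset is known
    }

    public String get() {
        String node = ref.get();                 // first, unlocked check
        if (node == null) {
            synchronized (this) {                // lock only this slot, not the whole tree
                node = ref.get();                // second check under the lock
                if (node == null) {
                    node = loadFromFile();       // only one thread ever deserializes the node
                    ref = new SoftReference<>(node);
                }
            }
        }
        return node;
    }

    private String loadFromFile() {
        loads.incrementAndGet();
        return "node@" + fileOffset;             // pretend we read and deserialized from disk
    }

    public static void main(String[] args) {
        ChildSlot slot = new ChildSlot(4096);
        System.out.println(slot.get());          // loads from the "file"
        System.out.println(slot.get());          // served from the soft reference
        System.out.println(loads.get());         // 1
    }
}
```

The SoftReference lets the garbage collector evict cached nodes under memory pressure; the recorded file offset means an evicted node can always be reloaded on the next read.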
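The reader's hand-over-hand ("lock coupling") traversal can be sketched as below: the reader holds the parent's read lock while acquiring the child's, then releases the parent, so a top-down writer can never overtake it on the same path. A single-child chain stands in for a real B+-tree here, and the names are illustrative.

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockCoupling {
    static final class Node {
        final String name;
        final ReadWriteLock lock = new ReentrantReadWriteLock();
        Node child;
        Node(String name) { this.name = name; }
    }

    /** Walks to the deepest node, read-locking each child before unlocking its parent. */
    static String descend(Node root) {
        root.lock.readLock().lock();
        Node current = root;
        while (current.child != null) {
            Node next = current.child;
            next.lock.readLock().lock();      // take the child's lock first...
            current.lock.readLock().unlock(); // ...then release the parent's
            current = next;
        }
        String leaf = current.name;
        current.lock.readLock().unlock();
        return leaf;
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        root.child = new Node("inner");
        root.child.child = new Node("leaf");
        System.out.println(descend(root)); // leaf
    }
}
```

This is why the updater only needs to write-lock the uppermost changed node before swapping in the new references: any reader either sees the old subtree or the new one, never a mix along its locked path.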
Author: Radim Vansa <firstname.lastname@example.org>
Nested Class Summary
Nested classes/interfaces inherited from interface org.infinispan.persistence.spi.AdvancedCacheLoader
Constructor Summary
Method Summary
clear() - Removes all the data from the storage.
contains(java.lang.Object key) - Returns true if the storage contains an entry associated with the given key.
debugInfo(java.lang.Object key) - This method should be called by reflection to get more info about a missing/invalid key (from test tools).
destroy() - Method to be used to destroy and clean up any resources associated with this store.
init(InitializationContext ctx) - Used to initialize a cache loader.
isSeqIdOld(long seqId, java.lang.Object key, byte[] serializedKey)
load(java.lang.Object key) - Fetches an entry from the storage.
publishEntries(java.util.function.Predicate filter, boolean fetchValue, boolean fetchMetadata) - Publishes all entries from this store.
publishKeys(java.util.function.Predicate filter) - Publishes all the keys from this store.
purge(java.util.concurrent.Executor threadPool, AdvancedCacheWriter.PurgeListener listener) - Using the thread in the pool, removes all the expired data from the persistence storage.
size() - Returns the number of elements in the store.
start() - Invoked on component start.
stop() - Invoked on component stop.
write(MarshalledEntry entry) - Persists the entry to the storage.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public void init(InitializationContext ctx) - Used to initialize a cache loader. Typically invoked by the PersistenceManager when setting up cache loaders.
public void start() - Description copied from interface: Lifecycle. Invoked on component start.
protected boolean isSeqIdOld(long seqId, java.lang.Object key, byte[] serializedKey) throws java.io.IOException
protected void startIndex()
protected boolean isIndexLoaded()
public void stop() - Description copied from interface: Lifecycle. Invoked on component stop.
public void destroy() - Description copied from interface: ExternalStore. Method to be used to destroy and clean up any resources associated with this store. This is normally only useful for non-shared stores.
This method will ensure the store is stopped and properly cleans up all resources for it.
public boolean isAvailable()
public void clear() throws PersistenceException - Description copied from interface: AdvancedCacheWriter. Removes all the data from the storage.
public int size() - Returns the number of elements in the store.
public void purge(java.util.concurrent.Executor threadPool, AdvancedCacheWriter.PurgeListener listener) - Description copied from interface: AdvancedCacheWriter. Using the thread in the pool, removes all the expired data from the persistence storage. For each removed entry, the supplied listener is invoked.
When this method returns, all entries will have been purged and no tasks will be running due to this loader in the provided executor. If, however, an exception is thrown, there could be tasks still pending or running in the executor.
public void write(MarshalledEntry entry) - Description copied from interface: CacheWriter. Persists the entry to the storage.
public boolean delete(java.lang.Object key)
public boolean contains(java.lang.Object key) - Returns true if the storage contains an entry associated with the given key.
public MarshalledEntry load(java.lang.Object key) - Fetches an entry from the storage. If a MarshalledEntry needs to be created here, InitializationContext.getByteBufferFactory() should be used.
public java.lang.String debugInfo(java.lang.Object key) - This method should be called by reflection to get more info about a missing/invalid key (from test tools).
public org.reactivestreams.Publisher publishKeys(java.util.function.Predicate filter) - Publishes all the keys from this store. The given publisher can be used by as many Subscribers as desired. Keys are not retrieved until a given Subscriber requests them from the Subscription.
Stores will return only non-expired keys.
public org.reactivestreams.Publisher<MarshalledEntry> publishEntries(java.util.function.Predicate filter, boolean fetchValue, boolean fetchMetadata) - Publishes all entries from this store. The given publisher can be used by as many Subscribers as desired. Entries are not retrieved until a given Subscriber requests them from the Subscription.
If fetchMetadata is true this store must guarantee to not return any expired entries.
Specified by:
publishEntries in interface AdvancedCacheLoader
Parameters:
filter - a filter; null is treated as allowing all entries
fetchValue - whether or not to fetch the value from the persistent store. E.g. if the iteration is intended only over the key set, there is no point fetching the values from the persistent store as well.
fetchMetadata - whether or not to fetch the metadata from the persistent store. E.g. if the iteration is intended only over the key set, there is no point fetching the metadata from the persistent store as well.
Returns:
a publisher that will provide the entries from the store