This chapter focuses on some of the core concepts underlying how the JBoss Cache-based implementation of the Hibernate Second Level Cache works. There's a fair amount of detail, which certainly doesn't all need to be mastered to use JBoss Cache with Hibernate. But, an understanding of some of the basic concepts here will help a user understand what some of the typical configurations discussed in the next chapter are all about.
If you want to skip the details for now, feel free to jump ahead to Section 2.3.4, “Bringing It All Together”.
The Second Level Cache can cache four different types of data: entities,
collections, query results and timestamps. Proper handling of each
of the types requires slightly different caching semantics. A major
improvement in Hibernate 3.3 is the addition of the
org.hibernate.cache.RegionFactory API, which
allows Hibernate to tell the caching integration layer what type
of data is being cached. Based on that knowledge, the cache integration
layer can apply the semantics appropriate to that type.
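For example, the caching integration layer is selected through a SessionFactory property. The factory class name below comes from the Hibernate 3.3 JBoss Cache integration module; verify the exact class against your distribution, as variants of the factory exist:

```properties
# Illustrative SessionFactory configuration properties.
# The factory_class value should be checked against your
# Hibernate/JBoss Cache distribution.
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.jbc2.JBossCacheRegionFactory
```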
Entities are the most common type of data cached in the second level cache. Entity caching requires the following semantics in a clustered cache:
Newly created entities should only be stored on the node on which they are created until the transaction in which they were created commits. Until that transaction commits, the cached entity should only be visible to that transaction. After the transaction commits, cluster-wide the cache should be in a "consistent" state. The cache is consistent if on any node in the cluster, the new entity is either:
stored in the cache, with all non-collection fields matching what is in the database.
not stored in the cache at all.
Maintaining cache consistency basically requires that the cluster-wide update messages that inform other nodes of the changes made during a transaction be made synchronously as part of the transaction commit process. This means that the transaction thread will block until the changes have been transmitted to all nodes in the cluster, those nodes have updated their internal state to reflect the changes, and have responded to the originating node telling them of their success (or failure) in doing so. JBoss Cache uses a 2 phase commit protocol, so there will actually be 2 synchronous cluster-wide messages per transaction commit. If any node in the cluster fails in the initial prepare phase of the 2PC, the underlying transaction will be rolled back and in the second phase JBoss Cache will tell the other nodes in the cluster to revert the change.
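The commit flow above can be sketched as follows. The `Node` interface and `TwoPhaseCommit` class are hypothetical stand-ins for JBoss Cache internals, not actual APIs; the sketch only illustrates the prepare/commit/rollback sequence described above.

```java
import java.util.List;

// "Node" is an illustrative stand-in for a cluster member, not a JBoss Cache API.
interface Node {
    boolean prepare(String txId);   // phase 1: apply changes tentatively
    void commit(String txId);       // phase 2: make changes visible
    void rollback(String txId);     // revert tentative changes
}

public class TwoPhaseCommit {
    /** Returns true if the transaction committed cluster-wide. */
    static boolean run(List<Node> cluster, String txId) {
        // Phase 1: synchronous prepare; any failure aborts everywhere.
        for (Node n : cluster) {
            if (!n.prepare(txId)) {
                for (Node m : cluster) m.rollback(txId);
                return false;
            }
        }
        // Phase 2: every node prepared successfully, so commit everywhere.
        for (Node n : cluster) n.commit(txId);
        return true;
    }
}
```

The two synchronous cluster-wide messages per commit correspond to the two loops: one round of prepare calls, then one round of commit (or rollback) calls.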
For existing entities that are modified in the course of a transaction, the updated entity state should only be stored on the node on which the modification occurred until the transaction commits. Until that transaction commits, the changed entity state should only be visible to that transaction. After the transaction commits, cluster-wide the cache should be in a "consistent" state, as described above.
Concurrent cache updates of the same entity anywhere in the cluster should not be possible, as Hibernate will acquire an exclusive lock on the database representation of the entity before attempting to update the cache.
A read of a cached entity holds a transaction scope read lock on the relevant portion of cache. The presence of a read lock held by one transaction should not prevent a concurrent read by another transaction. Whether the presence of that read lock prevents a concurrent write depends on whether the cache is configured for READ_COMMITTED or REPEATABLE_READ semantics and whether the cache is using optimistic locking. READ_COMMITTED will allow a concurrent write to proceed; pessimistic locking with REPEATABLE_READ will cause the write to block until the transaction with the read lock commits. Optimistic locking allows a REPEATABLE_READ semantic without forcing the writing transaction to block.
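A rough illustration of the pessimistic REPEATABLE_READ behavior, using a standard read/write lock (JBoss Cache's own lock manager is more elaborate, and this single-thread probe only mimics a second transaction):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: under pessimistic REPEATABLE_READ, a transaction's
// read lock blocks writers until it commits, while concurrent readers proceed.
public class RepeatableReadDemo {
    /** Returns {concurrentReadAllowed, concurrentWriteAllowed} while a read lock is held. */
    static boolean[] probeWhileReadLocked() {
        ReentrantReadWriteLock entityLock = new ReentrantReadWriteLock();
        entityLock.readLock().lock();                        // tx1 reads the cached entity
        boolean readOk = entityLock.readLock().tryLock();    // "tx2" read attempt: allowed
        if (readOk) entityLock.readLock().unlock();
        boolean writeOk = entityLock.writeLock().tryLock();  // "tx2" write attempt: refused
        entityLock.readLock().unlock();                      // tx1 commits, releasing the lock
        return new boolean[] { readOk, writeOk };
    }
}
```

Under READ_COMMITTED semantics, or with optimistic locking, the write attempt would instead be allowed to proceed.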
A read of a cached entity does not result in any messages to other nodes in the cluster or any cluster-wide read locks.
The basic operation of storing an entity that has been directly read from the database should have a fail-fast semantic. This type of operation is referred to as a put and is the most common type of operation. Basically, the rules for handling new entities or entity updates discussed above mean the cache's representation of an entity should always match the database's. So, if a put attempt encounters any existing copy of the entity in the cache, it should assume that existing copy is either newer or the same as what it is trying to store, and the put attempt should promptly and silently abort, with no impact on any ongoing transactions.
A put operation should not acquire any long-lasting locks on the cache.
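The fail-fast semantic can be sketched with a plain concurrent map; the class and method names here are illustrative, not the actual integration-layer API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the fail-fast "put": if any copy of the entity is already
// cached, silently do nothing and leave the existing copy in place.
public class FailFastEntityCache {
    private final ConcurrentMap<Object, Object> region = new ConcurrentHashMap<>();

    /** Returns true if the value was stored, false if it silently aborted. */
    public boolean putFromLoad(Object id, Object entityState) {
        // putIfAbsent never replaces an existing entry and takes no
        // long-lasting lock, matching the fail-fast requirement.
        return region.putIfAbsent(id, entityState) == null;
    }

    public Object get(Object id) {
        return region.get(id);
    }
}
```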
If the cache is configured to use replication, the replication of a put should occur immediately, without waiting for transaction commit and without the calling thread needing to block waiting for responses from the other nodes in the cluster. This is a "fire-and-forget" semantic that JBoss Cache refers to as asynchronous replication.
When other nodes receive a replicated put, they use the same fail-fast semantics as a local put -- i.e. they promptly and silently abort if the entity is already cached.
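The fire-and-forget behavior can be sketched with an executor: the caller hands off the update and returns immediately, never waiting on peer responses. Names here are illustrative, not JBoss Cache APIs:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of asynchronous replication: submit the update to a background
// sender and return without blocking on acknowledgments from the cluster.
public class AsyncReplicator {
    private final ExecutorService sender = Executors.newSingleThreadExecutor();
    private final List<String> delivered = new CopyOnWriteArrayList<>(); // stands in for peer caches

    public void replicatePut(String message) {
        // Fire-and-forget: the calling thread does not wait for delivery.
        sender.submit(() -> delivered.add(message));
    }

    public List<String> delivered() {
        return delivered;
    }

    /** Drain the sender so delivery can be observed (test/illustration only). */
    public void shutdownAndWait() {
        sender.shutdown();
        try {
            sender.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```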
If the cache is configured to use invalidation, a put should not result in any cluster-wide message at all. The fact that one node in the cluster has cached an entity should not invalidate another node's cache of that same entity -- both caches are storing the same data.
A removal of an entity from the cache (i.e. to reflect a DELETE from the underlying database) is basically a special case of a modification; the removal should not be visible on other nodes or to other transactions until the transaction that did the remove commits. Cache consistency after commit means the removed entity is no longer in the cache on any node in the cluster.
Collection caching refers to the case where a cached entity has as one of its fields a collection of other entities. Hibernate handles this field specially in the second level cache; a special area in the cache is created where the primary keys of the entities in the collection are stored.
The management of collection caching is very similar to entity caching, with a few differences:
When a new entity is created that includes a collection, no attempt is made to insert the collection into the cache.
When a transaction updates the contents of a collection, no attempt is made to reflect the new contents of the collection in the cache. Instead, the existing collection is simply removed from the cache across the cluster, using the same semantics as an entity removal.
In essence, for collections Hibernate only supports cache reads and the put operation, with any modification of the collection resulting in cluster-wide invalidation of that collection from the cache. If the collection field is accessed again, a new read from the database will be done, followed by another cache put.
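The read/put/invalidate lifecycle can be sketched as follows; class and method names are illustrative, and only primary keys are stored, as described above:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of collection caching: the region stores only the primary keys of
// the collection's members; any update evicts the entry, and the next read
// repopulates it from the database via a fresh put.
public class CollectionCacheRegion {
    private final ConcurrentMap<Object, List<Long>> region = new ConcurrentHashMap<>();

    /** Fail-fast put after a database read of the collection. */
    public boolean putFromLoad(Object ownerId, List<Long> memberPks) {
        return region.putIfAbsent(ownerId, List.copyOf(memberPks)) == null;
    }

    /** Any modification of the collection simply evicts the cached key list. */
    public void onCollectionUpdate(Object ownerId) {
        region.remove(ownerId);
    }

    public List<Long> get(Object ownerId) {
        return region.get(ownerId);
    }
}
```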
Hibernate supports caching of query results in the second level cache. The text of the HQL query (along with any parameter values) is cached, together with the primary keys of all entities that comprise the result set.
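Conceptually, the cache key combines the query text and its bound parameters. Hibernate's real key class also folds in paging and other settings; this simplified sketch uses illustrative names:

```java
import java.util.List;
import java.util.Objects;

// Simplified sketch of a query-cache key: two keys are equal only if both
// the HQL text and every bound parameter value match.
public final class QueryCacheKey {
    private final String hql;
    private final List<Object> parameterValues;

    public QueryCacheKey(String hql, List<Object> parameterValues) {
        this.hql = hql;
        this.parameterValues = List.copyOf(parameterValues);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof QueryCacheKey)) return false;
        QueryCacheKey other = (QueryCacheKey) o;
        return hql.equals(other.hql) && parameterValues.equals(other.parameterValues);
    }

    @Override public int hashCode() {
        return Objects.hash(hql, parameterValues);
    }
}
```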
The semantics of query caching are significantly different
from those of entity caching. A database row that reflects an
entity's state can be locked, with cache updates applied with that
lock in place. The semantics of entity caching take advantage of
this fact to help ensure cache consistency across the cluster.
There is no clear database analogue to a query result set that can
be efficiently locked to ensure consistency in the cache. As a result,
the fail-fast semantics used with the entity caching put
operation are not available; instead query caching has semantics
akin to an entity insert, including costly synchronous cluster
updates and the JBoss Cache two phase commit protocol. Furthermore,
Hibernate must aggressively invalidate query results from the cache
any time any instance of one of the entity classes involved in the
query's WHERE clause changes. All such query results are invalidated,
even if the change made to the entity instance would not have affected
the query result. It is not performant for Hibernate to try to
determine if the entity change would have affected the query result,
so the safe choice is to invalidate the query. See
Section 2.1.4, “Timestamps” for more on how query results are invalidated.
The effect of all this is that query caching is less likely to provide a performance boost than entity/collection caching. Use it with care, and benchmark your application with it enabled and disabled. Be careful about replicating query results; caching them locally only on the node that executed the query will perform better unless the query is quite expensive, is very likely to be repeated on other nodes, and is unlikely to be invalidated out of the cache.
The JBoss Cache-based implementation of query caching adds a couple of interesting semantics, both designed to ensure that query cache operations don't block transactions from proceeding:
The insertion of a query result into the cache is very much like the insertion of a new entity. The difference is it is possible for two transactions, possibly on different nodes, to try to insert the same query at the same time. (If this happened with entities, the database would throw an exception with a primary key violation before any caching work could start.) This could lead to long delays as the transactions compete for cache locks. To prevent such delays, the cache integration layer will set a very short (a few ms) lock timeout before attempting to cache a query result. If there is any sort of locking conflict, it will be detected quickly, and the attempt to cache the result will be quietly abandoned.
A read of a query result does not result in any long-lasting read lock in the cache. Thus, the fact that an uncommitted transaction had read a query result does not prevent concurrent transactions from subsequently invalidating that result and caching a new result set. However, an insertion of a query result into the cache will result in an exclusive write lock that lasts until the transaction that did the insert commits; this lock will prevent other transactions from reading the result. Since the point of query caching is to improve performance, blocking on a cache read for an extended period seems suboptimal. So, the cache integration code will set a very low lock acquisition timeout before attempting the read; if there is a lock conflict, the read will silently fail, resulting in a cache miss and a re-execution of the query against the database.
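The low-timeout read can be sketched with a standard lock; on any conflict the reader silently treats the attempt as a cache miss. Class and method names are illustrative:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the "very low lock acquisition timeout": if the lock guarding
// a cached query result is not obtained almost immediately, give up
// silently and let the caller fall back to the database.
public class QueryResultReader {
    private final ReentrantLock resultLock = new ReentrantLock();
    private volatile Object cachedResult;

    public void store(Object result) {
        cachedResult = result;
    }

    /** Returns the cached result, or null (a cache miss) on any lock conflict. */
    public Object readOrMiss() {
        boolean acquired = false;
        try {
            acquired = resultLock.tryLock(5, TimeUnit.MILLISECONDS); // a few ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (!acquired) {
            return null; // silent miss: caller re-executes the query against the DB
        }
        try {
            return cachedResult;
        } finally {
            resultLock.unlock();
        }
    }

    /** Illustration helper: hold the result lock on a background thread. */
    public Thread holdLockInBackground(long millis) {
        Thread t = new Thread(() -> {
            resultLock.lock();
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                resultLock.unlock();
            }
        });
        t.start();
        while (!resultLock.isLocked()) {
            Thread.onSpinWait(); // wait until the background thread owns the lock
        }
        return t;
    }
}
```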
Timestamp caching is an internal detail of query caching. As part of each query result, Hibernate stores the timestamp of when the query was executed. There is also a special area in the cache (the timestamps cache) where, for each entity class, the timestamp of the last update to any instance of that class is stored. When a query result is read from the cache, its timestamp is compared to the timestamps of all entities involved in the query. If any entity has a later timestamp, the cached result is discarded and a new query against the database is executed.
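The staleness check described above reduces to a simple comparison; names are illustrative:

```java
import java.util.Map;

// Sketch of the timestamp check: a cached query result is usable only if no
// entity class involved in the query has been updated since the result was
// produced.
public class TimestampCheck {
    /**
     * @param resultTimestamp    when the cached query result was created
     * @param lastUpdateByClass  timestamps cache: entity class name -> last update time
     * @param classesInQuery     entity classes the query touches
     */
    public static boolean isResultStillValid(long resultTimestamp,
                                             Map<String, Long> lastUpdateByClass,
                                             Iterable<String> classesInQuery) {
        for (String entityClass : classesInQuery) {
            Long lastUpdate = lastUpdateByClass.get(entityClass);
            if (lastUpdate != null && lastUpdate > resultTimestamp) {
                return false; // stale: discard and re-run the query
            }
        }
        return true;
    }
}
```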
The semantics of the timestamp cache are quite different from those of the entity, collection and query caches.
For all nodes in the cluster, the contents of the timestamp cache should be identical, with all timestamps represented. For the other cache types, it is acceptable for some nodes in the cluster to not store some data, as long as everyone who does store an item stores the same thing. Not so with timestamps -- everyone must store all timestamps. Using a JBoss Cache configured for invalidation is not allowed for the timestamps cache. Further, configuring JBoss Cache eviction to remove old or infrequently used data from the timestamps cache should not be done. Also, when a new node joins a running cluster, it must acquire the current state of all timestamps from another member, performing what is known as an initial state transfer. For other cache types, an initial state transfer is not required.
A timestamp represents an entire entity class, not a single instance. Thus it is quite likely that two concurrent transactions will both attempt to update the same timestamp. These updates need to be serialized, but no long-lasting exclusive lock on the timestamp is held.
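Serializing concurrent timestamp updates without a long-lasting lock can be sketched with an atomic map update; names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the timestamps region: concurrent writers race to record the
// latest update time for an entity class. merge() applies each update
// atomically, so updates serialize without any long-held exclusive lock.
public class TimestampsRegion {
    private final ConcurrentMap<String, Long> lastUpdate = new ConcurrentHashMap<>();

    public void recordUpdate(String entityClass, long timestamp) {
        // The larger (later) timestamp wins, whatever order updates arrive in.
        lastUpdate.merge(entityClass, timestamp, Math::max);
    }

    public Long get(String entityClass) {
        return lastUpdate.get(entityClass);
    }
}
```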
As soon as a timestamp is updated, the new value needs to be propagated around the cluster. Waiting until the transaction that changed the timestamp commits is inadequate. So, changes to timestamps can be quite "chatty" in terms of how many messages are sent around the cluster. Sending the timestamp update messages synchronously would have a serious impact on performance, and would quite likely result in cluster-wide lock conflicts that would prevent transactions from progressing for tens of seconds at a time. To mitigate these issues, timestamp updates are sent asynchronously.
See the discussion of the relevant configuration property in Section 3.1, “Configuring the Hibernate Session Factory”.