As Hibernate core applies changes to the database, Hibernate Search detects these changes and updates the index automatically (unless the event listeners are disabled). Sometimes, however, changes are made to the database without going through Hibernate, for example when a backup is restored or your data is otherwise modified directly. For these cases Hibernate Search exposes manual index APIs to explicitly update or remove a single entity from the index, rebuild the index for the whole database, or remove all references to a specific type.
All these methods affect the Lucene index only; no changes are applied to the database.
Using FullTextSession.index(T entity) you can directly add or update a specific object instance in the index. If this entity was already indexed, the index will be updated. Changes to the index are only applied at transaction commit.
Example 6.1. Indexing an entity via FullTextSession.index(T entity)
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
Object customer = fullTextSession.load( Customer.class, 8 );
fullTextSession.index(customer);
tx.commit(); //index only updated at commit time
If you want to add all instances of a type, or of all indexed types, the recommended approach is to use a MassIndexer: see Section 6.3.2, “Using a MassIndexer” for more details.
It is equally possible to remove an entity or all entities of a given type from a Lucene index without the need to physically remove them from the database. This operation is named purging and is also done through the FullTextSession.
Example 6.2. Purging a specific instance of an entity from the index
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
for (Customer customer : customers) {
fullTextSession.purge( Customer.class, customer.getId() );
}
tx.commit(); //index is updated at commit time
Purging will remove the entity with the given id from the Lucene index but will not touch the database.
If you need to remove all entities of a given type, you can use the purgeAll method. This operation removes all entities of the type passed as a parameter as well as all its subtypes.
Example 6.3. Purging all instances of an entity from the index
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
fullTextSession.purgeAll( Customer.class );
//optionally optimize the index
//fullTextSession.getSearchFactory().optimize( Customer.class );
tx.commit(); //index changes are applied at commit time
It is recommended to optimize the index after such an operation.
Methods index, purge and purgeAll are available on FullTextEntityManager as well.
All manual indexing methods (index, purge and purgeAll) only affect the index, not the database. Nevertheless, they are transactional, and as such they will not be applied until the transaction is successfully committed, unless you make use of flushToIndexes.
If you change the entity mapping to the index, chances are that the whole index needs to be updated; for example, if you decide to index an existing field using a different analyzer you will need to rebuild the index for the affected types. Also, if the database is replaced (for example, restored from a backup or imported from a legacy system), you will want to be able to rebuild the index from the existing data. Hibernate Search provides two main strategies to choose from:
Using FullTextSession.flushToIndexes() periodically, while using FullTextSession.index() on all entities.
Use a MassIndexer.
This strategy consists of removing the existing index and then adding all entities back to the index using FullTextSession.purgeAll() and FullTextSession.index(); however, there are some memory and efficiency constraints. For maximum efficiency Hibernate Search batches index operations and executes them at commit time. If you expect to index a lot of data you need to be careful about memory consumption, since all documents are kept in a queue until the transaction commits. You can potentially face an OutOfMemoryError if you don't empty the queue periodically; to do this, use fullTextSession.flushToIndexes(). Every time fullTextSession.flushToIndexes() is called (or the transaction is committed), the batch queue is processed and all index changes are applied. Be aware that, once flushed, the changes cannot be rolled back.
Example 6.4. Index rebuilding using index() and flushToIndexes()
fullTextSession.setFlushMode(FlushMode.MANUAL);
fullTextSession.setCacheMode(CacheMode.IGNORE);
Transaction transaction = fullTextSession.beginTransaction();
//Scrollable results will avoid loading too many objects in memory
ScrollableResults results = fullTextSession.createCriteria( Email.class )
    .setFetchSize(BATCH_SIZE)
    .scroll( ScrollMode.FORWARD_ONLY );
int index = 0;
while( results.next() ) {
    index++;
    fullTextSession.index( results.get(0) ); //index each element
    if (index % BATCH_SIZE == 0) {
        fullTextSession.flushToIndexes(); //apply changes to indexes
        fullTextSession.clear(); //free memory since the queue is processed
    }
}
transaction.commit();
hibernate.search.worker.batch_size has been deprecated in favor of this explicit API, which provides better control.
Try to use a batch size that guarantees that your application will not run out of memory: with a bigger batch size, objects are fetched faster from the database, but more memory is needed.
Hibernate Search's MassIndexer uses several parallel threads to rebuild the index; you can optionally select which entities need to be reloaded, or have it reindex all entities. This approach is optimized for best performance but requires putting the application in maintenance mode: querying the index is not recommended while a MassIndexer is busy.
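The example numbering below jumps from 6.4 to 6.6, so the basic usage example appears to have been lost; a minimal sketch, assuming the standard MassIndexer API (createIndexer() on FullTextSession returns a MassIndexer, and startAndWait() blocks until indexing completes):

```java
fullTextSession.createIndexer().startAndWait();
```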
In its simplest form, fullTextSession.createIndexer().startAndWait() rebuilds the index, deleting it and then reloading all entities from the database. Although it is simple to use, some tweaking is recommended to speed up the process: several parameters are configurable.
While a MassIndexer is running, the content of the index is undefined; make sure that nobody tries to query the index during rebuilding. A query issued against the index will not corrupt it, but most results will likely be missing.
Example 6.6. Using a tuned MassIndexer
fullTextSession
    .createIndexer( User.class )
    .batchSizeToLoadObjects( 25 )
    .cacheMode( CacheMode.NORMAL )
    .threadsToLoadObjects( 5 )
    .threadsForIndexWriter( 3 )
    .threadsForSubsequentFetching( 20 )
    .progressMonitor( monitor ) //a MassIndexerProgressMonitor implementation
    .startAndWait();
This will rebuild the index of all User instances (and subtypes), creating 5 parallel threads to load the User instances in batches of 25 objects per query; these loaded User instances are then pipelined to 20 parallel threads that load the attached lazy collections of User containing information needed for the index. Finally, 3 parallel threads are used to analyze the text and write to the index.
It is recommended to leave cacheMode set to CacheMode.IGNORE (the default), as in most reindexing situations the cache is useless additional overhead; it might be useful to enable another CacheMode depending on your data: it can increase performance if the main entity relates to enum-like data included in the index.
The "sweet spot" for the number of threads to achieve best performance is highly dependent on your overall architecture, database design and even data values. To find the best number of threads for your application it is recommended to use a profiler: all internal thread groups have meaningful names so they can be easily identified with most tools.
The MassIndexer was designed for speed and is unaware of transactions, so there is no need to begin one or commit it. Also, because it is not transactional, it is not recommended to let users use the system during its processing, as it is unlikely people will be able to find results, and the system load might be too high anyway.
Other parameters which affect indexing time and memory consumption are:
hibernate.search.[default|<indexname>].exclusive_index_use
hibernate.search.[default|<indexname>].indexwriter.batch.max_buffered_docs
hibernate.search.[default|<indexname>].indexwriter.batch.max_merge_docs
hibernate.search.[default|<indexname>].indexwriter.batch.merge_factor
hibernate.search.[default|<indexname>].indexwriter.batch.ram_buffer_size
hibernate.search.[default|<indexname>].indexwriter.batch.term_index_interval
hibernate.search.batchbackend.concurrent_writers
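For illustration, such parameters could be set in hibernate.properties; the values below are hypothetical examples only and should be tuned for your system:

```properties
# hypothetical values for illustration only - tune for your hardware and data
hibernate.search.default.indexwriter.batch.max_buffered_docs = 200
hibernate.search.default.indexwriter.batch.ram_buffer_size = 64
hibernate.search.batchbackend.concurrent_writers = 2
```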
Previous versions also had a max_field_length parameter, but this was removed from Lucene; a similar effect can be obtained by using a LimitTokenCountAnalyzer.
All .indexwriter parameters are Lucene specific and Hibernate Search simply passes these parameters through; see Section 3.10, “Tuning Lucene indexing performance” for more details.
hibernate.search.batchbackend.concurrent_writers defaults to 2 and represents the number of threads used at the analysis and index-writing stage of the MassIndexer pipeline. The MassIndexer.threadsForIndexWriter(int) method overrides this value.