
Hibernate Search

Apache Lucene™ Integration

Reference Guide


Preface
1. Getting started
1.1. System Requirements
1.2. Using Maven
1.3. Configuration
1.4. Indexing
1.5. Searching
1.6. Analyzer
1.7. What's next
2. Architecture
2.1. Overview
2.2. Back end
2.2.1. Lucene
2.2.2. JMS
2.2.3. JGroups
2.3. Reader strategy
2.3.1. shared
2.3.2. not-shared
2.3.3. Custom
3. Configuration
3.1. Enabling Hibernate Search and automatic indexing
3.1.1. Enabling Hibernate Search
3.1.2. Automatic indexing
3.2. Configuring the IndexManager
3.2.1. directory-based
3.2.2. near-real-time
3.2.3. Custom
3.3. Directory configuration
3.3.1. Infinispan Directory configuration
3.4. Worker configuration
3.4.1. JMS Master/Slave back end
3.4.2. JGroups Master/Slave back end
3.5. Reader strategy configuration
3.6. Exception handling
3.7. Lucene configuration
3.7.1. Tuning indexing performance
3.7.2. LockFactory configuration
3.7.3. Index format compatibility
3.8. Metadata API
3.9. Hibernate Search as a WildFly module
4. Mapping entities to the index structure
4.1. Mapping an entity
4.1.1. Basic mapping
4.1.2. Mapping properties multiple times
4.1.3. Embedded and associated objects
4.2. Boosting
4.2.1. Static index time boosting
4.2.2. Dynamic index time boosting
4.3. Analysis
4.3.1. Default analyzer and analyzer by class
4.3.2. Named analyzers
4.3.3. Dynamic analyzer selection
4.3.4. Retrieving an analyzer
4.4. Bridges
4.4.1. Built-in bridges
4.4.2. Tika bridge
4.4.3. Custom bridges
4.5. Conditional indexing: to index or not based on entity state
4.6. Providing your own id
4.6.1. The ProvidedId annotation
4.7. Programmatic API
4.7.1. Mapping an entity as indexable
4.7.2. Adding DocumentId to indexed entity
4.7.3. Defining analyzers
4.7.4. Defining full text filter definitions
4.7.5. Defining fields for indexing
4.7.6. Programmatically defining embedded entities
4.7.7. Contained In definition
4.7.8. Date/Calendar Bridge
4.7.9. Defining bridges
4.7.10. Mapping class bridge
4.7.11. Mapping dynamic boost
5. Querying
5.1. Building queries
5.1.1. Building a Lucene query using the Lucene API
5.1.2. Building a Lucene query with the Hibernate Search query DSL
5.1.3. Building a Hibernate Search query
5.2. Retrieving the results
5.2.1. Performance considerations
5.2.2. Result size
5.2.3. ResultTransformer
5.2.4. Understanding results
5.3. Filters
5.3.1. Using filters in a sharded environment
5.4. Faceting
5.4.1. Creating a faceting request
5.4.2. Setting the facet sort order
5.4.3. Applying a faceting request
5.4.4. Interpreting a Facet result
5.4.5. Restricting query results
5.5. Optimizing the query process
5.5.1. Caching index values: FieldCache
6. Manual index changes
6.1. Adding instances to the index
6.2. Deleting instances from the index
6.3. Rebuilding the whole index
6.3.1. Using flushToIndexes()
6.3.2. Using a MassIndexer
6.3.3. Useful parameters for batch indexing
7. Index Optimization
7.1. Automatic optimization
7.2. Manual optimization
7.3. Adjusting optimization
8. Monitoring
8.1. JMX
8.1.1. StatisticsInfoMBean
8.1.2. IndexControlMBean
8.1.3. IndexingProgressMonitorMBean
9. Spatial
9.1. Enable indexing of Spatial Coordinates
9.1.1. Indexing coordinates for Double Range Queries
9.1.2. Indexing coordinates in a Grid with Quad Trees
9.1.3. Implementing the Coordinates interface
9.2. Performing Spatial Queries
9.2.1. Returning distance to query point in the search results
9.3. Multiple Coordinate pairs
9.4. Insight: implementation details of Quad Tree indexing
9.4.1. At indexing level
9.4.2. At search level
10. Advanced features
10.1. Accessing the SearchFactory
10.2. Using an IndexReader
10.3. Accessing a Lucene Directory
10.4. Sharding indexes
10.4.1. Static sharding
10.4.2. Dynamic sharding
10.5. Sharing indexes
10.6. Using external services
10.6.1. Exposing a service
10.6.2. Using a service
10.7. Customizing Lucene's scoring formula
11. Further reading

Welcome to Hibernate Search. The following chapter will guide you through the initial steps required to integrate Hibernate Search into an existing Hibernate enabled application. In case you are new to Hibernate we recommend you start here.

The Hibernate Search artifacts can be found in Maven's central repository but are released first in the JBoss Maven repository. It is not a requirement, but we recommend adding this repository to your settings.xml file (see also Maven Getting Started for more details).

This is all you need to add to your pom.xml to get started:
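A minimal sketch of the relevant dependencies (the version numbers shown are illustrative; pick the release you intend to use):

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search</artifactId>
   <version>4.5.0.Final</version>
</dependency>
<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-entitymanager</artifactId>
   <version>4.3.4.Final</version>
</dependency>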



Only the hibernate-search dependency is mandatory. hibernate-entitymanager is only required if you want to use Hibernate Search in conjunction with JPA.

To use hibernate-search-infinispan, adding the JBoss Maven repository is mandatory, because it contains the needed Infinispan dependencies which are currently not mirrored by central.

Once you have downloaded and added all required dependencies to your application you have to add a couple of properties to your hibernate configuration file. If you are using Hibernate directly this can be done in hibernate.properties or hibernate.cfg.xml. If you are using Hibernate via JPA you can also add the properties to persistence.xml. The good news is that for standard use most properties offer a sensible default. An example persistence.xml configuration could look like this:
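A sketch of such a persistence.xml (the unit name and index path are illustrative):

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
   <persistence-unit name="example">
      <properties>
         <property name="hibernate.search.default.directory_provider"
                   value="filesystem"/>
         <property name="hibernate.search.default.indexBase"
                   value="/var/lucene/indexes"/>
      </properties>
   </persistence-unit>
</persistence>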


First you have to tell Hibernate Search which DirectoryProvider to use. This can be achieved by setting the hibernate.search.default.directory_provider property. Apache Lucene has the notion of a Directory to store the index files. Hibernate Search handles the initialization and configuration of a Lucene Directory instance via a DirectoryProvider. In this tutorial we will use a directory provider storing the index in the file system. This will give us the ability to physically inspect the Lucene indexes created by Hibernate Search (e.g. via Luke). Once you have a working configuration you can start experimenting with other directory providers (see Section 3.3, “Directory configuration”). Next to the directory provider you also have to specify the default base directory for all indexes via hibernate.search.default.indexBase.

Let's assume that your application contains the Hibernate managed classes example.Book and example.Author and you want to add free text search capabilities to your application in order to search the books contained in your database.


To achieve this you have to add a few annotations to the Book and Author class. The first annotation @Indexed marks Book as indexable. By design Hibernate Search needs to store an untokenized id in the index to ensure index uniqueness for a given entity. @DocumentId marks the property to use for this purpose and is in most cases the same as the database primary key. The @DocumentId annotation is optional in the case where an @Id annotation exists.

Next you have to mark the fields you want to make searchable. Let's start with title and subtitle and annotate both with @Field. The parameter index=Index.YES will ensure that the text will be indexed, while analyze=Analyze.YES ensures that the text will be analyzed using the default Lucene analyzer. Usually, analyzing means chunking a sentence into individual words and potentially excluding common words like 'a' or 'the'. We will talk more about analyzers a little later on. The third parameter we specify within @Field, store=Store.NO, ensures that the actual data will not be stored in the index. Whether this data is stored in the index or not has nothing to do with the ability to search for it. From Lucene's perspective it is not necessary to keep the data once the index is created. The benefit of storing it is the ability to retrieve it via projections (see Section 5.1.3.5, “Projection”).

Without projections, Hibernate Search will per default execute a Lucene query in order to find the database identifiers of the entities matching the query criteria and use these identifiers to retrieve managed objects from the database. The decision for or against projection has to be made on a case by case basis. The default behaviour is recommended since it returns managed objects whereas projections only return object arrays.

Note that index=Index.YES, analyze=Analyze.YES and store=Store.NO are the default values for these parameters and could be omitted.

After this short look under the hood let's go back to annotating the Book class. Another annotation we have not yet discussed is @DateBridge. This annotation is one of the built-in field bridges in Hibernate Search. The Lucene index is purely string based. For this reason Hibernate Search must convert the data types of the indexed fields to strings and vice versa. A range of predefined bridges are provided, including the DateBridge which will convert a java.util.Date into a String with the specified resolution. For more details see Section 4.4, “Bridges”.

This leaves us with @IndexedEmbedded. This annotation is used to index associated entities (@ManyToMany, @*ToOne, @Embedded and @ElementCollection) as part of the owning entity. This is needed since a Lucene index document is a flat data structure which does not know anything about object relations. To ensure that the authors' name will be searchable you have to make sure that the names are indexed as part of the book itself. On top of @IndexedEmbedded you will also have to mark all fields of the associated entity you want to have included in the index with @Field. For more details see Section 4.1.3, “Embedded and associated objects”.
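Putting these annotations together, the Book and Author classes might look like the following sketch (imports from javax.persistence and org.hibernate.search.annotations omitted, constructors and accessors abbreviated):

@Entity
@Indexed
public class Book {

   @Id
   @GeneratedValue
   private Integer id;

   @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)
   private String title;

   @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)
   private String subtitle;

   // index the date untokenized so range queries behave predictably
   @Field(index = Index.YES, analyze = Analyze.NO, store = Store.YES)
   @DateBridge(resolution = Resolution.DAY)
   private Date publicationDate;

   // pull the authors' indexed fields into the Book documents
   @IndexedEmbedded
   @ManyToMany
   private Set<Author> authors = new HashSet<Author>();

   // constructors, getters and setters ...
}

@Entity
public class Author {

   @Id
   @GeneratedValue
   private Integer id;

   @Field
   private String name;

   // constructors, getters and setters ...
}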

These settings should be sufficient for now. For more details on entity mapping refer to Section 4.1, “Mapping an entity”.


Now it is time to execute a first search. The general approach is to create a Lucene query, either via the Lucene API (Section 5.1.1, “Building a Lucene query using the Lucene API”) or via the Hibernate Search query DSL (Section 5.1.2, “Building a Lucene query with the Hibernate Search query DSL”), and then wrap this query into a org.hibernate.Query in order to get all the functionality one is used to from the Hibernate API. The following code will prepare a query against the indexed fields, execute it and return a list of Books.
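A sketch of such a query via the JPA API, assuming an open EntityManager (FullTextEntityManager and QueryBuilder come from org.hibernate.search.jpa and org.hibernate.search.query.dsl respectively):

FullTextEntityManager fullTextEntityManager =
      org.hibernate.search.jpa.Search.getFullTextEntityManager( entityManager );
entityManager.getTransaction().begin();

// create a Lucene query using the Hibernate Search query DSL
QueryBuilder queryBuilder = fullTextEntityManager.getSearchFactory()
      .buildQueryBuilder().forEntity( Book.class ).get();
org.apache.lucene.search.Query luceneQuery = queryBuilder
      .keyword()
      .onFields( "title", "subtitle", "authors.name" )
      .matching( "Java rocks!" )
      .createQuery();

// wrap the Lucene query in a javax.persistence.Query
javax.persistence.Query jpaQuery =
      fullTextEntityManager.createFullTextQuery( luceneQuery, Book.class );

// execute the search and retrieve managed Book entities
List result = jpaQuery.getResultList();

entityManager.getTransaction().commit();
entityManager.close();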



Let's make things a little more interesting now. Assume that one of your indexed book entities has the title "Refactoring: Improving the Design of Existing Code" and you want to get hits for all of the following queries: "refactor", "refactors", "refactored" and "refactoring". In Lucene this can be achieved by choosing an analyzer class which applies word stemming during the indexing as well as the search process. Hibernate Search offers several ways to configure the analyzer to be used (see Section 4.3.1, “Default analyzer and analyzer by class”):

  • Setting the hibernate.search.analyzer property in the configuration file. The specified class will then be the default analyzer.

  • Setting the @Analyzer annotation at the entity level.

  • Setting the @Analyzer annotation at the field level.

When using the @Analyzer annotation one can either specify the fully qualified classname of the analyzer to use or one can refer to an analyzer definition defined by the @AnalyzerDef annotation. In the latter case the Solr analyzer framework with its factories approach is utilized. To find out more about the factory classes available you can either browse the Solr JavaDoc or read the corresponding section on the Solr Wiki.

In the example below a StandardTokenizerFactory is used followed by two filter factories, LowerCaseFilterFactory and SnowballPorterFilterFactory. The standard tokenizer splits words at punctuation characters and hyphens while keeping email addresses and internet hostnames intact. It is a good general purpose tokenizer. The lowercase filter lowercases the letters in each token whereas the snowball filter finally applies language specific stemming.

Generally, when using the Solr framework you have to start with a tokenizer followed by an arbitrary number of filters.
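A sketch of such a definition applied to the Book entity (the analyzer name customanalyzer matches the discussion below; imports from org.hibernate.search.annotations and org.apache.solr.analysis omitted):

@Entity
@Indexed
@AnalyzerDef(name = "customanalyzer",
   tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
   filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = SnowballPorterFilterFactory.class,
         params = { @Parameter(name = "language", value = "English") })
   })
public class Book {

   @Id
   @GeneratedValue
   @DocumentId
   private Integer id;

   @Field
   @Analyzer(definition = "customanalyzer")
   private String title;

   @Field
   @Analyzer(definition = "customanalyzer")
   private String subtitle;

   // remaining mapping as before ...
}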


Using @AnalyzerDef only defines an Analyzer; you still have to apply it to entities and/or properties using @Analyzer. As in the above example, customanalyzer is defined but not applied on the entity: it is applied to the title and subtitle properties only. An analyzer definition is global, so you can define it on any entity and reuse the definition on other entities.

The above paragraphs gave you a first overview of Hibernate Search. The next step after this tutorial is to get more familiar with the overall architecture of Hibernate Search (Chapter 2, Architecture) and explore the basic features in more detail. Two topics which were only briefly touched upon in this tutorial were analyzer configuration (Section 4.3.1, “Default analyzer and analyzer by class”) and field bridges (Section 4.4, “Bridges”). Both are important features required for more fine-grained indexing. More advanced topics cover clustering (Section 3.4.1, “JMS Master/Slave back end”, Section 3.3.1, “Infinispan Directory configuration”) and large index handling (Section 10.4, “Sharding indexes”).

Hibernate Search consists of an indexing component as well as an index search component. Both are backed by Apache Lucene.

Each time an entity is inserted, updated or removed in/from the database, Hibernate Search keeps track of this event (through the Hibernate event system) and schedules an index update. All these updates are handled without you having to interact with the Apache Lucene APIs directly (see Section 3.1, “Enabling Hibernate Search and automatic indexing”). Instead, the interaction with the underlying Lucene indexes is handled via so called IndexManagers.

Each Lucene index is managed by one index manager which is uniquely identified by name. In most cases there is also a one to one relationship between an indexed entity and a single IndexManager. The exceptions are the use cases of index sharding and index sharing. The former can be applied when the index for a single entity becomes too big and indexing operations are slowing down the application. In this case a single entity is indexed into multiple indexes each with its own index manager (see Section 10.4, “Sharding indexes”). The latter, index sharing, is the ability to index multiple entities into the same Lucene index (see Section 10.5, “Sharing indexes”).

The index manager abstracts from the specific index configuration. In the case of the default index manager this includes details about the selected backend, the configured reader strategy and the chosen DirectoryProvider. These components will be discussed in greater detail later on. It is recommended that you start with the default index manager which uses different Lucene Directory types to manage the indexes (see Section 3.3, “Directory configuration”). You can, however, also provide your own IndexManager implementation (see Section 3.2, “Configuring the IndexManager”).

Once the index is created, you can search for entities and return lists of managed entities saving you the tedious object to Lucene Document mapping. The same persistence context is shared between Hibernate and Hibernate Search. As a matter of fact, the FullTextSession is built on top of the Hibernate Session so that the application code can use the unified org.hibernate.Query or javax.persistence.Query APIs exactly the same way a HQL, JPA-QL or native query would do.

To be more efficient, Hibernate Search batches the write interactions with the Lucene index. This batching is the responsibility of the Worker. There are currently two types of batching. Outside a transaction, the index update operation is executed right after the actual database operation. This is effectively a no-batching setup. In the case of an ongoing transaction, the index update operation is scheduled for the transaction commit phase and discarded in case of transaction rollback. The batching scope is the transaction. There are two immediate benefits:

  • Performance: Lucene indexing works better when operations are executed in batch.

  • ACIDity: The work executed has the same scoping as the one executed by the database transaction and is executed if and only if the transaction is committed. This is not ACID in the strict sense, but ACID behavior is rarely useful for full text search indexes since they can be rebuilt from the source at any time.

You can think of those two batch modes (no scope vs transactional) as the equivalent of the (infamous) autocommit vs transactional behavior. From a performance perspective, the in-transaction mode is recommended. The scoping choice is made transparently: Hibernate Search detects the presence of a transaction and adjusts the scoping (see Section 3.4, “Worker configuration”).

Tip

It is recommended - for both your database and Hibernate Search - to execute your operations in a transaction, be it JDBC or JTA.

Note

Hibernate Search works perfectly fine in the Hibernate / EntityManager long conversation pattern aka. atomic conversation.

Hibernate Search offers the ability to let the batched work be processed by different back ends. Several back ends are provided out of the box and you have the option to plug in your own. It is important to understand that in this context back end encompasses more than just the configuration option hibernate.search.default.worker.backend. This property merely specifies an implementation of the BackendQueueProcessor interface, which is a part of a back end configuration. In most cases, however, additional configuration settings are needed to successfully configure a specific backend setup, like for example the JMS back end.

The role of the index manager component is described in Chapter 2, Architecture. Hibernate Search provides two possible implementations for this interface to choose from.

  • directory-based: the default implementation which uses the Lucene Directory abstraction to manage index files.

  • near-real-time: avoids flushing writes to disk at each commit. This index manager is also Directory based, but additionally makes use of Lucene's NRT functionality.

To select an alternative you specify the property:

hibernate.search.[default|<indexname>].indexmanager = near-real-time

As we have seen in Section 3.2, “Configuring the IndexManager”, the default index manager uses Lucene's notion of a Directory to store the index files. The Directory implementation can be customized and Lucene comes bundled with a file system and an in-memory implementation. DirectoryProvider is the Hibernate Search abstraction around a Lucene Directory and handles the configuration and the initialization of the underlying Lucene resources. Table 3.1, “List of built-in DirectoryProvider” shows the list of the directory providers available in Hibernate Search together with their corresponding options.

To configure your DirectoryProvider you have to understand that each indexed entity is associated to a Lucene index (except for the case where multiple entities share the same index - Section 10.5, “Sharing indexes”). The name of the index is given by the index property of the @Indexed annotation. If the index property is not specified the fully qualified name of the indexed class will be used as the index name (recommended).

Knowing the index name, you can configure the directory provider and any additional options by using the prefix hibernate.search.<indexname>. The name default (hibernate.search.default) is reserved and can be used to define properties which apply to all indexes. Example 3.2, “Configuring directory providers” shows how hibernate.search.default.directory_provider is used to set the default directory provider to be the filesystem one. hibernate.search.default.indexBase then sets the default base directory for the indexes. As a result the index for the entity Status is created in /usr/lucene/indexes/org.hibernate.example.Status.

The index for the Rule entity, however, is using an in-memory directory, because the default directory provider for this entity is overridden by the property hibernate.search.Rules.directory_provider.

Finally the Action entity uses a custom directory provider CustomDirectoryProvider specified via hibernate.search.Actions.directory_provider.
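A sketch of the configuration just described (the paths and the custom provider class name are illustrative):

hibernate.search.default.directory_provider = filesystem
hibernate.search.default.indexBase = /usr/lucene/indexes
hibernate.search.Rules.directory_provider = ram
hibernate.search.Actions.directory_provider = com.acme.hibernate.CustomDirectoryProvider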



Tip

Using the described configuration scheme you can easily define common rules like the directory provider and base directory, and override those defaults later on a per-index basis.

Table 3.1. List of built-in DirectoryProvider

Name and description | Properties

ram: Memory based directory. The directory will be uniquely identified (in the same deployment unit) by the @Indexed.index element.

Properties: none
filesystem: File system based directory. The directory used will be <indexBase>/< indexName >

indexBase: base directory

indexName: override @Indexed.index (useful for sharded indexes)

locking_strategy: optional, see Section 3.7.2, “LockFactory configuration”

filesystem_access_type: allows to determine the exact type of FSDirectory implementation used by this DirectoryProvider. Allowed values are auto (the default value, selects NIOFSDirectory on non-Windows systems, SimpleFSDirectory on Windows), simple (SimpleFSDirectory), nio (NIOFSDirectory), mmap (MMapDirectory). Make sure to refer to the Javadocs of these Directory implementations before changing this setting. Even though NIOFSDirectory or MMapDirectory can bring substantial performance boosts they also have their issues.

filesystem-master: File system based directory. Like filesystem. It also copies the index to a source directory (aka copy directory) on a regular basis.

The recommended value for the refresh period is (at least) 50% higher than the time to copy the information (default 3600 seconds - 60 minutes).

Note that the copy is based on an incremental copy mechanism reducing the average copy time.

DirectoryProvider typically used on the master node in a JMS back end cluster.

The buffer_size_on_copy optimum depends on your operating system and available RAM; most people reported good results using values between 16 and 64MB.

indexBase: base directory

indexName: override @Indexed.index (useful for sharded indexes)

sourceBase: source (copy) base directory.

source: source directory suffix (defaults to @Indexed.index). The actual source directory name is <sourceBase>/<source>

refresh: refresh period in seconds (the copy will take place every refresh seconds). If a copy is still in progress when the following refresh period elapses, the second copy operation will be skipped.

buffer_size_on_copy: The amount of MegaBytes to move in a single low level copy instruction; defaults to 16MB.

locking_strategy: optional, see Section 3.7.2, “LockFactory configuration”

filesystem_access_type: allows to determine the exact type of FSDirectory implementation used by this DirectoryProvider. Allowed values are auto (the default value, selects NIOFSDirectory on non-Windows systems, SimpleFSDirectory on Windows), simple (SimpleFSDirectory), nio (NIOFSDirectory), mmap (MMapDirectory). Make sure to refer to the Javadocs of these Directory implementations before changing this setting. Even though NIOFSDirectory or MMapDirectory can bring substantial performance boosts they also have their issues.

filesystem-slave: File system based directory. Like filesystem, but retrieves a master version (source) on a regular basis. To avoid locking and inconsistent search results, 2 local copies are kept.

The recommended value for the refresh period is (at least) 50% higher than the time to copy the information (default 3600 seconds - 60 minutes).

Note that the copy is based on an incremental copy mechanism reducing the average copy time. If a copy is still in progress when the refresh period elapses, the second copy operation will be skipped.

DirectoryProvider typically used on slave nodes using a JMS back end.

The buffer_size_on_copy optimum depends on your operating system and available RAM; most people reported good results using values between 16 and 64MB.

indexBase: Base directory

indexName: override @Indexed.index (useful for sharded indexes)

sourceBase: Source (copy) base directory.

source: Source directory suffix (defaults to @Indexed.index). The actual source directory name is <sourceBase>/<source>

refresh: refresh period in seconds (the copy will take place every refresh seconds).

buffer_size_on_copy: The amount of MegaBytes to move in a single low level copy instruction; defaults to 16MB.

locking_strategy: optional, see Section 3.7.2, “LockFactory configuration”

retry_marker_lookup: optional, defaults to 0. Defines how many times we look for the marker files in the source directory before failing, waiting 5 seconds between each try.

retry_initialize_period: optional, set an integer value in seconds to enable the retry initialize feature: if the slave can't find the master index it will keep retrying in the background until it is found, without preventing the application from starting; full text queries performed before the index is initialized are not blocked but will return empty results. When the option is not enabled or is explicitly set to zero, failing to find the master index will fail with an exception instead of scheduling a retry timer. To prevent the application from starting with an invalid index but still control an initialization timeout, see retry_marker_lookup instead.

filesystem_access_type: allows to determine the exact type of FSDirectory implementation used by this DirectoryProvider. Allowed values are auto (the default value, selects NIOFSDirectory on non-Windows systems, SimpleFSDirectory on Windows), simple (SimpleFSDirectory), nio (NIOFSDirectory), mmap (MMapDirectory). Make sure to refer to the Javadocs of these Directory implementations before changing this setting. Even though NIOFSDirectory or MMapDirectory can bring substantial performance boosts they also have their issues.

infinispan: Infinispan based directory. Use it to store the index in a distributed grid, making index changes visible to all elements of the cluster very quickly. Also see Section 3.3.1, “Infinispan Directory configuration” for additional requirements and configuration settings. Infinispan needs a global configuration and additional dependencies; the settings defined here apply to each different index.

locking_cachename: name of the Infinispan cache to use to store locks.

data_cachename : name of the Infinispan cache to use to store the largest data chunks; this area will contain the largest objects, use replication if you have enough memory or switch to distribution.

metadata_cachename: name of the Infinispan cache to use to store the metadata relating to the index; this data is rather small and read very often, it's recommended to have this cache setup using replication.

chunk_size: large files of the index are split in smaller chunks; you might want to set this to the highest value efficiently handled by your network. Network tuning might be useful.


Tip

If the built-in directory providers do not fit your needs, you can write your own directory provider by implementing the org.hibernate.search.store.DirectoryProvider interface. In this case, pass the fully qualified class name of your provider into the directory_provider property. You can pass any additional properties using the prefix hibernate.search.<indexname>.

Infinispan is a distributed, scalable, highly available data grid platform which supports autodiscovery of peer nodes. Using Infinispan and Hibernate Search in combination, it is possible to store the Lucene index in a distributed environment where index updates are quickly available on all nodes.

This section describes in greater detail how to configure Hibernate Search to use an Infinispan Lucene Directory.

When using an Infinispan Directory the index is stored in memory and shared across multiple nodes. It is considered a single directory distributed across all participating nodes. If a node updates the index, all other nodes are updated as well. Updates on one node can be immediately searched for in the whole cluster.

The default configuration replicates all data defining the index across all nodes, thus consuming a significant amount of memory. For large indexes it's suggested to enable data distribution, so that each piece of information is replicated to a subset of all cluster members.

It is also possible to offload part or most of the information to a CacheStore, such as plain filesystem, Amazon S3, Cassandra, Berkeley DB or standard relational databases. You can configure it to have a CacheStore on each node or have a single centralized one shared by each node.

See the Infinispan documentation for all Infinispan configuration options.

The simplest configuration only requires enabling the backend:

hibernate.search.[default|<indexname>].directory_provider = infinispan

That's all that is needed to get a cluster-replicated index, but the default configuration does not enable any form of permanent persistence for the index; to enable such a feature an Infinispan configuration file should be provided.

To use Infinispan, Hibernate Search requires a CacheManager; it can look up and reuse an existing CacheManager, via JNDI, or start and manage a new one. In the latter case Hibernate Search will start and stop it (closing occurs when the Hibernate SessionFactory is closed).

To use an existing CacheManager via JNDI (optional parameter):

hibernate.search.infinispan.cachemanager_jndiname = [jndiname]

To start a new CacheManager from a configuration file (optional parameter):

hibernate.search.infinispan.configuration_resourcename = [infinispan configuration filename]

If both parameters are defined, JNDI will have priority. If none of these is defined, Hibernate Search will use the default Infinispan configuration included in hibernate-search-infinispan.jar. This configuration should work fine in most cases but does not store the index in a persistent cache store.

As mentioned in Table 3.1, “List of built-in DirectoryProvider”, each index makes use of three caches, so three different caches should be configured as shown in the default-hibernatesearch-infinispan.xml provided in the hibernate-search-infinispan.jar. Several indexes can share the same caches.

It is possible to refine how Hibernate Search interacts with Lucene through the worker configuration. There exist several architectural components and possible extension points. Let's have a closer look.

First there is a Worker. An implementation of the Worker interface is responsible for receiving all entity changes, queuing them by context and applying them once a context ends. The most intuitive context, especially in connection with ORM, is the transaction. For this reason Hibernate Search will per default use the TransactionalWorker to scope all changes per transaction. One can, however, imagine a scenario where the context depends for example on the number of entity changes or some other application (lifecycle) events. For this reason the Worker implementation is configurable as shown in Table 3.2, “Scope configuration”.


Once a context ends it is time to prepare and apply the index changes. This can be done synchronously or asynchronously from within a new thread. Synchronous updates have the advantage that the index is at all times in sync with the database. Asynchronous updates, on the other hand, can help to minimize the user response time. The drawback is potential discrepancies between database and index states. Let's look at the configuration options shown in Table 3.3, “Execution configuration”.

Note

The following options can be set differently for each index; they take the indexName prefix, or default to set the default value for all indexes.


So far all work is done within the same Virtual Machine (VM), no matter which execution mode. The total amount of work has not changed for the single VM. Luckily there is a better approach, namely delegation. It is possible to send the indexing work to a different server by configuring hibernate.search.default.worker.backend - see Table 3.4, “Backend configuration”. Again this option can be configured differently for each index.



Warning

As you probably noticed, some of the shown properties are correlated which means that not all combinations of property values make sense. In fact you can end up with a non-functional configuration. This is especially true for the case that you provide your own implementations of some of the shown interfaces. Make sure to study the existing code before you write your own Worker or BackendQueueProcessor implementation.

This section describes in greater detail how to configure the Master/Slave Hibernate Search architecture.

Every index update operation is taken from a JMS queue and executed. The master index is copied on a regular basis.


Tip

It is recommended that the refresh period be higher than the expected copy time; if a copy operation is still being performed when the next refresh triggers, the second refresh is skipped: it's safe to set this value low even when the copy time is not known.

In addition to the Hibernate Search framework configuration, a Message Driven Bean has to be written and set up to process the index works queue through JMS.
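A sketch of such an MDB (the queue name and activation configuration are illustrative; imports from javax.ejb, javax.jms, javax.persistence, org.hibernate and org.hibernate.search.backend.impl.jms omitted):

@MessageDriven(activationConfig = {
      @ActivationConfigProperty(propertyName = "destinationType",
            propertyValue = "javax.jms.Queue"),
      @ActivationConfigProperty(propertyName = "destination",
            propertyValue = "queue/hibernatesearch")
})
public class MDBSearchController extends AbstractJMSHibernateSearchController
      implements MessageListener {

   @PersistenceContext
   EntityManager em;

   // reuse the entity manager's underlying Hibernate Session
   @Override
   protected Session getSession() {
      return (Session) em.getDelegate();
   }

   // nothing to clean up here: the container manages the persistence context
   @Override
   protected void cleanSessionIfNeeded(Session session) {
   }
}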


This example inherits from the abstract JMS controller class available in the Hibernate Search source code and implements a Java EE MDB. This implementation is given as an example and can be adjusted to make use of non Java EE Message Driven Beans. For more information about getSession() and cleanSessionIfNeeded(), please check AbstractJMSHibernateSearchController's javadoc.

This section describes how to configure the JGroups Master/Slave back end. The master and slave roles are similar to what is illustrated in Section 3.4.1, “JMS Master/Slave back end”, only a different backend (hibernate.search.default.worker.backend) needs to be set.

A specific backend can be configured to act either as a slave using jgroupsSlave, as a master using jgroupsMaster, or can automatically switch between the roles as needed by using jgroups.

Note

Either you specify a single jgroupsMaster and a set of jgroupsSlave instances, or you specify all instances as jgroups. Never mix the two approaches!

All backends configured to use JGroups share the same channel. The JGroups JChannel is the main communication link across all nodes participating in the same cluster group; since it is convenient to have just one channel shared across all backends, the Channel configuration properties are not defined on a per-worker section but are defined globally. See Section 3.4.2.4, “JGroups channel configuration”.

Table 3.6, “JGroups backend configuration properties” contains all configuration options which can be set independently on each index backend. These apply to all three variants of the backend: jgroupsSlave, jgroupsMaster, jgroups. It is very unlikely that you need to change any of these from their defaults.


In this mode the different nodes will autonomously elect a master node. When a master fails, a new node is elected automatically.

When setting this backend it is expected that all Hibernate Search instances in the same cluster use the same backend for each specific index: this configuration is an alternative to the static jgroupsMaster and jgroupsSlave approach so make sure to not mix them.

To synchronize the indexes in this configuration avoid filesystem-master and filesystem-slave directory providers as their behaviour can not be switched dynamically; use the Infinispan Directory instead, which has no need for different configurations on each instance and allows dynamic switching of writers; see also Section 3.3.1, “Infinispan Directory configuration”.


Tip

Should you use jgroups or the couple jgroupsMaster, jgroupsSlave?

The dynamic jgroups backend is better suited for environments in which your master is more likely to need to failover to a different machine, as in clouds. The static configuration has the benefit of keeping the master at a well known location: your architecture might take advantage of it by sending most write requests to the known master. Also optimisation and MassIndexer operations need to be triggered on the master node.

Configuring the JGroups channel essentially entails specifying the transport in terms of a network protocol stack. To configure the JGroups transport, point the configuration property hibernate.search.services.jgroups.configurationFile to a JGroups configuration file; this can be either a file path or a Java resource name.

The default cluster name is Hibernate Search Cluster which can be configured as seen in Example 3.10, “JGroups cluster name configuration”.
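For example (a sketch, assuming the clusterName key under the same hibernate.search.services.jgroups prefix as the channel configuration):

hibernate.search.services.jgroups.clusterName = Hibernate-Search-Cluster-App1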


The cluster name is what identifies a group: by changing the name you can run different clusters in the same network in isolation.

The JChannel can also be injected directly through the configuration properties, as in this sketch bootstrapping Hibernate Search via JPA:

import org.hibernate.search.backend.impl.jgroups.JGroupsChannelProvider;

// an externally created and managed JGroups channel
org.jgroups.JChannel channel = ...
// the channel instance is passed as a property value, so the map must accept Objects
Map<String,Object> properties = new HashMap<String,Object>(1);
properties.put( JGroupsChannelProvider.CHANNEL_INJECT, channel );
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "userPU", properties );

The different reader strategies are described in Reader strategy. Out of the box strategies are:

  • shared: share index readers across several queries. This strategy is the most efficient.

  • not-shared: create an index reader for each individual query

The default reader strategy is shared. This can be adjusted:

hibernate.search.[default|<indexname>].reader.strategy = not-shared

Adding this property switches to the not-shared strategy.

Or if you have a custom reader strategy:

hibernate.search.[default|<indexname>].reader.strategy = my.corp.myapp.CustomReaderProvider

where my.corp.myapp.CustomReaderProvider is the custom strategy implementation.

Even though Hibernate Search will try to shield you as much as possible from Lucene specifics, there are several Lucene settings which can be directly configured, either for performance reasons or to satisfy a specific use case. The following sections discuss these configuration options.

Hibernate Search allows you to tune the Lucene indexing performance by specifying a set of parameters which are passed through to the underlying Lucene IndexWriter such as mergeFactor, maxMergeDocs and maxBufferedDocs. You can specify these parameters either as default values applying for all indexes, on a per index basis, or even per shard.

There are several low level IndexWriter settings which can be tuned for different use cases. These parameters are grouped by the indexwriter keyword:

hibernate.search.[default|<indexname>].indexwriter.<parameter_name>

If no value is set for an indexwriter parameter in a specific shard configuration, Hibernate Search will look at the index section, then at the default section.
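A configuration sketch consistent with the settings listed below (index and shard names follow the Animal example):

hibernate.search.default.indexwriter.max_merge_docs = 100
hibernate.search.default.indexwriter.ram_buffer_size = 64
hibernate.search.Animals.2.indexwriter.max_merge_docs = 10
hibernate.search.Animals.2.indexwriter.merge_factor = 20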


The configuration in Example 3.11, “Example performance option configuration” will result in these settings being applied on the second shard of the Animal index:

  • max_merge_docs = 10

  • merge_factor = 20

  • ram_buffer_size = 64MB

  • term_index_interval = Lucene default

All other values will use the defaults defined in Lucene.

The default for all values is to leave them at Lucene's own default. The values listed in Table 3.7, “List of indexing performance and behavior properties” depend for this reason on the version of Lucene you are using. The values shown are relative to version 2.4. For more information about Lucene indexing performance, please refer to the Lucene documentation.

Table 3.7. List of indexing performance and behavior properties

Each entry lists the property, its description and its default value.

hibernate.search.[default|<indexname>].exclusive_index_use

Set to true when no other process will need to write to the same index. This will enable Hibernate Search to work in exclusive mode on the index and improve performance when writing changes to the index.

Default: true (improved performance, releases locks only at shutdown)

hibernate.search.[default|<indexname>].max_queue_length

Each index has a separate "pipeline" which contains the updates to be applied to the index. When this queue is full adding more operations to the queue becomes a blocking operation. Configuring this setting doesn't make much sense unless the worker.execution is configured as async.

Default: 1000

hibernate.search.[default|<indexname>].indexwriter.max_buffered_delete_terms

Determines the minimal number of delete terms required before the buffered in-memory delete terms are applied and flushed. If there are documents buffered in memory at the time, they are merged and a new segment is created.

Default: Disabled (flushes by RAM usage)

hibernate.search.[default|<indexname>].index_flush_interval

The interval in milliseconds between flushes of write operations to the index storage. Ignored unless worker.execution is configured as async.

Default: 1000

hibernate.search.[default|<indexname>].indexwriter.max_buffered_docs

Controls the amount of documents buffered in memory during indexing. The bigger the value, the more RAM is consumed.

Default: Disabled (flushes by RAM usage)

hibernate.search.[default|<indexname>].indexwriter.max_merge_docs

Defines the largest number of documents allowed in a segment. Smaller values perform better on frequently changing indexes, larger values provide better search performance if the index does not change often.

Default: Unlimited (Integer.MAX_VALUE)

hibernate.search.[default|<indexname>].indexwriter.merge_factor

Controls segment merge frequency and size.

Determines how often segment indexes are merged when insertion occurs. With smaller values, less RAM is used while indexing, and searches on unoptimized indexes are faster, but indexing speed is slower. With larger values, more RAM is used during indexing, and while searches on unoptimized indexes are slower, indexing is faster. Thus larger values (> 10) are best for batch index creation, and smaller values (< 10) for indexes that are interactively maintained. The value must not be lower than 2.

Default: 10

hibernate.search.[default|<indexname>].indexwriter.merge_min_size

Controls segment merge frequency and size.

Segments smaller than this size (in MB) are always considered for the next segment merge operation.

Setting this too large might result in expensive merge operations, even though they are less frequent.

See also org.apache.lucene.index.LogDocMergePolicy.minMergeSize.

Default: 0 MB (actually ~1K)

hibernate.search.[default|<indexname>].indexwriter.merge_max_size

Controls segment merge frequency and size.

Segments larger than this size (in MB) are never merged in bigger segments.

This helps reduce memory requirements and avoids some merging operations at the cost of optimal search speed. When optimizing an index this value is ignored.

See also org.apache.lucene.index.LogDocMergePolicy.maxMergeSize.

Default: Unlimited

hibernate.search.[default|<indexname>].indexwriter.merge_max_optimize_size

Controls segment merge frequency and size.

Segments larger than this size (in MB) are not merged in bigger segments even when optimizing the index (see merge_max_size setting as well).

Applied to org.apache.lucene.index.LogDocMergePolicy.maxMergeSizeForOptimize.

Default: Unlimited

hibernate.search.[default|<indexname>].indexwriter.merge_calibrate_by_deletes

Controls segment merge frequency and size.

Set to false to not consider deleted documents when estimating the merge policy.

Applied to org.apache.lucene.index.LogMergePolicy.calibrateSizeByDeletes.

Default: true

hibernate.search.[default|<indexname>].indexwriter.ram_buffer_size

Controls the amount of RAM in MB dedicated to document buffers. When used together with max_buffered_docs a flush occurs for whichever event happens first.

Generally for faster indexing performance it's best to flush by RAM usage instead of document count and use as large a RAM buffer as you can.

Default: 16 MB

hibernate.search.[default|<indexname>].indexwriter.term_index_interval

Expert: Set the interval between indexed terms.

Large values cause less memory to be used by IndexReader, but slow random-access to terms. Small values cause more memory to be used by an IndexReader, and speed random-access to terms. See Lucene documentation for more details.

Default: 128

hibernate.search.[default|<indexname>].indexwriter.use_compound_file

The advantage of using the compound file format is that fewer file descriptors are used. The disadvantage is that indexing takes more time and temporary disk space. You can set this parameter to false in an attempt to improve the indexing time, but you could run out of file descriptors if mergeFactor is also large.

Boolean parameter, use "true" or "false".

Default: true

hibernate.search.enable_dirty_check

Not all entity changes require an update of the Lucene index. If none of the updated entity properties (dirty properties) are indexed, Hibernate Search will skip the re-indexing work.

Disable this option if you use custom FieldBridges which need to be invoked at each update event (even though the property for which the field bridge is configured has not changed).

This optimization will not be applied on classes using a @ClassBridge or a @DynamicBoost.

Boolean parameter, use "true" or "false".

Default: true

Tip

When your architecture permits it, always keep hibernate.search.default.exclusive_index_use=true as it greatly improves efficiency in index writing. This is the default since Hibernate Search version 4.

Tip

To tune the indexing speed it might be useful to time the object loading from database in isolation from the writes to the index. To achieve this set blackhole as worker backend and start your indexing routines. This backend does not disable Hibernate Search: it will still generate the needed changesets to the index, but will discard them instead of flushing them to the index. In contrast to setting the hibernate.search.indexing_strategy to manual, using blackhole will possibly load more data from the database, because associated entities are re-indexed as well.

hibernate.search.[default|<indexname>].worker.backend = blackhole

The recommended approach is to focus first on optimizing the object loading, and then use the timings you achieve as a baseline to tune the indexing process.

Warning

The blackhole backend is not meant to be used in production, only as a tool to identify indexing bottlenecks.

Lucene Directorys have default locking strategies which generally work well enough for most cases, but it's possible to specify for each index managed by Hibernate Search which LockingFactory you want to use. This is generally not needed but could be useful.

Some of these locking strategies require a filesystem level lock and may be used even on RAM based indexes; this combination is valid, but in this case the indexBase configuration option, usually needed only for filesystem based Directory instances, must be specified to point to a filesystem location where to store the lock marker files.

To select a locking factory, set the hibernate.search.<index>.locking_strategy option to one of simple, native, single or none. Alternatively set it to the fully qualified name of an implementation of org.hibernate.search.store.LockFactoryProvider.


Configuration example:

hibernate.search.default.locking_strategy = simple
hibernate.search.Animals.locking_strategy = native
hibernate.search.Books.locking_strategy = org.custom.components.MyLockingFactory

The Infinispan Directory uses a custom implementation; it's still possible to override it but make sure you understand how that will work, especially with clustered indexes.

While Hibernate Search strives to offer a backwards compatible API making it easy to port your application to newer versions, it still delegates to Apache Lucene to handle the index writing and searching. This creates a dependency on the Lucene index format. The Lucene developers of course attempt to keep a stable index format, but sometimes a change in the format can not be avoided. In those cases you either have to reindex all your data or use an index upgrade tool. Sometimes Lucene is also able to read the old format so you don't need to take specific actions (besides making a backup of your index).

While an index format incompatibility is a rare event, it happens more often that Lucene's Analyzer implementations slightly change their behaviour. This can lead to a poor recall score, possibly missing many hits from the results.

Hibernate Search exposes a configuration property hibernate.search.lucene_version which instructs the analyzers and other Lucene classes to conform to their behaviour as defined in an (older) specific version of Lucene. See also org.apache.lucene.util.Version contained in the lucene-core.jar. Depending on the specific version of Lucene you're using you might have different options available. When this option is not specified, Hibernate Search will instruct Lucene to use the default version, which is usually the best option for new projects. Still it's recommended to define the version you're using explicitly in the configuration so that when you happen to upgrade Lucene the analyzers will not change behaviour. You can then choose to update this value at a later time, when you for example have the chance to rebuild the index from scratch.
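For example, to pin analyzer behaviour to Lucene 3.6 (a sketch, assuming LUCENE_36 is a valid value of org.apache.lucene.util.Version in your Lucene distribution):

hibernate.search.lucene_version = LUCENE_36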


This option is global for the configured SearchFactory and affects all Lucene APIs having such a parameter, as this should be applied consistently. So if you are also making use of Lucene bypassing Hibernate Search, make sure to apply the same value too.

After looking at all these different configuration options, it is time to have a look at an API which allows you to programmatically access parts of the configuration. Via the metadata API you can determine the indexed types and also how they are mapped (see Chapter 4, Mapping entities to the index structure) to the index structure. The entry point into this API is the SearchFactory. It offers two methods, namely getIndexedTypes() and getIndexedTypeDescriptor(Class<?>). The former returns a set of all indexed types, whereas the latter allows to retrieve a so called IndexedTypeDescriptor for a given type. This descriptor allows you to determine whether the type is indexed at all and, if so, whether the index is for example sharded or not (see Section 10.4, “Sharding indexes”). It also allows you to determine the static boost of the type (see Section 4.2.1, “Static index time boosting”) as well as its dynamic boost strategy (see Section 4.2.2, “Dynamic index time boosting”). Most importantly, however, you get information about the indexed properties and generated Lucene Document fields. This is exposed via PropertyDescriptors and FieldDescriptors respectively. The easiest way to get to know the API is to explore it via the IDE or its javadocs.

Note

All descriptor instances of the metadata API are read only. They do not allow changing any runtime configuration.
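A minimal sketch of exploring this API, assuming an open FullTextSession and an indexed Book entity (the descriptor methods used beyond those named above are assumptions based on the API's javadoc):

import java.util.Set;

import org.hibernate.search.SearchFactory;
import org.hibernate.search.metadata.IndexedTypeDescriptor;
import org.hibernate.search.metadata.PropertyDescriptor;

SearchFactory searchFactory = fullTextSession.getSearchFactory();

// all types known to be indexed by this SearchFactory
Set<Class<?>> indexedTypes = searchFactory.getIndexedTypes();

// inspect the mapping of a single type
IndexedTypeDescriptor typeDescriptor = searchFactory.getIndexedTypeDescriptor( Book.class );
if ( typeDescriptor.isIndexed() ) {
   for ( PropertyDescriptor property : typeDescriptor.getIndexedProperties() ) {
      System.out.println( property.getName() );
   }
}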

If you are deploying your application on WildFly 8, Hibernate Search is included in the application server. A benefit is that rather than including Hibernate Search jars as a dependency in your application, you can activate the module included in the server.

We provide modules for Hibernate Search, for Apache Lucene and for some useful Solr libraries. The Hibernate Search modules are:

There are two alternative ways to get the application server to make Hibernate Search ORM module available to your deployment:
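For example, one of those ways (a sketch; the module name org.hibernate.search.orm is an assumption based on the server's module naming convention) is declaring the dependency in your deployment's MANIFEST.MF:

Dependencies: org.hibernate.search.orm services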

More details about modules are described in Class Loading in WildFly 8.

Tip

Modular classloading is a feature of JBoss EAP as well, but if you are using JBoss EAP, you're reading the wrong version of the user guide! JBoss EAP subscriptions include official support for Hibernate Search (as part of the WFK) and come with a different edition of this guide specifically tailored for EAP users.

In Chapter 1, Getting started you have already learned that all the metadata information needed to index entities is described through annotations. There is no need for xml mapping files. You can still use Hibernate mapping files for the basic Hibernate configuration, but the Hibernate Search specific configuration has to be expressed via annotations.

Note

There is no XML configuration available for Hibernate Search but we provide a powerful programmatic mapping API that elegantly replaces this kind of deployment form (see Section 4.7, “Programmatic API” for more information).

If you want to contribute the XML mapping implementation, see HSEARCH-210.

Let's start with the most commonly used annotations when mapping an entity.

For each property (or attribute) of your entity, you have the ability to describe how it will be indexed. The default (no annotation present) means that the property is ignored by the indexing process. @Field does declare a property as indexed and allows you to configure several aspects of the indexing process by setting one or more of the following attributes:

  • name: describes under which name the property should be stored in the Lucene Document. The default value is the property name (following the JavaBeans convention)

  • store: describes whether or not the property is stored in the Lucene index. You can store the value Store.YES (consuming more space in the index but allowing projection, see Section 5.1.3.5, “Projection”), store it in a compressed way Store.COMPRESS (this does consume more CPU), or avoid any storage Store.NO (this is the default value). When a property is stored, you can retrieve its original value from the Lucene Document. Storing the property has no impact though on whether the value is searchable or not.

  • index: describes whether the property is indexed or not. The different values are Index.NO (no indexing, i.e. cannot be found by a query), Index.YES (the element gets indexed and is searchable). The default value is Index.YES. Index.NO can be useful for cases where a property is not required to be searchable, but should be available for projection.

    Tip

    Index.NO in combination with Analyze.YES or Norms.YES is not useful, since analyze and norms require the property to be indexed

  • analyze: determines whether the property is analyzed (Analyze.YES) or not (Analyze.NO). The default value is Analyze.YES.

    Tip

    Whether or not you want to analyze a property depends on whether you wish to search the element as is, or by the words it contains. It makes sense to analyze a text field, but probably not a date field.

    Tip

    Fields used for sorting or faceting must not be analyzed.

  • norms: describes whether index time boosting information should be stored (Norms.YES) or not (Norms.NO). Not storing the norms can save a considerable amount of memory, but index time boosting will not be available in this case. The default value is Norms.YES.

  • termVector: describes collections of term-frequency pairs. This attribute enables the storing of the term vectors within the documents during indexing. The default value is TermVector.NO.

    The different values of this attribute are:

    Value | Definition
    TermVector.YES: Store the term vectors of each document. This produces two synchronized arrays, one contains document terms and the other contains the term's frequency.
    TermVector.NO: Do not store term vectors.
    TermVector.WITH_OFFSETS: Store the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms.
    TermVector.WITH_POSITIONS: Store the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document.
    TermVector.WITH_POSITION_OFFSETS: Store the term vector, token position and offset information. This is a combination of the YES, WITH_OFFSETS and WITH_POSITIONS.
  • indexNullAs: Per default null values are ignored and not indexed. However, using indexNullAs you can specify a string which will be inserted as token for the null value. Per default this value is set to org.hibernate.search.annotations.Field.DO_NOT_INDEX_NULL indicating that null values should not be indexed. You can set this value to DEFAULT_NULL_TOKEN to indicate that a default null token should be used. This default null token can be specified in the configuration using hibernate.search.default_null_token. If this property is not set the string "_null_" will be used as default.

    Note

    When indexNullAs is used, it is important to use the chosen null token in search queries (see Querying) in order to find null values. It is also advisable to use this feature only with un-analyzed fields (analyze=Analyze.NO).

    Warning

    When implementing a custom FieldBridge or TwoWayFieldBridge it is up to the developer to handle the indexing of null values (see JavaDocs of LuceneOptions.indexNullAs()).
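Putting several of these attributes together, a property mapping could look like the following sketch (the property and field names are illustrative):

@Field(name = "summary",
       store = Store.YES,
       index = Index.YES,
       analyze = Analyze.YES,
       norms = Norms.NO,
       termVector = TermVector.WITH_POSITIONS)
private String description;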

There exists also a companion annotation to @Field, called @NumericField. It can be specified in the same scope as @Field or @DocumentId, but only on Integer, Long, Float or Double properties. When used, the annotated property will be indexed using a Trie structure. This enables efficient range queries and sorting, resulting in query response times being orders of magnitude faster than the same query with plain @Field. The @NumericField annotation accepts the following parameters:

  • forField: (Optional) Specifies the name of the related @Field that will be indexed numerically. It is only mandatory when the property contains more than one @Field declaration.

  • precisionStep: (Optional) Changes the way the Trie structure is stored in the index. Smaller precisionSteps lead to more disk space usage and faster range and sort queries. Larger values lead to less space used and range query performance closer to the range query on normal @Fields. The default value is 4.
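As a quick sketch, the following hypothetical mapping indexes a Long property numerically with a custom precision step:

@Entity
@Indexed
public class Essay {
    @Id @GeneratedValue
    private Integer id;

    // indexed as a numeric (Trie) field; range queries and sorting on
    // wordCount become much faster than with a plain string-encoded @Field
    @Field
    @NumericField(precisionStep = 6)
    private Long wordCount;
}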

Lucene supports the numeric types Double, Long, Integer and Float. Other numeric types should use the default string encoding (via @Field), unless the application can deal with a potential loss in precision, in which case a custom NumericFieldBridge can be used. See Example 4.2, “Defining a custom NumericFieldBridge for BigDecimal”.


You would use this custom bridge as seen in Example 4.3, “Use of BigDecimalNumericFieldBridge”. In this case three annotations are used: @Field, @NumericField and @FieldBridge. @Field is required to mark the property for being indexed (a standalone @NumericField is never allowed). @NumericField might be omitted in this specific case, because the @FieldBridge annotation already refers to a NumericFieldBridge instance. However, the use of @NumericField is recommended to make the numeric treatment of the property explicit.


Associated objects as well as embedded objects can be indexed as part of the root entity index. This is useful if you expect to search a given entity based on properties of associated objects. In Example 4.6, “Indexing associations” the aim is to return places where the associated city is Atlanta (in the Lucene query parser language, it would translate into address.city:Atlanta). The place fields will be indexed in the Place index. The Place index documents will also contain the fields address.id, address.street, and address.city which you will be able to query.


Be careful. Because the data is denormalized in the Lucene index when using the @IndexedEmbedded technique, Hibernate Search needs to be aware of any change in the Place object and any change in the Address object to keep the index up to date. To make sure the Place Lucene document is updated when its Address changes, you need to mark the other side of the bidirectional relationship with @ContainedIn.
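A minimal sketch of such a mapping, along the lines of Example 4.6 (getters/setters omitted, association details are assumptions):

@Entity
@Indexed
public class Place {
    @Id @GeneratedValue
    @DocumentId
    private Long id;

    @Field
    private String name;

    // Address fields are indexed into the Place index, prefixed with "address."
    @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    @IndexedEmbedded
    private Address address;
}

@Entity
public class Address {
    @Id @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    // keeps the Place document up to date when this Address changes
    @OneToOne(mappedBy = "address")
    @ContainedIn
    private Place place;
}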

Tip

@ContainedIn is useful on both associations pointing to entities and on embedded (collection of) objects.

Let's make Example 4.6, “Indexing associations” a bit more complex by nesting @IndexedEmbedded as seen in Example 4.7, “Nested usage of @IndexedEmbedded and @ContainedIn”.


As you can see, any @*ToMany, @*ToOne or @Embedded attribute can be annotated with @IndexedEmbedded. The attributes of the associated class will then be added to the main entity index. In Example 4.7, “Nested usage of @IndexedEmbedded and @ContainedIn” the index will contain the following fields:

  • id

  • name

  • address.street

  • address.city

  • address.ownedBy_name

The default prefix is propertyName., following the traditional object navigation convention. You can override it using the prefix attribute, as shown on the ownedBy property.

Note

The prefix cannot be set to the empty string.

The depth property is necessary when the object graph contains a cyclic dependency of classes (not instances), for example if Owner points back to Place. A class having a self reference is another example of a cyclic dependency. Hibernate Search will stop including indexed embedded attributes after reaching the expected depth (or when the object graph boundaries are reached). In our example, because depth is set to 1, any @IndexedEmbedded attribute in Owner (if any) will be ignored; a sketch follows.
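A sketch of what the nested mapping could look like on the Address side (the ownedBy property and the prefix follow the discussion above; details are assumptions):

@Entity
public class Address {
    ...
    // Owner fields are added to the root index with the "ownedBy_" prefix;
    // depth = 1 breaks the class-level cycle: embedded attributes inside
    // Owner are not followed any further
    @Embedded
    @IndexedEmbedded(depth = 1, prefix = "ownedBy_")
    private Owner ownedBy;
}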

Using @IndexedEmbedded for object associations allows you to express queries (using Lucene's query syntax) such as:

  • Return places where name contains JBoss and where address city is Atlanta. In Lucene query this would be

    +name:jboss +address.city:atlanta
  • Return places where name contains JBoss and where the owner's name contains Joe. In Lucene query this would be

    +name:jboss +address.ownedBy_name:joe

In a way it mimics the relational join operation, in a more efficient way (at the cost of data duplication). Remember that, out of the box, Lucene indexes have no notion of association; the join operation simply does not exist. It might help to keep the relational model normalized while benefiting from the full text index speed and feature richness.

Note

An associated object can itself (but does not have to) be @Indexed.

When @IndexedEmbedded points to an entity, the association has to be bidirectional and the other side has to be annotated with @ContainedIn (as seen in the previous example). If not, Hibernate Search has no way to update the root index when the associated entity is updated (in our example, a Place index document has to be updated when the associated Address instance is updated).

Sometimes, the object type annotated by @IndexedEmbedded is not the object type targeted by Hibernate and Hibernate Search. This is especially the case when interfaces are used in lieu of their implementation. For this reason you can override the object type targeted by Hibernate Search using the targetElement parameter.


The @IndexedEmbedded annotation also provides an attribute includePaths which can be used as an alternative to depth, or be combined with it.

When using only depth all indexed fields of the embedded type will be added recursively at the same depth; this makes it harder to pick only a specific path without adding all other fields as well, which might not be needed.

To avoid unnecessarily loading and indexing entities you can specify exactly which paths are needed. A typical application might need different depths for different paths, or in other words it might need to specify paths explicitly, as shown in Example 4.9, “Using the includePaths property of @IndexedEmbedded”.


Using a mapping as in Example 4.9, “Using the includePaths property of @IndexedEmbedded”, you would be able to search on a Person by name and/or surname, and/or the name of the parent. It will not index the surname of the parent, so searching on parents' surnames will not be possible; on the other hand this speeds up indexing, saves space and improves overall performance.

The @IndexedEmbedded includePaths will include the specified paths in addition to what you would index normally by specifying a limited value for depth. When using includePaths and leaving depth undefined, the behavior is equivalent to setting depth=0: only the included paths are indexed.
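A sketch in the spirit of Example 4.10, assuming a self-referencing Human entity:

@Entity
@Indexed
public class Human {
    @Id @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    private String name;

    @Field
    private String surname;

    // depth = 2 indexes parents and grandparents fully; includePaths
    // additionally pulls in the great-grandparents' name only
    @ManyToMany
    @IndexedEmbedded(depth = 2, includePaths = { "parents.parents.name" })
    private Set<Human> parents;
}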


In Example 4.10, “Using the includePaths property of @IndexedEmbedded”, every human will have their name and surname attributes indexed. The name and surname of parents will be indexed too, recursively up to the second level because of the depth attribute. It will be possible to search by the name or surname of a person directly, of their parents or of their grandparents. Beyond the second level, one more level is indexed in addition, but only the name, not the surname.

This results in the following fields in the index:

  • id - as primary key

  • _hibernate_class - stores entity type

  • name - as direct field

  • surname - as direct field

  • parents.name - as embedded field at depth 1

  • parents.surname - as embedded field at depth 1

  • parents.parents.name - as embedded field at depth 2

  • parents.parents.surname - as embedded field at depth 2

  • parents.parents.parents.name - as an additional path as specified by includePaths. The first parents. is inferred from the field name, the remaining path is the attribute of includePaths

Having explicit control of the indexed paths might be easier if you're designing your application by defining the needed queries first, as at that point you might know exactly which fields you need, and which other fields are unnecessary to implement your use case.

Lucene has the notion of boosting which allows you to give certain documents or fields more or less importance than others. Lucene differentiates between index and search time boosting. The following sections show you how you can achieve index time boosting using Hibernate Search.

To define a static boost value for an indexed class or property you can use the @Boost annotation. You can use this annotation within @Field or specify it directly at method or class level.
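A sketch consistent with the boost factors discussed below (the Essay entity is assumed):

@Entity
@Indexed
@Boost(1.7f)
public class Essay {
    @Id @GeneratedValue
    @DocumentId
    private Long id;

    // @Field.boost and @Boost on a property are cumulative: 2 * 1.5 = 3.0
    @Field(store = Store.YES, boost = @Boost(2f))
    @Boost(1.5f)
    private String summary;

    @Field
    private String isbn;

    @Field(boost = @Boost(1.2f))
    private String text;
}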


In Example 4.11, “Different ways of using @Boost”, Essay's probability to reach the top of the search list will be multiplied by 1.7. The summary field will be 3.0 times (2 * 1.5, because @Field.boost and @Boost on a property are cumulative) more important than the isbn field. The text field will be 1.2 times more important than the isbn field. Note that this explanation is wrong in strictest terms, but it is simple and close enough to reality for all practical purposes. Please check the Lucene documentation or the excellent Lucene In Action by Otis Gospodnetic and Erik Hatcher.

The @Boost annotation used in Section 4.2.1, “Static index time boosting” defines a static boost factor which is independent of the state of the indexed entity at runtime. However, there are use cases in which the boost factor may depend on the actual state of the entity. In this case you can use the @DynamicBoost annotation together with an accompanying custom BoostStrategy.


In Example 4.12, “Dynamic boost example” a dynamic boost is defined at class level, specifying VIPBoostStrategy as the implementation of the BoostStrategy interface to be used at indexing time. You can place the @DynamicBoost either at class or field level. Depending on the placement of the annotation, either the whole entity or just the annotated field/property value is passed to the defineBoost method. It is up to you to cast the passed object to the correct type. In the example all indexed values of a VIP person would be twice as important as the values of a normal person.

Note

The specified BoostStrategy implementation must define a public no-arg constructor.
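A sketch of what Example 4.12 describes, assuming a Person entity with a VIP flag:

public class VIPBoostStrategy implements BoostStrategy {
    public float defineBoost(Object value) {
        // at class level the whole entity is passed in
        Person person = (Person) value;
        return person.isVip() ? 2.0f : 1.0f;
    }
}

@Entity
@Indexed
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
    ...
}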

Of course you can mix and match @Boost and @DynamicBoost annotations in your entity. All defined boost factors are cumulative.

Analysis is the process of converting text into single terms (words) and can be considered as one of the key features of a fulltext search engine. Lucene uses the concept of Analyzers to control this process. In the following section we cover the multiple ways Hibernate Search offers to configure the analyzers.

Analyzers can become quite complex to deal with. For this reason Hibernate Search introduces the notion of analyzer definitions. An analyzer definition can be reused by many @Analyzer declarations and is composed of:

  • a name: the unique reference used to refer to the definition

  • a list of char filters: each char filter is responsible for pre-processing the input characters before tokenization

  • a tokenizer: responsible for tokenizing the input stream into individual words

  • a list of filters: each filter is responsible for removing, modifying or sometimes adding words to the stream provided by the tokenizer

This separation of tasks - a list of char filters, and a tokenizer followed by a list of filters - allows for easy reuse of each individual component and lets you build your custom analyzer in a very flexible way (just like Lego). Generally speaking the char filters do some pre-processing of the character input, then the Tokenizer starts the tokenizing process by turning the character input into tokens which are then further processed by the TokenFilters. Hibernate Search supports this infrastructure by utilizing the Solr analyzer framework.

Note

Some of the analyzers and filters require additional dependencies. For example to use the snowball stemmer you have to also include the lucene-snowball jar, and for the PhoneticFilterFactory you need the commons-codec jar. Your distribution of Hibernate Search provides these dependencies in its lib/optional directory. Have a look at Table 4.2, “Example of available tokenizers” and Table 4.3, “Examples of available filters” to see which analyzers and filters have additional dependencies.

Prior to Hibernate Search version 3.3.0.Beta2 it was required to add the Solr dependency org.apache.solr:solr-core when you wanted to use the analyzer definition framework. In case you are using Maven this is no longer needed: all required Solr dependencies are now defined as dependencies of the artifact org.hibernate:hibernate-search-analyzers; just add the following dependency:

<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search-analyzers</artifactId>
   <version>4.5.3.Final</version>
</dependency>

Let's have a look at a concrete example now - Example 4.14, “@AnalyzerDef and the Solr framework”. First, a char filter is defined by its factory. In our example, a mapping char filter is used, which will replace characters in the input based on the rules specified in the mapping file. Next a tokenizer is defined. This example uses the standard tokenizer. Last but not least, a list of filters is defined by their factories. In our example, the StopFilter filter is built reading the dedicated words property file. The filter is also expected to ignore case.
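A sketch along the lines of Example 4.14 (the resource file paths are illustrative):

@AnalyzerDef(name = "customanalyzer",
  charFilters = {
    @CharFilterDef(factory = MappingCharFilterFactory.class, params = {
      @Parameter(name = "mapping", value = "mapping-chars.properties")
    })
  },
  tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
  filters = {
    @TokenFilterDef(factory = LowerCaseFilterFactory.class),
    @TokenFilterDef(factory = StopFilterFactory.class, params = {
      @Parameter(name = "words", value = "stoplist.properties"),
      @Parameter(name = "ignoreCase", value = "true")
    })
  })
@Entity
@Indexed
public class Team {
    ...
}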


Tip

Filters and char filters are applied in the order they are defined in the @AnalyzerDef annotation. Order matters!

Some tokenizers, token filters or char filters load resources like a configuration or metadata file. This is the case for the stop filter and the synonym filter. If the resource charset is not using the VM default, you can explicitly specify it by adding a resource_charset parameter.
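For instance, a synonym filter reading a UTF-16 encoded resource might be declared like this (the file name is illustrative):

@TokenFilterDef(factory = SynonymFilterFactory.class, params = {
  @Parameter(name = "synonyms", value = "synonyms.properties"),
  // tell Hibernate Search which charset the resource file uses
  @Parameter(name = "resource_charset", value = "UTF-16BE")
})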


Once defined, an analyzer definition can be reused by an @Analyzer declaration as seen in Example 4.16, “Referencing an analyzer by name”.
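A sketch of such a declaration, assuming the customanalyzer definition from above:

@Entity
@Indexed
public class Team {
    @Field
    private String name;

    @Field
    private String location;

    // analyzed with the named definition instead of the default analyzer
    @Field
    @Analyzer(definition = "customanalyzer")
    private String description;
}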


Analyzer instances declared by @AnalyzerDef are also available by their name in the SearchFactory, which is quite useful when building queries.

Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");

Fields in queries should be analyzed with the same analyzer used to index the field so that they speak a common "language": the same tokens are reused between the query and the indexing process. This rule has some exceptions but is true most of the time. Respect it unless you know what you are doing.

Solr and Lucene come with a lot of useful default char filters, tokenizers and filters. You can find a complete list of char filter factories, tokenizer factories and filter factories at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters. Let's check a few of them.



Table 4.3. Examples of available filters

  • StandardFilterFactory: Removes dots from acronyms and 's from words. Parameters: none. Additional dependencies: solr-core.

  • LowerCaseFilterFactory: Lowercases all words. Parameters: none. Additional dependencies: solr-core.

  • StopFilterFactory: Removes words (tokens) matching a list of stop words. Parameters: words (points to a resource file containing the stop words), ignoreCase (true if case should be ignored when comparing stop words, false otherwise). Additional dependencies: solr-core.

  • SnowballPorterFilterFactory: Reduces a word to its root in a given language (e.g. protect, protects and protection share the same root). Using such a filter allows searches to match related words. Parameters: language (Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, Swedish and a few more). Additional dependencies: solr-core.

  • ISOLatin1AccentFilterFactory: Removes accents for languages like French. Parameters: none. Additional dependencies: solr-core.

  • PhoneticFilterFactory: Inserts phonetically similar tokens into the token stream. Parameters: encoder (one of DoubleMetaphone, Metaphone, Soundex or RefinedSoundex), inject (true will add tokens to the stream, false will replace the existing token), maxCodeLength (sets the maximum length of the code to be generated; supported only for Metaphone and DoubleMetaphone encodings). Additional dependencies: solr-core and commons-codec.

  • CollationKeyFilterFactory: Converts each token into its java.text.CollationKey, and then encodes the CollationKey with IndexableBinaryStringTools, to allow it to be stored as an index term. Parameters: custom, language, country, variant, strength, decomposition (see Lucene's CollationKeyFilter javadocs for more info). Additional dependencies: solr-core and commons-io.

We recommend checking all the implementations of org.apache.solr.analysis.TokenizerFactory and org.apache.solr.analysis.TokenFilterFactory in your IDE to see which implementations are available.

So far all the introduced ways to specify an analyzer were static. However, there are use cases where it is useful to select an analyzer depending on the current state of the entity to be indexed, for example in multilingual applications. For a BlogEntry class, for example, the analyzer could depend on the language property of the entry. Depending on this property the correct language specific stemmer should be chosen to index the actual text.

To enable this dynamic analyzer selection Hibernate Search introduces the AnalyzerDiscriminator annotation. Example 4.17, “Usage of @AnalyzerDiscriminator” demonstrates the usage of this annotation.
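A sketch of what such a setup could look like (the LanguageDiscriminator class and the analyzer definitions are assumptions):

@Entity
@Indexed
@AnalyzerDefs({
  @AnalyzerDef(name = "en",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = { @TokenFilterDef(factory = EnglishPorterFilterFactory.class) }),
  @AnalyzerDef(name = "de",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = { @TokenFilterDef(factory = GermanStemFilterFactory.class) })
})
public class BlogEntry {
    @Id @GeneratedValue @DocumentId
    private Integer id;

    // the value of this property selects the analyzer definition per document
    @Field
    @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
    private String language;

    @Field
    private String text;
}

public class LanguageDiscriminator implements Discriminator {
    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        if (value == null || !(entity instanceof BlogEntry)) {
            return null; // fall back to the default analyzer
        }
        return (String) value; // "en" or "de", matching the @AnalyzerDef names
    }
}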


The prerequisite for using @AnalyzerDiscriminator is that all analyzers which are going to be used dynamically are predefined via @AnalyzerDef definitions. If this is the case, you can place the @AnalyzerDiscriminator annotation either on the class or on a specific property of the entity for which to dynamically select an analyzer. Via the impl parameter of the @AnalyzerDiscriminator you specify a concrete implementation of the Discriminator interface. It is up to you to provide an implementation for this interface. The only method you have to implement is getAnalyzerDefinitionName(), which gets called for each field added to the Lucene document. The entity which is getting indexed is also passed to the interface method. The value parameter is only set if the @AnalyzerDiscriminator is placed at property level instead of class level; in this case the value represents the current value of this property.

An implementation of the Discriminator interface has to return the name of an existing analyzer definition, or null if the default analyzer should not be overridden. Example 4.17, “Usage of @AnalyzerDiscriminator” assumes that the language parameter is either 'de' or 'en', which matches the specified names in the @AnalyzerDefs.

In some situations retrieving analyzers can be handy. For example, if your domain model makes use of multiple analyzers (maybe to benefit from stemming, use phonetic approximation and so on), you need to make sure to use the same analyzers when you build your query.

Whether you are using the Lucene programmatic API or the Lucene query parser, you can retrieve the scoped analyzer for a given entity. A scoped analyzer is an analyzer which applies the right analyzers depending on the field indexed. Remember, multiple analyzers can be defined on a given entity each one working on an individual field. A scoped analyzer unifies all these analyzers into a context-aware analyzer. While the theory seems a bit complex, using the right analyzer in a query is very easy.
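Here is a sketch of such a query, assuming a Song entity whose title is indexed in the fields title and title_stemmed (the Version constant matches Lucene 3.6, which this Hibernate Search version builds on):

SearchFactory searchFactory = fullTextSession.getSearchFactory();
QueryParser parser = new QueryParser(
    Version.LUCENE_36, "title", searchFactory.getAnalyzer( Song.class ) );

org.apache.lucene.search.Query luceneQuery =
    parser.parse( "title:sky OR title_stemmed:diamond" );

org.hibernate.Query fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Song.class );

List result = fullTextQuery.list(); //returns a list of managed objects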


In the example above, the song title is indexed in two fields: the standard analyzer is used in the field title and a stemming analyzer is used in the field title_stemmed. By using the analyzer provided by the search factory, the query uses the appropriate analyzer depending on the field targeted.

Tip

You can also retrieve analyzers defined via @AnalyzerDef by their definition name using searchFactory.getAnalyzer(String).

When discussing the basic mapping for an entity one important fact was so far disregarded: in Lucene all index fields have to be represented as strings. All entity properties annotated with @Field have to be converted to strings to be indexed. The reason we have not mentioned it so far is that for most of your properties Hibernate Search does the translation job for you thanks to a set of built-in bridges. However, in some cases you need more fine grained control over the translation process.

Hibernate Search comes bundled with a set of built-in bridges between a Java property type and its full text representation.

null

By default null elements are not indexed. Lucene does not support null elements. However, in some situations it can be useful to insert a custom token representing the null value. See Section 4.1.1.2, “@Field” for more information.

java.lang.String

Strings are indexed as is.

short, Short, int, Integer, long, Long, float, Float, double, Double, BigInteger, BigDecimal

Numbers are converted into their string representation. Note that numbers cannot be compared by Lucene (i.e. used in range queries) out of the box: they have to be padded.

Note

Using a Range query has drawbacks; an alternative approach is to use a Filter query which will filter the result set to the appropriate range.

Hibernate Search will support a padding mechanism.

java.util.Date

Dates are stored as yyyyMMddHHmmssSSS in GMT time (for example 20061107210300012 for Nov 7th of 2006, 4:03PM and 12ms EST, i.e. 9:03PM GMT). You shouldn't really bother with the internal format. What is important is that when using a TermRangeQuery, the dates have to be expressed in GMT time.

Usually, storing the date up to the millisecond is not necessary. @DateBridge defines the appropriate resolution you are willing to store in the index (@DateBridge(resolution=Resolution.DAY)). The date pattern will then be truncated accordingly.

@Entity
@Indexed
public class Meeting {
    @Field(analyze=Analyze.NO)
    @DateBridge(resolution=Resolution.MINUTE)
    private Date date;
    ...
}

Warning

A Date whose resolution is coarser than MILLISECOND cannot be a @DocumentId.

Important

The default Date bridge uses Lucene's DateTools to convert from and to String. This means that all dates are expressed in GMT time. If your requirement is to store dates in a fixed time zone you have to implement a custom date bridge. Make sure you understand the requirements of your application regarding date indexing and searching.

java.net.URI, java.net.URL

URI and URL are converted to their string representation.

java.lang.Class

Classes are converted to their fully qualified class name. The thread context classloader is used when the class is rehydrated.

Hibernate Search allows you to extract text from various document types using the built-in TikaBridge, which utilizes Apache Tika to extract text and metadata from the provided documents. The @TikaBridge annotation can be used with String, URI, byte[] or java.sql.Blob properties. In the case of String and URI the bridge interprets the values as file paths and tries to open a file to parse the document. In the case of byte[] and Blob the values are directly passed to Tika for parsing.

Tika uses metadata as in- and output of the parsing process and it also allows you to provide additional context information. This process is described in the Parser interface. The Hibernate Search Tika bridge allows you to make use of these additional configuration options by providing two interfaces in conjunction with TikaBridge. The first interface is TikaParseContextProvider. It allows you to create a custom ParseContext for the document parsing. The second interface is TikaMetadataProcessor, which has two methods - prepareMetadata() and set(String, Object, Document, LuceneOptions, Metadata metadata). The former allows you to add additional metadata to the parsing process (for example the file name) and the latter allows you to index metadata discovered during the parsing process.

TikaParseContextProvider as well as TikaMetadataProcessor implementation classes can both be specified as parameters on the TikaBridge annotation.
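A sketch of such a mapping, assuming a custom Mp3TikaMetadataProcessor implementation that indexes selected MP3 headers:

@Entity
@Indexed
public class Song {
    @Id @GeneratedValue
    private Integer id;

    // the String value is interpreted as a file path; Tika parses the file
    // and the metadata processor decides which metadata gets indexed
    @Field
    @TikaBridge(metadataProcessor = Mp3TikaMetadataProcessor.class)
    private String mp3FileName;
}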


In Example 4.19, “Example mapping with Apache Tika” the property mp3FileName represents a path to an MP3 file; the headers of this file will be indexed and so the performed query will be able to match the MP3 metadata.

Warning

TikaBridge does not implement TwoWayFieldBridge: queries built using the DSL (as in Example 4.19, “Example mapping with Apache Tika”) need to explicitly enable the option ignoreFieldBridge().

Sometimes, the built-in bridges of Hibernate Search do not cover some of your property types, or the String representation used by the bridge does not meet your requirements. The following paragraphs describe several solutions to this problem.

The simplest custom solution is to give Hibernate Search an implementation of your expected Object to String bridge. To do so you need to implement the org.hibernate.search.bridge.StringBridge interface. All implementations have to be thread-safe as they are used concurrently.
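A sketch of such a bridge, padding integers so that their lexicographic order matches their numeric order:

/**
 * Padding Integer bridge.
 * All numbers will be padded with 0 to match 5 digits.
 */
public class PaddedIntegerBridge implements StringBridge {

    protected int padding = 5; // default padding

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding) {
            throw new IllegalArgumentException("Number too big to be padded");
        }
        StringBuilder paddedInteger = new StringBuilder();
        for (int padIndex = rawInteger.length(); padIndex < padding; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append(rawInteger).toString();
    }
}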


Given the string bridge defined in Example 4.20, “Custom StringBridge implementation”, any property or field can use this bridge thanks to the @FieldBridge annotation:

@FieldBridge(impl = PaddedIntegerBridge.class)
private Integer length;

Parameters can also be passed to the bridge implementation, making it more flexible. Example 4.21, “Passing parameters to your bridge implementation” implements the ParameterizedBridge interface, and parameters are passed through the @FieldBridge annotation.
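A sketch of a parameterized version, extending the bridge from above (the class and parameter names are illustrative):

public class ParameterizedPaddedIntegerBridge extends PaddedIntegerBridge
        implements ParameterizedBridge {

    public static final String PADDING_PROPERTY = "padding";

    public void setParameterValues(Map<String, String> parameters) {
        String paddingParam = parameters.get(PADDING_PROPERTY);
        if (paddingParam != null) {
            this.padding = Integer.parseInt(paddingParam); // override the default
        }
    }
}

//usage
@FieldBridge(impl = ParameterizedPaddedIntegerBridge.class,
             params = @Parameter(name = "padding", value = "10"))
private Integer length;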


The ParameterizedBridge interface can be implemented by StringBridge, TwoWayStringBridge and FieldBridge implementations.

All implementations have to be thread-safe, but the parameters are set during initialization and no special care is required at this stage.

If you expect to use your bridge implementation on an id property (i.e. annotated with @DocumentId), you need to use a slightly extended version of StringBridge named TwoWayStringBridge. Hibernate Search needs to read the string representation of the identifier and generate the object out of it. There is no difference in the way the @FieldBridge annotation is used.
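A sketch of the two-way variant, again extending the padding bridge from above; stringToObject simply reverses objectToString:

public class TwoWayPaddedIntegerBridge extends PaddedIntegerBridge
        implements TwoWayStringBridge {

    public Object stringToObject(String stringValue) {
        return Integer.valueOf(stringValue); // leading zeros are ignored when parsing
    }
}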


Important

It is important for the two-way process to be idempotent (i.e. object = stringToObject(objectToString(object))).

Some use cases require more than a simple object to string translation when mapping a property to a Lucene index. To give you the greatest possible flexibility you can also implement a bridge as a FieldBridge. This interface gives you a property value and lets you map it the way you want in your Lucene Document. You can for example store a given property in two different document fields. The interface is very similar in its concept to the Hibernate UserType.

Example 4.23. Implementing the FieldBridge interface

/**
 * Store the date in 3 different fields - year, month, day - to ease Range Query per
 * year, month or day (eg get all the elements of December for the last 5 years).
 * @author Emmanuel Bernard
 */
public class DateSplitBridge implements FieldBridge {
    private final static TimeZone GMT = TimeZone.getTimeZone("GMT");

    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        Date date = (Date) value;
        Calendar cal = GregorianCalendar.getInstance(GMT);
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        int month = cal.get(Calendar.MONTH) + 1;
        int day = cal.get(Calendar.DAY_OF_MONTH);

        // set year
        luceneOptions.addFieldToDocument(
            name + ".year",
            String.valueOf( year ),
            document );

        // set month and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".month",
            ( month < 10 ? "0" : "" ) + month,
            document );

        // set day and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".day",
            ( day < 10 ? "0" : "" ) + day,
            document );
    }
}

//property
@FieldBridge(impl = DateSplitBridge.class)
private Date date;

In Example 4.23, “Implementing the FieldBridge interface” the fields are not added directly to the Document. Instead the addition is delegated to the LuceneOptions helper; this helper will apply the options you have selected on @Field, like Store or TermVector, or apply the chosen @Boost value. It is especially useful to encapsulate the complexity of COMPRESS implementations. Even though it is recommended to delegate to LuceneOptions to add fields to the Document, nothing stops you from editing the Document directly and ignoring the LuceneOptions in case you need to.

Tip

Classes like LuceneOptions are created to shield your application from changes in Lucene API and simplify your code. Use them if you can, but if you need more flexibility you're not required to.

It is sometimes useful to combine more than one property of a given entity and index this combination in a specific way into the Lucene index. The @ClassBridge and @ClassBridges annotations can be defined at class level (as opposed to the property level). In this case the custom field bridge implementation receives the entity instance as the value parameter instead of a particular property. Though not shown in Example 4.24, “Implementing a class bridge”, @ClassBridge supports the termVector attribute discussed in Section 4.1.1, “Basic mapping”.

Example 4.24. Implementing a class bridge

@Entity
@Indexed
@ClassBridge(name="branchnetwork",
             store=Store.YES,
             impl = CatFieldsClassBridge.class,
             params = @Parameter( name="sepChar", value=" " ) )
public class Department {
    private int id;
    private String network;
    private String branchHead;
    private String branch;
    private Integer maxEmployees;
    ...
}
public class CatFieldsClassBridge implements FieldBridge, ParameterizedBridge {
    private String sepChar;
    public void setParameterValues(Map parameters) {
        this.sepChar = (String) parameters.get( "sepChar" );
    }
    public void set( String name, Object value, Document document, LuceneOptions luceneOptions) {
        // In this particular class the name of the new field was passed
        // from the name field of the ClassBridge Annotation. This is not
        // a requirement. It just works that way in this instance. The
        // actual name could be supplied by hard coding it below.
        Department dep = (Department) value;
        String fieldValue1 = dep.getBranch();
        if ( fieldValue1 == null ) {
            fieldValue1 = "";
        }
        String fieldValue2 = dep.getNetwork();
        if ( fieldValue2 == null ) {
            fieldValue2 = "";
        }
        String fieldValue = fieldValue1 + sepChar + fieldValue2;
        Field field = new Field( name, fieldValue, luceneOptions.getStore(),
            luceneOptions.getIndex(), luceneOptions.getTermVector() );
        field.setBoost( luceneOptions.getBoost() );
        document.add( field );
   }
}

In this example the particular CatFieldsClassBridge is applied to the department instance; the field bridge then concatenates both branch and network and indexes the concatenation.

In some situations, you want to index an entity only when it is in a given state, for example:

  • only index blog entries once they have been published (and not while they are drafts)

  • only index the subset of entities that actually needs to be searchable

This serves both functional and technical needs. You don't want your blog readers to find your draft entries, and filtering them out of every query is a bit annoying. Also, very few of your entities may actually be required to be indexed, and you want to limit indexing overhead and keep indexes small and fast.

Hibernate Search lets you intercept entity indexing operations and override them. It is quite simple:

  • write an EntityIndexingInterceptor implementation for your entity

  • register it on the entity via @Indexed(interceptor=...)

Let's look at the blog example in Example 4.25, “Index blog entries only when they are published and remove them when they are in a different state”.

Example 4.25. Index blog entries only when they are published and remove them when they are in a different state

/**
 * Only index blog when it is in published state
 *
 * @author Emmanuel Bernard <emmanuel@hibernate.org>
 */
public class IndexWhenPublishedInterceptor implements EntityIndexingInterceptor<Blog> {
    @Override
    public IndexingOverride onAdd(Blog entity) {
        if (entity.getStatus() == BlogStatus.PUBLISHED) {
            return IndexingOverride.APPLY_DEFAULT;
        }
        return IndexingOverride.SKIP;
    }
    @Override
    public IndexingOverride onUpdate(Blog entity) {
        if (entity.getStatus() == BlogStatus.PUBLISHED) {
            return IndexingOverride.UPDATE;
        }
        return IndexingOverride.REMOVE;
    }
    @Override
    public IndexingOverride onDelete(Blog entity) {
        return IndexingOverride.APPLY_DEFAULT;
    }
    @Override
    public IndexingOverride onCollectionUpdate(Blog entity) {
        return onUpdate(entity);
    }
}
@Entity
@Indexed(interceptor=IndexWhenPublishedInterceptor.class)
public class Blog {
    @Id
    @GeneratedValue
    public Integer getId() { return id; }
    public void setId(Integer id) {  this.id = id; }
    private Integer id;
    @Field
    public String getTitle() { return title; }
    public void setTitle(String title) {  this.title = title; }
    private String title;
    public BlogStatus getStatus() { return status; }
    public void setStatus(BlogStatus status) {  this.status = status; }
    private BlogStatus status;
    [...]
}

We mark the Blog entity with @Indexed(interceptor=...). As you can see, IndexWhenPublishedInterceptor implements EntityIndexingInterceptor and accepts Blog entities (it could have accepted superclasses as well - for example Object, if you wanted to create a generic interceptor).

You can react to several planned indexing events:

  • when an entity is added to your datastore

  • when an entity is updated in your datastore

  • when an entity is deleted from your datastore

  • when a collection owned by this entity is updated in your datastore

For each occurring event you can respond with one of the following actions:

  • APPLY_DEFAULT: that's the basic operation that lets Hibernate Search update the index as expected - creating, updating or removing the document

  • SKIP: ask Hibernate Search to not do anything to the index for this event - data will not be created, updated or removed from the index in any way

  • REMOVE: ask Hibernate Search to remove indexing data about this entity - you can safely ask for REMOVE even if the entity has not yet been indexed

  • UPDATE: ask Hibernate Search to either index or update the index for this entity - it is safe to ask for UPDATE even if the entity has never been indexed

Note

Be careful, not every combination makes sense: for example, asking for UPDATE upon onDelete. Note that you could ask for SKIP in this situation if saving indexing time is critical for you. That is rarely the case though.

By default, no interceptor is applied on an entity. You have to explicitly define an interceptor via the @Indexed annotation (see Section 4.1.1.1, “@Indexed”) or programmatically (see Section 4.7, “Programmatic API”). This class and all its subclasses will then be intercepted. You can stop or change the interceptor used in a subclass by overriding @Indexed.interceptor. Hibernate Search provides DontInterceptEntityInterceptor which will explicitly not intercept any call. This is useful to reset interception within a class hierarchy.

Note

Dirty checking optimization is disabled when interceptors are used. The dirty checking optimization checks what has changed in an entity and only triggers an index update if indexed properties have changed. The reason is simple: your interceptor might depend on a non indexed property which would be ignored by this optimization.

Warning

An EntityIndexingInterceptor can never override an explicit indexing operation such as index(T), purge(T, id) or purgeAll(class).

Although the recommended approach for mapping indexed entities is to use annotations, it is sometimes more convenient to use a different approach.

While it has been a popular demand in the past, the Hibernate team never found the idea of an XML alternative to annotations appealing due to its heavy duplication, lack of code refactoring safety, because it did not cover the full use case spectrum, and because we are in the 21st century :)

The idea of a programmatic API was much more appealing and has now become a reality. You can programmatically define your mapping using a programmatic API: you define entities and fields as indexable by using mapping classes which effectively mirror the annotation concepts in Hibernate Search. Note that fans of the XML approach can design their own schema and use the programmatic API to create the mapping while parsing the XML stream.

In order to use the programmatic model you must first construct a SearchMapping object which you can do in two ways:

You can pass the SearchMapping object directly via the property key hibernate.search.model_mapping or the constant Environment.MODEL_MAPPING. Use the Configuration API or the Map passed to the JPA Persistence bootstrap methods.
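A minimal sketch of the first approach, using the native Configuration API:

SearchMapping mapping = new SearchMapping();
// ... define entities, fields and analyzers on 'mapping' (see the following sections)

Configuration configuration = new Configuration();
configuration.getProperties().put( Environment.MODEL_MAPPING, mapping );
SessionFactory sessionFactory = configuration.buildSessionFactory();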



Alternatively, you can create a factory class (ie. hosting a method annotated with @Factory) whose factory method returns the SearchMapping object. The factory class must have a no-arg constructor and its fully qualified class name is passed to the property key hibernate.search.model_mapping or its type-safe representation Environment.MODEL_MAPPING. This approach is useful when you do not necessarily control the bootstrap process like in a Java EE, CDI or Spring Framework container.
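A sketch of the factory approach; the class name and the mapping details are illustrative:

public class MyAppSearchMappingFactory {
    @Factory
    public SearchMapping getSearchMapping() {
        SearchMapping mapping = new SearchMapping();
        mapping.analyzerDef( "english", StandardTokenizerFactory.class )
               .filter( LowerCaseFilterFactory.class )
               .filter( SnowballPorterFilterFactory.class )
                   .param( "language", "English" );
        return mapping;
    }
}

And reference it, for example in persistence.xml:

<property name="hibernate.search.model_mapping"
          value="com.example.MyAppSearchMappingFactory"/>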


The SearchMapping is the root object which contains all the necessary indexable entities and fields. From there, the SearchMapping object exposes a fluent (and thus intuitive) API to express your mappings: it contextually exposes the relevant mapping options in a type-safe way. Just let your IDE autocompletion feature guide you through.

Today, the programmatic API cannot be used on a class annotated with Hibernate Search annotations; choose one approach or the other. Also note that the same default values apply in annotations and the programmatic API. For example, @Field.name defaults to the property name and does not have to be set.

Each core concept of the programmatic API has a corresponding example depicting how the same definition would look using annotations. Therefore seeing an annotation example next to the programmatic approach should give you a clear picture of what Hibernate Search will build with the marked entities and associated properties.

Analyzers can be programmatically defined using the analyzerDef(String analyzerDef, Class<? extends TokenizerFactory> tokenizerFactory) method. This method also enables you to define filters for the analyzer definition. Each filter that you define can optionally take parameters, as seen in the following example:
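A sketch of such a programmatic analyzer definition (the name and parameters are illustrative):

SearchMapping mapping = new SearchMapping();

mapping
    .analyzerDef( "ngram", StandardTokenizerFactory.class )
        .filter( LowerCaseFilterFactory.class )
        .filter( NGramFilterFactory.class )
            .param( "minGramSize", "3" )
            .param( "maxGramSize", "3" );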


The analyzer mapping defined above is equivalent to the annotation model using @AnalyzerDef in conjunction with @AnalyzerDefs:


The programmatic API provides an easy mechanism for defining full text filter definitions, which is also available via @FullTextFilterDef and @FullTextFilterDefs (see Section 5.3, “Filters”). The next example depicts the creation of a full text filter definition using the fullTextFilterDef method.


The previous example can effectively be seen as annotating your entity with @FullTextFilterDef like below:


When defining fields for indexing using the programmatic API, call field() on the property(String propertyName, ElementType elementType) method. From field() you can specify the name, index, store, bridge and analyzer definitions.
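A sketch of a programmatic field definition, assuming an Address entity:

SearchMapping mapping = new SearchMapping();

mapping
    .entity( Address.class ).indexed()
        .property( "addressId", ElementType.FIELD )
            .documentId().name( "id" )
        .property( "street", ElementType.FIELD )
            .field().name( "street_name" ).store( Store.YES )
        .property( "city", ElementType.FIELD )
            .field().analyzer( "customanalyzer" );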


The above example of marking fields as indexable is equivalent to defining fields using @Field as seen below:


Note

When using a programmatic mapping for a given type X, you can only refer to fields defined on X. Fields or methods inherited from a super type are not configurable. In case you need to configure a super class property, you need to either override the property in X or create a programmatic mapping for the super class. This mimics the usage of annotations where you cannot annotate a field or method of a super class either, unless it is redefined in the given type.

In this section you will see how to programmatically define entities to be embedded into the indexed entity, similar to using the @IndexedEmbedded model. In order to define this you must mark the property as indexEmbedded. There is also the option to add a prefix to the embedded entity definition, which can be done by calling prefix as seen in the example below:
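A sketch of marking a property as indexEmbedded with a prefix (the entities are illustrative):

SearchMapping mapping = new SearchMapping();

mapping
    .entity( ProductCatalog.class ).indexed()
        .property( "catalogItems", ElementType.FIELD )
            .indexEmbedded()
                .prefix( "catalog.items" );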


The next example shows the same definition using annotation (@IndexedEmbedded):


@ContainedIn can be defined as seen in the example below:


This is equivalent to defining @ContainedIn in your entity:


It is possible to associate bridges to programmatically defined fields. When you define a field() programmatically, you can use the bridge(Class<?> impl) method to associate a FieldBridge implementation class. The bridge method also provides optional methods to include any parameters required for the bridge class. The example below shows how to programmatically define a bridge:
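A sketch of associating a bridge with a programmatically defined field, reusing the parameterized padding bridge from earlier (the property is illustrative):

SearchMapping mapping = new SearchMapping();

mapping
    .entity( Address.class ).indexed()
        .property( "length", ElementType.FIELD )
            .field()
                .bridge( ParameterizedPaddedIntegerBridge.class )
                    .param( "padding", "10" );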


The above can equally be defined using annotations, as seen in the next example.


The second most important capability of Hibernate Search is the ability to execute Lucene queries and retrieve entities managed by a Hibernate session. The search provides the power of Lucene without leaving the Hibernate paradigm, giving another dimension to the Hibernate classic search mechanisms (HQL, Criteria query, native SQL query).

Preparing and executing a query consists of four simple steps:

  • Creating a FullTextSession

  • Creating a Lucene query either via the Hibernate Search query DSL (recommended) or by utilizing the Lucene query API

  • Wrapping the Lucene query using an org.hibernate.Query

  • Executing the search by calling for example list() or scroll()

To access the querying facilities, you have to use a FullTextSession. This Search specific session wraps a regular org.hibernate.Session in order to provide query and indexing capabilities.
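For example:

Session session = sessionFactory.openSession();
FullTextSession fullTextSession = Search.getFullTextSession( session );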


Once you have a FullTextSession you have two options to build the full-text query: the Hibernate Search query DSL or the native Lucene query.

If you use the Hibernate Search query DSL, it will look like this:

QueryBuilder b = fullTextSession.getSearchFactory()
    .buildQueryBuilder().forEntity(Myth.class).get();
org.apache.lucene.search.Query luceneQuery =
    b.keyword()
        .onField("history").boostedTo(3)
        .matching("storm")
        .createQuery();
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery(luceneQuery);
List result = fullTextQuery.list(); //return a list of managed objects

You can alternatively write your Lucene query either using the Lucene query parser or Lucene programmatic API.


Note

The Hibernate query built on top of the Lucene query is a regular org.hibernate.Query, which means you are in the same paradigm as the other Hibernate query facilities (HQL, Native or Criteria). The regular list(), uniqueResult(), iterate() and scroll() methods can be used.

In case you are using the Java Persistence APIs of Hibernate, the same extensions exist:
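The JPA counterpart wraps an EntityManager in the same fashion:

EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
    org.hibernate.search.jpa.Search.getFullTextEntityManager( em );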


Note

In the following examples we will use the Hibernate APIs, but the same examples can easily be rewritten with the Java Persistence API by just adjusting the way the FullTextQuery is retrieved.

Hibernate Search queries are built on top of Lucene queries which gives you total freedom on the type of Lucene query you want to execute. However, once built, Hibernate Search wraps further query processing using org.hibernate.Query as your primary query manipulation API.

Writing full-text queries with the Lucene programmatic API is quite complex. It is even more complex to understand the code once written. Besides the inherent API complexity, you have to remember to convert your parameters to their string equivalent as well as make sure to apply the correct analyzer to the right field (an ngram analyzer will for example use several ngrams as the tokens for a given word and should be searched as such).

The Hibernate Search query DSL makes use of a style of API called a fluent API. This API has a few key characteristics:

  • it has meaningful method names, making a succession of operations read almost like English

  • it limits the options offered to what makes sense in a given context

  • it often uses method chaining

Let's see how to use the API. You first need to create a query builder that is attached to a given indexed entity type. This QueryBuilder will know what analyzer to use and what field bridge to apply. You can create several QueryBuilders (one for each entity type involved in the root of your query). You get the QueryBuilder from the SearchFactory.

QueryBuilder mythQB = searchFactory.buildQueryBuilder().forEntity( Myth.class ).get();

You can also override the analyzer used for a given field or fields. This is rarely needed and should be avoided unless you know what you are doing.

QueryBuilder mythQB = searchFactory.buildQueryBuilder()
    .forEntity( Myth.class )
        .overridesForField("history","stem_analyzer_definition")
    .get();

Using the query builder, you can then build queries. It is important to realize that the end result of a QueryBuilder is a Lucene query. For this reason you can easily mix and match queries generated via Lucene's query parser or Query objects you have assembled with the Lucene programmatic API and use them with the Hibernate Search DSL, just in case the DSL is missing some features.

Let's start with the most basic use case - searching for a specific word:

Query luceneQuery = mythQB.keyword().onField("history").matching("storm").createQuery();

keyword() means that you are trying to find a specific word. onField() specifies in which Lucene field to look. matching() tells what to look for. And finally createQuery() creates the Lucene query object. A lot is going on with this line of code.

Let's see how you can search a property that is not of type string.

@Entity
@Indexed
public class Myth {
  @Field(analyze = Analyze.NO)
  @DateBridge(resolution = Resolution.YEAR)
  public Date getCreationDate() { return creationDate; }
  public void setCreationDate(Date creationDate) { this.creationDate = creationDate; }
  private Date creationDate;

  ...
}
Date birthdate = ...;
Query luceneQuery = mythQB.keyword().onField("creationDate").matching(birthdate).createQuery();

This conversion works for any object, not just Date, provided that the FieldBridge has an objectToString method (and all built-in FieldBridge implementations do).

Let's make the example a little more advanced now and have a look at how to search a field that uses ngram analyzers. ngram analyzers index successions of ngrams of your words, which helps to recover from user typos. For example the 3-grams of the word hibernate are hib, ibe, ber, ern, rna, nat, ate.

@AnalyzerDef(name = "ngram",
  tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class ),
  filters = {
    @TokenFilterDef(factory = StandardFilterFactory.class),
    @TokenFilterDef(factory = LowerCaseFilterFactory.class),
    @TokenFilterDef(factory = StopFilterFactory.class),
    @TokenFilterDef(factory = NGramFilterFactory.class,
      params = { 
        @Parameter(name = "minGramSize", value = "3"),
        @Parameter(name = "maxGramSize", value = "3") } )
  }
)
@Entity 
@Indexed 
public class Myth {
  @Field(analyzer = @Analyzer(definition = "ngram"))
  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
  private String name;

  ...
}
Query luceneQuery = mythQB.keyword().onField("name").matching("Sisiphus")
   .createQuery();

The matching word "Sisiphus" will be lower-cased and then split into 3-grams: sis, isi, sip, iph, phu, hus. Each of these ngrams will be part of the query. We will then be able to find the Sysiphus myth (with a y). All of that is transparently done for you.

To search for multiple possible words in the same field, simply add them all in the matching clause.

//search document with storm or lightning in their history
Query luceneQuery = 
    mythQB.keyword().onField("history").matching("storm lightning").createQuery();

To search the same word on multiple fields, use the onFields method.

Query luceneQuery = mythQB
    .keyword()
    .onFields("history","description","name")
    .matching("storm")
    .createQuery();

Sometimes, one field should be treated differently from another field even if searching the same term. In this case you can use the andField() method.

Query luceneQuery = mythQB.keyword()
    .onField("history")
    .andField("name")
      .boostedTo(5)
    .andField("description")
    .matching("storm")
    .createQuery();

In the previous example, only the field name is boosted to 5.

Finally, you can aggregate (combine) queries to create more complex queries. The following aggregation operators are available:

  • SHOULD: the query should contain the matching elements of the subquery

  • MUST: the query must contain the matching elements of the subquery

  • MUST NOT: the query must not contain the matching elements of the subquery

The subqueries can be any Lucene query including a boolean query itself. Let's look at a few examples:

//look for popular modern myths that are not urban
Date twentiethCentury = ...;
Query luceneQuery = mythQB
    .bool()
      .must( mythQB.keyword().onField("description").matching("urban").createQuery() )
        .not()
      .must( mythQB.range().onField("starred").above(4).createQuery() )
      .must( mythQB
        .range()
        .onField("creationDate")
        .above(twentiethCentury)
        .createQuery() )
    .createQuery();

//look for popular myths that are preferably urban
Query luceneQuery = mythQB
    .bool()
      .should( mythQB.keyword().onField("description").matching("urban").createQuery() )
      .must( mythQB.range().onField("starred").above(4).createQuery() )
    .createQuery();

//look for all myths except religious ones
Query luceneQuery = mythQB
    .all()
      .except( mythQB
        .keyword()
        .onField( "description_stem" )
        .matching( "religion" )
        .createQuery() 
      )
    .createQuery();

So far we only covered the process of how to create your Lucene query (see Section 5.1, “Building queries”). However, this is only the first step in the chain of actions. Let's now see how to build the Hibernate Search query from the Lucene query.

For some use cases, returning the domain object (including its associations) is overkill. Only a small subset of the properties is necessary. Hibernate Search allows you to return a subset of properties:
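For instance (the Book entity and its fields are illustrative):

org.hibernate.search.FullTextQuery query =
    fullTextSession.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "id", "summary", "body", "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get( 0 );
Integer id = (Integer) firstResult[0];
String summary = (String) firstResult[1];
String body = (String) firstResult[2];
String authorName = (String) firstResult[3];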


Hibernate Search extracts the properties from the Lucene index and converts them back to their object representation, returning a list of Object[]. Projections avoid a potential database round trip (useful if the query response time is critical). However, they also have several constraints:

  • the properties projected must be stored in the index (@Field(store=Store.YES)), which increases the index size

  • the properties projected must use a FieldBridge implementing org.hibernate.search.bridge.TwoWayFieldBridge or org.hibernate.search.bridge.TwoWayStringBridge, the latter being the simpler version.

    Note

    All Hibernate Search built-in types are two-way.

  • you can only project simple properties of the indexed entity or its embedded associations. This means you cannot project a whole embedded entity.

  • projection does not work on collections or maps which are indexed via @IndexedEmbedded

Projection is also useful for another kind of use case. Lucene can provide metadata information about the results. By using some special projection constants, the projection mechanism can retrieve this metadata:
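For example, mixing metadata constants with a regular field (illustrative):

org.hibernate.search.FullTextQuery query =
    fullTextSession.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( FullTextQuery.SCORE, FullTextQuery.THIS, "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get( 0 );
float score = (Float) firstResult[0];
Book book = (Book) firstResult[1];
String authorName = (String) firstResult[2];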


You can mix and match regular fields and projection constants. Here is the list of the available constants:

  • FullTextQuery.THIS: returns the initialized and managed entity (as a non projected query would have done).

  • FullTextQuery.DOCUMENT: returns the Lucene Document related to the object projected.

  • FullTextQuery.OBJECT_CLASS: returns the class of the indexed entity.

  • FullTextQuery.SCORE: returns the document score in the query. Scores are handy to compare one result against another for a given query, but are useless when comparing the results of different queries.

  • FullTextQuery.ID: the id property value of the projected object.

  • FullTextQuery.DOCUMENT_ID: the Lucene document id. Careful, the Lucene document id can change over time between two different IndexReader openings.

  • FullTextQuery.EXPLANATION: returns the Lucene Explanation object for the matching object/document in the given query. Do not use it if you retrieve a lot of data. Running an explanation is typically as costly as running the whole Lucene query per matching element. Make sure you use projection!

By default, Hibernate Search uses the most appropriate strategy to initialize entities matching your full text query. It executes one (or several) queries to retrieve the required entities. This is the best approach to minimize database round trips in a scenario where none or few of the retrieved entities are present in the persistence context (i.e. the session) or the second level cache.

If most of your entities are present in the second level cache, you can force Hibernate Search to look into the cache before retrieving an object from the database.
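A sketch of enabling the second-level cache lookup (the User entity is illustrative):

FullTextQuery query = session.createFullTextQuery( luceneQuery, User.class );
query.initializeObjectsWith(
    ObjectLookupMethod.SECOND_LEVEL_CACHE,
    DatabaseRetrievalMethod.QUERY
);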


ObjectLookupMethod defines the strategy used to check if an object is easily accessible (without a database round trip). Other options are:

  • ObjectLookupMethod.PERSISTENCE_CONTEXT: useful if most of the matching entities are already in the persistence context (ie loaded in the Session or EntityManager)

  • ObjectLookupMethod.SECOND_LEVEL_CACHE: check first the persistence context and then the second-level cache.

Note

Note that to search in the second-level cache, several settings must be in place:

  • the second level cache must be properly configured and active

  • the entity must have the second-level cache enabled (eg via @Cacheable)

  • the Session, EntityManager or Query must allow access to the second-level cache for read access (ie CacheMode.NORMAL in Hibernate native APIs or CacheRetrieveMode.USE in JPA 2 APIs).

Warning

Avoid using ObjectLookupMethod.SECOND_LEVEL_CACHE unless your second level cache implementation is either EHCache or Infinispan; other second level cache providers don't currently implement this operation efficiently.

You can also customize how objects are loaded from the database (if not found before). Use DatabaseRetrievalMethod for that:

  • QUERY (default): use a (set of) queries to load several objects in batch. This is usually the best approach.

  • FIND_BY_ID: load objects one by one using the Session.get or EntityManager.find semantic. This might be useful if batch-size is set on the entity (in which case, entities will be loaded in batch by Hibernate Core). QUERY should be preferred almost all the time.

You can limit the time a query takes in Hibernate Search in two ways:

  • raise an exception when the limit is reached

  • limit to the number of results retrieved when the time limit is reached

You can decide to stop a query if it takes more than a predefined amount of time. Note that this is done on a best effort basis, but if Hibernate Search still has significant work to do and the time limit has been exceeded, a QueryTimeoutException will be raised (org.hibernate.QueryTimeoutException or javax.persistence.QueryTimeoutException depending on your programmatic API).

To define the limit when using the native Hibernate APIs, use one of the following approaches:
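For example (the timeout values are illustrative):

Query luceneQuery = ...;
FullTextQuery query = fullTextSession.createFullTextQuery( luceneQuery, User.class );

// define the timeout in seconds
query.setTimeout( 5 );
// alternatively, with a finer time unit
query.setTimeout( 450, TimeUnit.MILLISECONDS );

try {
    query.list();
}
catch ( org.hibernate.QueryTimeoutException e ) {
    // the query took too long: handle it
}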


Likewise getResultSize(), iterate() and scroll() honor the timeout, but only until the end of the method call. That simply means that the methods of Iterable or ScrollableResults ignore the timeout.

Note

explain() does not honor the timeout: this method is used for debugging purposes and in particular to find out why a query is slow.

When using JPA, simply use the standard way of limiting query execution time.


Important

Remember, this is a best effort approach and does not guarantee to stop exactly on the specified timeout.

Alternatively, you can return the number of results which have already been fetched by the time the limit is reached. Note that only the Lucene part of the query is influenced by this limit. It is possible that, if you retrieve managed objects, it takes longer to fetch these objects.

To define this soft limit, use the following approach:


Likewise getResultSize(), iterate() and scroll() honor the time limit, but only until the end of the method call. That simply means that the methods of Iterable or ScrollableResults ignore the timeout.

You can determine if the results have been partially loaded by invoking the hasPartialResults method.
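For example (the limit value is illustrative):

FullTextQuery query = fullTextSession.createFullTextQuery( luceneQuery, User.class );

// stop fetching new results after 500ms
query.limitExecutionTimeTo( 500, TimeUnit.MILLISECONDS );
List results = query.list();

// did we get all results or did the limit kick in?
boolean partialResults = query.hasPartialResults();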


If you use the JPA API, limitExecutionTimeTo and hasPartialResults are also available to you.

Once the Hibernate Search query is built, executing it is in no way different than executing a HQL or Criteria query. The same paradigm and object semantic applies. All the common operations are available: list(), uniqueResult(), iterate(), scroll().

You will sometimes find yourself puzzled by a result showing up in a query or a result not showing up in a query. Luke is a great tool to understand those mysteries. However, Hibernate Search also gives you access to the Lucene Explanation object for a given result (in a given query). This class is considered fairly advanced even for Lucene users, but can provide a good understanding of the scoring of an object. You have two ways to access the Explanation object for a given result:

The first approach takes a document id as a parameter and returns the Explanation object. The document id can be retrieved using projection and the FullTextQuery.DOCUMENT_ID constant.

In the second approach you project the Explanation object using the FullTextQuery.EXPLANATION constant.


Be careful, building the explanation object is quite expensive, it is roughly as expensive as running the Lucene query again. Don't do it if you don't need the object.

Apache Lucene has a powerful feature that allows you to filter query results according to a custom filtering process. This is a very powerful way to apply additional data restrictions, especially since filters can be cached and reused. Some interesting use cases are security, temporal data (e.g. view only last month's data) and population filters (e.g. search limited to a given category).

Hibernate Search pushes the concept further by introducing the notion of parameterizable named filters which are transparently cached. For people familiar with the notion of Hibernate Core filters, the API is very similar:
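For example, enabling two named filters on a query might look like this sketch (bestDriver and security being filter names declared as shown further below):

fullTextQuery = s.createFullTextQuery(query, Driver.class);
fullTextQuery.enableFullTextFilter("bestDriver");
fullTextQuery.enableFullTextFilter("security").setParameter("login", "andre");
fullTextQuery.list(); // returns only best drivers where andre has credentials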


In this example we enabled two filters on top of the query. You can enable (or disable) as many filters as you like.

Declaring filters is done through the @FullTextFilterDef annotation. This annotation can be on any @Indexed entity regardless of the query the filter is later applied to. This implies that filter definitions are global and their names must be unique. A SearchException is thrown in case two different @FullTextFilterDef annotations with the same name are defined. Each named filter has to specify its actual filter implementation.
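A sketch of such a declaration and implementation, assuming an illustrative Driver entity and the Lucene 3.x Filter API:

@Entity
@Indexed
@FullTextFilterDef(name = "bestDriver", impl = BestDriversFilter.class)
public class Driver { ... }

public class BestDriversFilter extends org.apache.lucene.search.Filter {

    @Override
    public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
        // collect all documents whose score field equals "5"
        OpenBitSet bitSet = new OpenBitSet(reader.maxDoc());
        TermDocs termDocs = reader.termDocs(new Term("score", "5"));
        while (termDocs.next()) {
            bitSet.set(termDocs.doc());
        }
        return bitSet;
    }
}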


BestDriversFilter is an example of a simple Lucene filter which reduces the result set to drivers whose score is 5. In this example the specified filter implements the org.apache.lucene.search.Filter directly and contains a no-arg constructor.

If your Filter creation requires additional steps or if the filter you want to use does not have a no-arg constructor, you can use the factory pattern:
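A sketch of the factory approach; QueryWrapperFilter and CachingWrapperFilter are standard Lucene classes:

@Entity
@Indexed
@FullTextFilterDef(name = "bestDriver", impl = BestDriversFilterFactory.class)
public class Driver { ... }

public class BestDriversFilterFactory {

    @Factory
    public Filter getFilter() {
        Query query = new TermQuery(new Term("score", "5"));
        return new CachingWrapperFilter(new QueryWrapperFilter(query));
    }
}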


Hibernate Search will look for a @Factory annotated method and use it to build the filter instance. The factory must have a no-arg constructor.

Named filters come in handy where parameters have to be passed to the filter. For example a security filter might want to know which security level you want to apply:
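Passing the parameter is a matter of calling setParameter() on the enabled filter:

fullTextQuery = s.createFullTextQuery(query, Driver.class);
fullTextQuery.enableFullTextFilter("security").setParameter("level", 5);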


Each parameter name should have an associated setter on either the filter or filter factory of the targeted named filter definition.
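A sketch of a filter factory exposing such a setter, together with a @Key method which is discussed next:

public class SecurityFilterFactory {

    private Integer level;

    // injected by Hibernate Search from setParameter("level", ...)
    public void setLevel(Integer level) {
        this.level = level;
    }

    @Key
    public FilterKey getKey() {
        StandardFilterKey key = new StandardFilterKey();
        key.addParameter(level);
        return key;
    }

    @Factory
    public Filter getFilter() {
        Query query = new TermQuery(new Term("level", level.toString()));
        return new CachingWrapperFilter(new QueryWrapperFilter(query));
    }
}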


Note the method annotated @Key returning a FilterKey object. The returned object has a special contract: the key object must implement equals() / hashCode() so that 2 keys are equal if and only if the given Filter types are the same and the set of parameters are the same. In other words, 2 filter keys are equal if and only if the filters from which the keys are generated can be interchanged. The key object is used as a key in the cache mechanism.

@Key methods are needed only if:

  • you enabled the filter caching system (enabled by default)

  • your filter has parameters

In most cases, using the StandardFilterKey implementation will be good enough. It delegates the equals() / hashCode() implementation to each of the parameters' equals() and hashCode() methods.

As mentioned before, the defined filters are cached by default and the cache uses a combination of hard and soft references to allow disposal of memory when needed. The hard reference cache keeps track of the most recently used filters and transforms the least used ones into SoftReferences when needed. Once the limit of the hard reference cache is reached, additional filters are cached as SoftReferences. To adjust the size of the hard reference cache, use hibernate.search.filter.cache_strategy.size (defaults to 128). For advanced use of filter caching, you can implement your own FilterCachingStrategy. The classname is defined by hibernate.search.filter.cache_strategy.

This filter caching mechanism should not be confused with caching the actual filter results. In Lucene it is common practice to wrap filters in a CachingWrapperFilter. The wrapper will cache the DocIdSet returned from the getDocIdSet(IndexReader reader) method to avoid expensive recomputation. It is important to mention that the computed DocIdSet is only cacheable for the same IndexReader instance, because the reader effectively represents the state of the index at the moment it was opened. The document list cannot change within an opened IndexReader. A different/new IndexReader instance, however, works potentially on a different set of Documents (either from a different index or simply because the index has changed), hence the cached DocIdSet has to be recomputed.

Hibernate Search also helps with this aspect of caching. By default the cache flag of @FullTextFilterDef is set to FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS which will automatically cache the filter instance as well as wrap the specified filter around a Hibernate specific implementation of CachingWrapperFilter. In contrast to Lucene's version of this class, SoftReferences are used together with a hard reference count (see discussion about filter cache). The hard reference count can be adjusted using hibernate.search.filter.cache_docidresults.size (defaults to 5). The wrapping behaviour can be controlled using the @FullTextFilterDef.cache parameter. There are three different values for this parameter:

  • FilterCacheModeType.NONE: no filter instance and no result is cached by Hibernate Search. For every filter call, a new filter instance is created. This setting might be useful for rapidly changing data sets or heavily memory constrained environments.

  • FilterCacheModeType.INSTANCE_ONLY: the filter instance is cached and reused across concurrent Filter.getDocIdSet() calls. DocIdSet results are not cached. This setting is useful when a filter uses its own specific caching mechanism or the filter results change dynamically due to application specific events making DocIdSet caching in both cases unnecessary.

  • FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS: both the filter instance and the DocIdSet results are cached. This is the default value.

Last but not least - why should filters be cached? There are two areas where filter caching shines:

  • the system does not update the targeted entity index often (in other words, the IndexReader is reused a lot)

  • the Filter's DocIdSet is expensive to compute (compared to the time spent to execute the query)

It is possible, in a sharded environment, to execute queries on a subset of the available shards. This is done in two steps:

  • create a sharding strategy that selects a subset of IndexManagers depending on some filter configuration

  • activate the corresponding filter at query time

Let's first look at an example of a sharding strategy that queries a specific customer shard if the customer filter is activated.

public class CustomerShardingStrategy implements IndexShardingStrategy {


 // stores IndexManagers in an array indexed by customerID
 private IndexManager[] indexManagers;
 
 public void initialize(Properties properties, IndexManager[] indexManagers) {
   this.indexManagers = indexManagers;
 }
 public IndexManager[] getIndexManagersForAllShards() {
   return indexManagers;
 }
 public IndexManager getIndexManagerForAddition(
     Class<?> entity, Serializable id, String idInString, Document document) {
   Integer customerID = Integer.parseInt(document.getFieldable("customerID").stringValue());
   return indexManagers[customerID];
 }
 public IndexManager[] getIndexManagersForDeletion(
     Class<?> entity, Serializable id, String idInString) {
   return getIndexManagersForAllShards();
 }
  /**
  * Optimization; don't search ALL shards and union the results; in this case, we 
  * can be certain that all the data for a particular customer Filter is in a single
  * shard; simply return that shard by customerID.
  */
 public IndexManager[] getIndexManagersForQuery(
     FullTextFilterImplementor[] filters) {
   FullTextFilter filter = getCustomerFilter(filters, "customer");
   if (filter == null) {
     return getIndexManagersForAllShards();
   }
   else {
     return new IndexManager[] { indexManagers[Integer.parseInt(
       filter.getParameter("customerID").toString())] };
   }
 }
 private FullTextFilter getCustomerFilter(FullTextFilterImplementor[] filters, String name) {
   for (FullTextFilterImplementor filter: filters) {
     if (filter.getName().equals(name)) return filter;
   }
   return null;
 }
}

In this example, if the filter named customer is present, we make sure to only use the shard dedicated to this customer. Otherwise, we return all shards. A given Sharding strategy can react to one or more filters and depends on their parameters.

The second step is simply to activate the filter at query time. While the filter can be a regular filter (as defined in Section 5.3, “Filters”) which also filters Lucene results after the query, you can make use of a special filter that will only be passed to the sharding strategy and otherwise ignored for the rest of the query. Simply use the ShardSensitiveOnlyFilter class when declaring your filter.

@Entity @Indexed
@FullTextFilterDef(name="customer", impl=ShardSensitiveOnlyFilter.class)
public class Customer {
   ...
}

FullTextQuery query = ftEm.createFullTextQuery(luceneQuery, Customer.class);
query.enableFullTextFilter("customer").setParameter("customerID", 5);
@SuppressWarnings("unchecked")
List<Customer> results = query.getResultList();

Note that by using the ShardSensitiveOnlyFilter, you do not have to implement any Lucene filter. Using filters and a sharding strategy reacting to these filters is recommended to speed up queries in a sharded environment.

Faceted search is a technique which allows you to divide the results of a query into multiple categories. This categorization includes the calculation of hit counts for each category and the ability to further restrict search results based on these facets (categories). Example 5.24, “Search for 'Hibernate Search' on Amazon” shows a faceting example. The search for 'Hibernate Search' results in fifteen hits which are displayed on the main part of the page. The navigation bar on the left, however, shows the category Computers & Internet with its subcategories Programming, Computer Science, Databases, Software, Web Development, Networking and Home Computing. For each of these subcategories the number of books matching the main search criteria and belonging to the respective subcategory is shown. This division of the category Computers & Internet is one facet of this search. Another one is for example the average customer review rating.


In Hibernate Search the classes QueryBuilder and FullTextQuery are the entry point to the faceting API. The former allows you to create faceting requests whereas the latter gives access to the so-called FacetManager. With the help of the FacetManager, faceting requests can be applied on a query and selected facets can be added to an existing query in order to refine search results. The following sections will describe the faceting process in more detail. The examples will use the entity Cd as shown in Example 5.25, “Example entity for faceting”:
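A sketch of such an entity; the exact fields are illustrative, the important part is that faceted properties are indexed with Analyze.NO:

@Entity
@Indexed
public class Cd {

    @Id
    @GeneratedValue
    private int id;

    // faceting requires an un-analyzed field
    @Field(analyze = Analyze.NO)
    private String label;

    @Fields({
        @Field(name = "price"),
        @Field(name = "price_facet", analyze = Analyze.NO)
    })
    @NumericFields({
        @NumericField(forField = "price")
    })
    private int price;

    ...
}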


The first step towards a faceted search is to create the FacetingRequest. Currently two types of faceting requests are supported: discrete faceting and range faceting.

In the case of a discrete faceting request you start with giving the request a unique name. This name will later be used to retrieve the facet values (see Section 5.4.4, “Interpreting a Facet result”). Then you need to specify on which index field you want to categorize on and which faceting options to apply. An example for a discrete faceting request can be seen in Example 5.26, “Creating a discrete faceting request”:
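A sketch of a discrete faceting request on the label field; the option values are illustrative:

QueryBuilder builder = fullTextSession.getSearchFactory()
    .buildQueryBuilder()
    .forEntity(Cd.class)
    .get();

FacetingRequest labelFacetingRequest = builder.facet()
    .name("labelFacetRequest")
    .onField("label")
    .discrete()
    .orderedBy(FacetSortOrder.COUNT_DESC)
    .includeZeroCounts(false)
    .maxFacetCount(3)
    .createFacetingRequest();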


When executing this faceting request a Facet instance will be created for each discrete value of the indexed field label. The Facet instance will record the actual field value including how often this particular field value occurs within the original query results. The parameters orderedBy, includeZeroCounts and maxFacetCount are optional and can be applied on any faceting request. Parameter orderedBy allows you to specify in which order the created facets will be returned. The default is FacetSortOrder.COUNT_DESC, but you can also sort on the field value. Parameter includeZeroCounts determines whether facets with a count of 0 will be included in the result (by default they are) and maxFacetCount allows you to limit the maximum number of facets returned.

Note

There are several preconditions an indexed field has to meet in order to categorize (facet) on it. The indexed property must be of type String, Date or a subtype of Number; also null values should be avoided. Finally, the property has to be indexed with Analyze.NO and cannot be used in combination with @NumericField. When you need these other options, we suggest indexing the property twice and using the appropriate field depending on the use case:

...
@Fields({
  @Field( name="price" ),
  @Field( name="price_facet", analyze=Analyze.NO )
})
@NumericFields({
  @NumericField( forField="price" )
})
private int price;
...
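A sketch of the range faceting request referenced below as Example 5.27; it targets the un-analyzed price_facet field and defines the three ranges discussed in the text:

FacetingRequest priceFacetingRequest = builder.facet()
    .name("priceFaceting")
    .onField("price_facet")
    .range()
    .below(1000)
    .from(1001).to(1500)
    .above(1500).excludeLimit()
    .createFacetingRequest();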

The result of applying a faceting request is a list of Facet instances as seen in Example 5.28, “Applying a faceting request”. The order within the list is given by the FacetSortOrder parameter specified via orderedBy when creating the faceting request. The default value is FacetSortOrder.COUNT_DESC, meaning facets are ordered by their count in descending order (highest count first). Other values are COUNT_ASC, FIELD_VALUE and RANGE_DEFINITION_ORDER. COUNT_ASC returns the facets in ascending count order whereas FIELD_VALUE will return them in alphabetical order of the facet/category value (see Section 5.4.4, “Interpreting a Facet result”). RANGE_DEFINITION_ORDER only applies to range faceting requests and returns the facets in the same order in which the ranges are defined. For Example 5.27, “Creating a range faceting request” this would mean the facet for the range below 1000 would be returned first, followed by the facet for the range 1001 to 1500 and finally the facet for above 1500.
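Applying a request and retrieving the facets might look like this sketch:

FullTextQuery fullTextQuery = fullTextSession.createFullTextQuery(luceneQuery, Cd.class);
FacetManager facetManager = fullTextQuery.getFacetManager();
facetManager.enableFaceting(labelFacetingRequest);

// executes the query and retrieves the facets by request name
List<Facet> facets = facetManager.getFacets("labelFacetRequest");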

Each facet request results in a list of Facet instances. Each instance represents one facet/category value. In the CD example (Example 5.26, “Creating a discrete faceting request”) where we want to categorize on the Cd labels, there would for example be a Facet for each of the record labels Universal, Sony and Warner. Example 5.29, “Facet API” shows the API of Facet.
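The Facet interface referenced as Example 5.29 essentially consists of the accessors discussed below:

public interface Facet {
    String getFacetingName();
    String getFieldName();
    String getValue();
    int getCount();
    Query getFacetQuery();
}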


getFacetingName() and getFieldName() return the facet request name and the targeted document field name as specified by the underlying FacetRequest. For Example 5.26, “Creating a discrete faceting request” that would be labelFacetRequest and label respectively. The interesting information is provided by getValue() and getCount(). The former is the actual facet/category value, for example a concrete record label like Universal. The latter returns the count for this value. To stick with the example again, the count value tells you how many Cds are released under the Universal label. Last but not least, getFacetQuery() returns a Lucene query which can be used to retrieve the entities counted in this facet.

A common use case for faceting is a "drill-down" functionality which allows you to narrow your original search by applying a given facet on it. To do this, you can apply any of the returned Facets as additional criteria on your original query via FacetSelection. FacetSelections are available via the FacetManager and allow you to select a facet as query criteria (selectFacets), remove a facet restriction (deselectFacets), remove all facet restrictions (clearSelectedFacets) and retrieve all currently selected facets (getSelectedFacets). Example 5.30, “Restricting query results via the application of a FacetSelection” shows an example.
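A sketch of such a drill-down:

// retrieve the facets computed for the request
List<Facet> facets = facetManager.getFacets("labelFacetRequest");

// narrow the original query down to the first facet
facetManager.getFacetGroup("labelFacetRequest").selectFacets(facets.get(0));

// re-executing the query now only returns entities matching the selected facet
List<Cd> hits = fullTextQuery.list();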


Query performance depends on several criteria. One of them is the cost of extracting values from the index: the primary function of a Lucene index is to identify matches to your queries, yet after the query is performed the results must be analyzed to extract useful information. Typically Hibernate Search needs to extract the Class type and the primary key.

Extracting the needed values from the index has a performance cost, which in some cases might be very low and not noticeable, but in some other cases might be a good candidate for caching.

What is exactly needed depends on the kind of Projections being used (see Section 5.1.3.5, “Projection”), and in some cases the Class type is not needed as it can be inferred from the query context or other means.

Using the @CacheFromIndex annotation you can experiment with different kinds of caching of the main metadata fields required by Hibernate Search:



import static org.hibernate.search.annotations.FieldCacheType.CLASS;
import static org.hibernate.search.annotations.FieldCacheType.ID;

@Indexed
@CacheFromIndex( { CLASS, ID } )
public class Essay {
    ...
}

It is currently possible to cache Class types and IDs using this annotation:

  • CLASS: Hibernate Search will use a Lucene FieldCache to improve performance of the Class type extraction from the index.

    This value is enabled by default, and is what Hibernate Search will apply if you don't specify the @CacheFromIndex annotation.

  • ID: Extracting the primary identifier will use a cache. This likely provides the best performing queries, but will consume much more memory which in turn might reduce performance.

Note

Measure the performance and memory consumption impact after warmup (executing some queries): enabling Field Caches is likely to improve performance but this is not always the case.

Using a FieldCache has two downsides to consider:

  • Memory usage: these caches can be quite memory hungry. Typically the CLASS cache has lower requirements than the ID cache.

  • Index warmup: when using field caches, the first query on a new index or segment will be slower than when you don't have caching enabled.

With some queries the class type won't be needed at all; in that case, even if you enabled the CLASS field cache, it might not be used. For example, if you are targeting a single class, obviously all returned values will be of that type (this is evaluated at each Query execution).

For the ID FieldCache to be used, the ids of targeted entities must use a TwoWayFieldBridge (as all built-in bridges do), and all types being loaded in a specific query must use the same field name for the id and have ids of the same type (this is evaluated at each Query execution).

As Hibernate core applies changes to the database, Hibernate Search detects these changes and will update the index automatically (unless the EventListeners are disabled). Sometimes changes are made to the database without using Hibernate, as when a backup is restored or your data is otherwise affected; for these cases Hibernate Search exposes the Manual Index APIs to explicitly update or remove a single entity from the index, rebuild the index for the whole database, or remove all references to a specific type.

All these methods affect the Lucene Index only, no changes are applied to the Database.

It is equally possible to remove an entity or all entities of a given type from a Lucene index without the need to physically remove them from the database. This operation is named purging and is also done through the FullTextSession.
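A sketch of purging a set of entities by id, assuming an illustrative Customer entity:

FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
for (Customer customer : customersToRemove) {
    // removes the matching document from the index, not the database row
    fullTextSession.purge(Customer.class, customer.getId());
}
tx.commit(); // index changes are applied on commit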


Purging will remove the entity with the given id from the Lucene index but will not touch the database.

If you need to remove all entities of a given type, you can use the purgeAll method. This operation removes all entities of the type passed as a parameter as well as all its subtypes.
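For example:

Transaction tx = fullTextSession.beginTransaction();
// removes all Customer (and subtype) documents from the index
fullTextSession.purgeAll(Customer.class);
// optionally optimize the index afterwards, as suggested below
// fullTextSession.getSearchFactory().optimize(Customer.class);
tx.commit();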


As in the previous example, it is suggested to optimize the index after many purge operations to actually free the used space.

As is the case with the method FullTextSession.index(T entity), purge and purgeAll are also considered explicit indexing operations: any registered EntityIndexingInterceptor won't be applied. For more information on EntityIndexingInterceptor see Section 4.5, “Conditional indexing: to index or not based on entity state”.

Note

Methods index, purge and purgeAll are available on FullTextEntityManager as well.

Note

All manual indexing methods (index, purge and purgeAll) only affect the index, not the database, nevertheless they are transactional and as such they won't be applied until the transaction is successfully committed, or you make use of flushToIndexes.

If you change the entity mapping to the index, chances are that the whole index needs to be updated; for example, if you decide to index an existing field using a different analyzer you'll need to rebuild the index for affected types. Also if the database is replaced (restored from a backup, imported from a legacy system) you'll want to be able to rebuild the index from existing data. Hibernate Search provides two main strategies to choose from:

This strategy consists of removing the existing index and then adding all entities back to the index using FullTextSession.purgeAll() and FullTextSession.index(); however, there are some memory and efficiency constraints. For maximum efficiency Hibernate Search batches index operations and executes them at commit time. If you expect to index a lot of data you need to be careful about memory consumption since all documents are kept in a queue until the transaction commit. You can potentially face an OutOfMemoryException if you don't empty the queue periodically: to do this you can use fullTextSession.flushToIndexes(). Every time fullTextSession.flushToIndexes() is called (or if the transaction is committed), the batch queue is processed applying all index changes. Be aware that, once flushed, the changes cannot be rolled back.
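A sketch of this pattern; the Email entity and BATCH_SIZE constant are illustrative:

fullTextSession.setFlushMode(FlushMode.MANUAL);
fullTextSession.setCacheMode(CacheMode.IGNORE);
Transaction transaction = fullTextSession.beginTransaction();

// scroll to avoid loading all entities in memory at once
ScrollableResults results = fullTextSession.createCriteria(Email.class)
    .setFetchSize(BATCH_SIZE)
    .scroll(ScrollMode.FORWARD_ONLY);
int index = 0;
while (results.next()) {
    index++;
    fullTextSession.index(results.get(0)); // index each element
    if (index % BATCH_SIZE == 0) {
        fullTextSession.flushToIndexes(); // apply changes to indexes
        fullTextSession.clear();          // free memory since the queue is processed
    }
}
transaction.commit();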


Try to use a batch size that guarantees that your application will not run out of memory: with a bigger batch size objects are fetched faster from database but more memory is needed.

Hibernate Search's MassIndexer uses several parallel threads to rebuild the index; you can optionally select which entities need to be reloaded or have it reindex all entities. This approach is optimized for best performance but requires setting the application in maintenance mode: making queries to the index is not recommended when a MassIndexer is busy.
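In its simplest form:

fullTextSession.createIndexer().startAndWait();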


This will rebuild the index, deleting it and then reloading all entities from the database. Although it's simple to use, some tweaking is recommended to speed up the process: there are several configurable parameters.

Warning

During the progress of a MassIndexer the content of the index is undefined! If a query is performed while the MassIndexer is working most likely some results will be missing.
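A tuned invocation matching the description below might look like this sketch (the idFetchSize value is illustrative):

fullTextSession
    .createIndexer(User.class)
    .batchSizeToLoadObjects(25)
    .cacheMode(CacheMode.IGNORE)
    .threadsToLoadObjects(12)
    .idFetchSize(150)
    .startAndWait();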


This will rebuild the index of all User instances (and subtypes), and will create 12 parallel threads to load the User instances using batches of 25 objects per query; these same 12 threads will also need to process indexed embedded relations and custom FieldBridges or ClassBridges to finally output a Lucene document. In this conversion process these threads are likely going to need to trigger lazy loading of additional attributes, so you will probably need a high number of threads working in parallel. The number of threads working on actual index writing is defined by the backend configuration of each index. See the option worker.thread_pool.size in Table 3.3, “Execution configuration”.

As of Hibernate Search 4.4.0, instead of indexing all the types in parallel, the MassIndexer is configured by default to index only one type in parallel. This prevents resource exhaustion, especially of database connections, and usually does not slow down the indexing. You can however configure this behavior using MassIndexer.typesToIndexInParallel(int threadsToIndexObjects):
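For example, to index two types in parallel (the entity types are illustrative):

fullTextSession
    .createIndexer(User.class, Order.class)
    .typesToIndexInParallel(2)
    .batchSizeToLoadObjects(25)
    .threadsToLoadObjects(5)
    .startAndWait();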


Generally we suggest leaving cacheMode at CacheMode.IGNORE (the default), as in most reindexing situations the cache will be a useless additional overhead; it might be useful to enable some other CacheMode depending on your data: it could increase performance if the main entity relates to enum-like data included in the index.

Note

The MassIndexer was designed for speed and is unaware of transactions, so there is no need to begin a transaction or commit one. Also, because it is not transactional it is not recommended to let users use the system during its processing, as it is unlikely people will be able to find results and the system load might be too high anyway.

The MassIndexer was designed to finish the reindexing task as quickly as possible, but this requires a bit of care in its configuration to behave fairly with your server resources.

There is a simple formula to understand how the different options applied to the MassIndexer affect the number of used worker threads and connections: each thread will require a JDBC connection.

threads = typesToIndexInParallel * (threadsToLoadObjects + 1);
required JDBC connections = threads;

As a rough starting point for tuning, make sure the formula above stays within the limits of your connection pool: with the default typesToIndexInParallel of 1, setting threadsToLoadObjects to N requires N+1 JDBC connections.

This section explains some low level tricks to keep your indexes at peak performance. We cover some Lucene details which in most cases you don't have to know about: Hibernate Search will handle these operations optimally and transparently in most cases without the need for further configuration. Still, it is good to know that there are ways to configure the behaviour, if the need arises.

The index is physically stored in several smaller segments. Each segment is immutable and represents a generation of index writes. Index segments are periodically compacted, both to merge smaller segments and to remove stale entries; this merging process happens constantly in the background and can be tuned with the options specified in Section 3.7.1, “Tuning indexing performance”, but you can also define policies to fully run index optimizations when it is most suited for your specific workload.

With older versions of Lucene it was important to frequently optimize the index to maintain good performance, but with current Lucene versions this doesn't apply anymore. The benefit of explicit optimization is very low, and in certain cases even counter-productive. During an explicit optimization the whole index is processed and rewritten inflicting a significant performance cost. Optimization is for this reason a double-edged sword.

Another reason to avoid optimizing the index too often is that an optimization will, as a side effect, invalidate cached filters and field caches, and internal buffers will need to be refreshed.

Tip

Optimizing the index is often not needed, does not benefit write (update) performance at all, and is a slow operation: make sure you need it before activating it.

Of course optimizing the index does not only present drawbacks: after the optimization process is completed and new IndexReader instances have loaded their buffers, queries will perform at peak performance and you will have reclaimed all disk space potentially used by stale entries.

It is recommended to not schedule any optimization, but if you wish to perform it periodically you should run it:

  • on an idle system or when the searches are less frequent

  • after a lot of index modifications

When using a MassIndexer (see Section 6.3.2, “Using a MassIndexer”) it will optimize involved indexes by default at the start and at the end of processing; you can change this behavior by using MassIndexer.optimizeAfterPurge and MassIndexer.optimizeOnFinish respectively. The initial optimization is actually very cheap as it is performed on an empty index: its purpose is to release the storage space occupied by the old index.

While in most cases this is not needed, Hibernate Search can automatically optimize an index after:

  • a certain number of operations (insertions and deletions)

  • a certain number of transactions

The configuration for automatic index optimization can be defined on a global level or per index:
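A sketch of such a configuration; the values match the description below:

hibernate.search.default.optimizer.operation_limit.max = 1000
hibernate.search.default.optimizer.transaction_limit.max = 100
hibernate.search.Animal.optimizer.transaction_limit.max = 50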


With the above example an optimization will be triggered for the Animal index as soon as either:

  • the number of additions and deletions reaches 1000

  • the number of transactions reaches 50 (hibernate.search.Animal.optimizer.transaction_limit.max having priority over hibernate.search.default.optimizer.transaction_limit.max)

If none of these parameters are defined, no optimization is processed automatically.

The default implementation of OptimizerStrategy can be overridden by implementing org.hibernate.search.store.optimization.OptimizerStrategy and setting the optimizer.implementation property to the fully qualified name of your implementation. This implementation must implement the interface, be a public class and have a public constructor taking no arguments.
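For example, assuming an illustrative custom implementation and option:

hibernate.search.default.optimizer.implementation = com.acme.worlddomination.SmartOptimizer
hibernate.search.default.optimizer.some_parameter = some_value
hibernate.search.AnimalIndex.optimizer.implementation = default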


The keyword default can be used to select the Hibernate Search default implementation; all properties after the .optimizer key separator will be passed to the implementation's initialize method at start.

Hibernate Search offers access to a Statistics object via SearchFactory.getStatistics(). It allows you for example to determine which classes are indexed and how many entities are in the index. This information is always available. However, by specifying the hibernate.search.generate_statistics property in your configuration you can also collect total and average Lucene query and object loading timings.

You can also enable access to the statistics via JMX. Setting the property hibernate.search.jmx_enabled will automatically register the StatisticsInfoMBean. Depending on your configuration the IndexControlMBean and IndexingProgressMonitorMBean will also be registered. In case you are having more than one JMX enabled Hibernate Search instance running within a single JVM, you should also set hibernate.search.jmx_bean_suffix to a different value for each of the instances. The specified suffix will be used to distinguish between the different MBean instances. Let's have a closer look at the mentioned MBeans.

This MBean allows you to build, optimize and purge the index for a given entity. Indexing occurs via the mass indexing API (see Section 6.3.2, “Using a MassIndexer”). A requirement for this bean to be registered in JMX is that the Hibernate SessionFactory is bound to JNDI via the hibernate.session_factory_name property. Refer to the Hibernate Core manual for more information on how to configure JNDI. The IndexControlMBean and its API are for now experimental.

With the Spatial extensions you can combine fulltext queries with restrictions based on distance from a point in space, filter results based on distances from coordinates or sort results by such a distance criterion.

The spatial support of Hibernate Search has a few goals:

  • Enable spatial search on entities: find entities within x km from a location (latitude, longitude) on Earth

  • Provide an easy way to enable spatial indexing via expressive annotations

  • Provide a simple way for querying

  • Hide geographical complexity

For example, you might search for that Italian place whose name is approximately "Il Ciociaro" and which is somewhere within 2 km of your office.

To be able to filter an @Indexed @Entity on a distance criteria you need to add the @Spatial annotation (org.hibernate.search.annotations.Spatial) and specify one or more sets of coordinates.
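A minimal sketch of a spatially indexed entity (the Hotel name is illustrative):

import org.hibernate.search.annotations.*;

@Entity
@Indexed
@Spatial
public class Hotel {

    @Id Integer id;

    @Latitude
    Double latitude;

    @Longitude
    Double longitude;

    ...
}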

There are different techniques to index point coordinates; in particular, Hibernate Search Spatial offers a choice between two strategies:

  • indexing coordinates as numeric fields suitable for double range queries

  • indexing coordinates in a grid of quad tree cells

We will now describe both methods so you can make a suitable choice; of course you can pick different strategies for each set of coordinates. These strategies are selected by specifying spatialMode, an attribute of the @Spatial annotation.

Instead of using the @Latitude and @Longitude annotations you can choose to implement the org.hibernate.search.spatial.Coordinates interface.
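For example:

@Entity
@Indexed
@Spatial
public class Song implements Coordinates {

    @Id long id;
    double latitude;
    double longitude;

    @Override
    public Double getLatitude() {
        return latitude;
    }

    @Override
    public Double getLongitude() {
        return longitude;
    }

    ...
}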


As we will see in Section 9.3, “Multiple Coordinate pairs”, a @Spatial @Indexed @Entity can have multiple @Spatial annotations; when having the entity implement Coordinates, the implemented methods refer to the default Spatial name: the default pair of coordinates.

An alternative is to use properties implementing the Coordinates interface; this way you can have multiple Spatial instances:
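A sketch of a named @Spatial on a property returning Coordinates; the inline implementation is just one way to expose the values:

@Entity
@Indexed
public class Event {

    @Id Integer id;

    double latitude;
    double longitude;

    @Spatial(name = "location")
    public Coordinates getLocation() {
        return new Coordinates() {
            @Override public Double getLatitude() { return latitude; }
            @Override public Double getLongitude() { return longitude; }
        };
    }

    ...
}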


When using this form, the @Spatial.name attribute automatically defaults to the property name.

The Hibernate Search DSL has been extended to support the spatial feature. You can build a query to search around a pair of coordinates (latitude,longitude) or around a bean implementing the Coordinates interface.

As with any fulltext query, for Spatial queries you:

  • build a Lucene query with the Hibernate Search DSL

  • wrap it into a Hibernate (or JPA) full text query

  • execute it
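A sketch of a query for hotels within 5 km of a point, using the spatial DSL extension (the radius and coordinates are illustrative):

QueryBuilder builder = fullTextSession.getSearchFactory()
    .buildQueryBuilder().forEntity(Hotel.class).get();

org.apache.lucene.search.Query luceneQuery = builder.spatial()
    .onDefaultCoordinates()
    .within(5, Unit.KM)
    .ofLatitude(24d)
    .andLongitude(31.5d)
    .createQuery();

org.hibernate.Query hibQuery = fullTextSession.createFullTextQuery(luceneQuery, Hotel.class);
List<?> results = hibQuery.list();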


A fully working example can be found in the source code, in the testsuite. See SpatialIndexingTest.testSpatialAnnotationOnClassLevel() and the Hotel class.

As an alternative to passing separate values for latitude and longitude values, you can also pass an object implementing the Coordinates interface:
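For example, with an inline Coordinates implementation; ofCoordinates is assumed here to be the DSL counterpart of the ofLatitude/andLongitude variant:

Coordinates center = new Coordinates() {
    @Override public Double getLatitude() { return 24d; }
    @Override public Double getLongitude() { return 31.5d; }
};

org.apache.lucene.search.Query luceneQuery = builder.spatial()
    .onDefaultCoordinates()
    .within(5, Unit.KM)
    .ofCoordinates(center)
    .createQuery();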


To get the distance to the center of the search returned with the results you just need to project it:
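A sketch combining the two steps detailed below; the spatial field name "location" assumes a named @Spatial pair:

FullTextQuery hibQuery = fullTextSession.createFullTextQuery(luceneQuery, Hotel.class);
hibQuery.setProjection(FullTextQuery.SPATIAL_DISTANCE, FullTextQuery.THIS);
hibQuery.setSpatialParameters(24d, 31.5d, "location");

List<Object[]> results = hibQuery.list();
Double distanceToCenter = (Double) results.get(0)[0];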


  • Use FullTextQuery.setProjection with FullTextQuery.SPATIAL_DISTANCE as one of the projected fields.

  • Call FullTextQuery.setSpatialParameters with the latitude, longitude and the name of the spatial field used to build the spatial query. Note that using coordinates different from the center used for the query will have unexpected results.

Distance projection and null values

Using distance projection on non @Spatial enabled entities and/or with a non spatial Query will have unexpected results as entities not spatially indexed and/or having null values for latitude or longitude will be considered to be at (0,0)/(lat,0)/(0,long).

Using distance projection with a spatial query on spatially indexed entities potentially having null values for latitude and/or longitude is safe, as they will not be found by the spatial query and won't have their distance calculated.

To sort the results by distance to the center of the search you will have to build a Sort object using a DistanceSortField:
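For example (again assuming a named spatial field "location"):

double centerLatitude = 24d;
double centerLongitude = 31.5d;

Sort distanceSort = new Sort(
    new DistanceSortField(centerLatitude, centerLongitude, "location"));
hibQuery.setSort(distanceSort);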


The DistanceSortField must be constructed using the same coordinates on the same spatial field used to build the spatial query otherwise the sorting will occur with another center than the query. This repetition is needed to allow you to define Queries with any tool.

Sorting and null values

Using distance sort on non @Spatial enabled entities and/or with a non spatial Query will also have unexpected results, as entities not spatially indexed and/or with null values for latitude or longitude will be considered to be at (0,0)/(lat,0)/(0,long).

Using distance sort with a spatial query on spatially indexed entities potentially having null values for latitude and/or longitude is safe, as they will not be found by the spatial query and so won't be sorted.

You can associate multiple pairs of coordinates to the same entity, as long as each pair is uniquely identified by using a different name. This is achieved by stacking multiple @Spatial annotations in a @Spatials annotation, and specifying the name attribute on the @Spatial annotation.
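A sketch of an entity with a default pair and a named "work" pair; the of attribute is assumed here to tie a property to a named pair:

@Entity
@Indexed
@Spatials({
    @Spatial,
    @Spatial(name = "work")
})
public class UserEx {

    @Id Integer id;

    @Latitude
    Double homeLatitude;

    @Longitude
    Double homeLongitude;

    @Latitude(of = "work")
    Double workLatitude;

    @Longitude(of = "work")
    Double workLongitude;
}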


In Example 9.5, “Search for an Hotel by distance” we used onDefaultCoordinates(), which points to the coordinates defined by a @Spatial annotation whose name attribute was not specified.

To target an alternative pair of coordinates at query time, we need to specify the pair by name using onCoordinates(String) instead of onDefaultCoordinates():
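For example:

org.apache.lucene.search.Query luceneQuery = builder.spatial()
    .onCoordinates("work")
    .within(51, Unit.KM)
    .ofLatitude(24d)
    .andLongitude(31.5d)
    .createQuery();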


The present chapter is meant to provide a technical insight into quad-tree (grid) indexing: how coordinates are mapped to the index and how queries are implemented.

When Hibernate Search indexes the entity annotated with @Spatial, it instantiates a SpatialFieldBridge to transform the latitude and longitude fields accessed via the Coordinates interface to the multiple index fields stored in the Lucene index.

Principle of the spatial index: the spatial index used in Hibernate Search is a QuadTree (http://en.wikipedia.org/wiki/Quadtree).

To make computation in a flat coordinate system, the latitude and longitude field values will be projected with a sinusoidal projection (http://en.wikipedia.org/wiki/Sinusoidal_projection). The origin value space is:

[-90 -> +90], ]-180 -> +180]

for latitude,longitude coordinates and projected space is:

]-pi -> +pi],[-pi/2 -> +pi/2]

for cartesian x,y coordinates (beware of fields order inversion: x is longitude and y is latitude).

The index is divided into n levels labeled from 0 to n-1.

At level 0 the projected space is the whole Earth. At level 1 the projected space is divided into 4 rectangles (called boxes, as in bounding box):

[-pi,-pi/2]->[0,0], [-pi,0]->[0,+pi/2], [0,-pi/2]->[+pi,0] and [0,0]->[+pi,+pi/2]

At level n+1 each box of level n is divided into 4 new boxes and so on. The number of boxes at a given level is 4^n.

Each box is given an id, in this format: [Box index on the X axis]|[Box index on the Y axis]. To calculate the index of a box on an axis we divide the axis range in 2^n slots and find the slot the box belongs to. At level n the indexes on an axis range from -(2^n)/2 to (2^n)/2. For instance, level 5 has 4^5 = 1024 boxes with 32 indexes on each axis (32x32 is 1024) and the box of id "0|8" covers the [0,8/32*pi/2]->[1/32*pi,9/32*pi/2] rectangle in projected space.

Beware! The boxes are rectangles in projected space but the related area on Earth is not a rectangle!

Now that we have all these boxes at all these levels, we will be indexing points "into" them.

For a point (lat,long) we calculate its projection (x,y) and then we calculate for each level of the spatial index, the ids of the boxes it belongs to.

At each level the point is in one and only one box. For points on the edges the boxes are considered exclusive on the left side and inclusive on the right, i.e. ]start,end] (the points are normalized before projection to [-90,+90],]-180,+180]).

We store in the Lucene document corresponding to the entity to index one field for each level of the quad tree. The field is named: [spatial index fields name]_HSSI_[n]. [spatial index fields name] is given either by the parameter of the class level annotation or derived from the name of the spatial annotated method of the entity, HSSI stands for Hibernate Search Spatial Index and n is the level of the quad tree.

We also store the latitude and longitude as numeric fields under the [spatial index fields name]_HSSI_Latitude and [spatial index fields name]_HSSI_Longitude fields. They will be used to filter results precisely by distance in the second stage of the search.

Now that we have all these fields, what are they used for?

When you ask for a spatial search by providing a search disc (center + radius) we will calculate the box ids that cover the search disc in the projected space, fetch all the documents that belong to these boxes (thus narrowing the number of documents for which we will have to calculate the distance to the center) and then filter this subset with a real distance calculation. This is called two level spatial filtering.

In this final chapter we are offering a smorgasbord of tips and tricks which might become useful as you dive deeper and deeper into Hibernate Search.

Queries in Lucene are executed on an IndexReader. Hibernate Search caches index readers to maximize performance and implements other strategies to retrieve updated IndexReaders in order to minimize IO operations. Your code can access these cached resources, but you have to follow some "good citizen" rules.
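A sketch of the access pattern, using the IndexReaderAccessor described below and an illustrative Order entity:

SearchFactory searchFactory = fullTextSession.getSearchFactory();
IndexReader reader = searchFactory.getIndexReaderAccessor().open(Order.class);
try {
    // perform read-only operations on the reader, e.g. native Lucene queries
} finally {
    // always return the reader to the accessor, never close it directly
    searchFactory.getIndexReaderAccessor().close(reader);
}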


In this example the SearchFactory figures out which indexes are needed to query this entity. Using the configured ReaderProvider (described in Reader strategy) on each index, it returns a compound IndexReader on top of all involved indexes. Because this IndexReader is shared amongst several clients, you must adhere to the following rules:

  • Never call indexReader.close(): always return the reader by calling close(reader) on the IndexReaderAccessor, using a finally block.

  • Don't use this IndexReader for modification operations: it's a read-only IndexReader; you would get an exception.

Aside from those rules, you can use the IndexReader freely, especially to perform native Lucene queries. Using these shared IndexReaders will be more efficient than opening one directly from, for example, the filesystem.

As an alternative to the method open(Class... types) you can use open(String... indexNames); in this case you pass in one or more index names; using this strategy you can also select a subset of the indexes for any indexed type if sharding is used.
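For example, to open a couple of shard indexes by name (the index names are illustrative):

IndexReader reader = searchFactory.getIndexReaderAccessor().open("Products.1", "Products.3");
try {
    // the reader now only covers the selected shards
} finally {
    searchFactory.getIndexReaderAccessor().close(reader);
}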


In some cases it can be useful to split (shard) the data into several Lucene indexes. There are two main use cases:

Dynamic sharding allows you to manage the shards yourself and even create new shards on the fly. To do so you need to implement the interface ShardIdentifierProvider and set the hibernate.search.[default|<indexName>].sharding_strategy property to the fully qualified name of this class. Note that instead of implementing the interface directly, you should rather derive your implementation from org.hibernate.search.store.ShardIdentifierProviderTemplate which provides a basic implementation. Let's look at Example 10.6, “Custom ShardIdentifierProvider” for an example.
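A sketch of such a provider, consistent with the description below; the exact service lookup calls are an assumption of this sketch:

public static class AnimalShardIdentifierProvider extends ShardIdentifierProviderTemplate {

    @Override
    public String getShardIdentifier(Class<?> entityType, Serializable id,
            String idAsString, Document document) {
        if (entityType.equals(Animal.class)) {
            // use the type field of the document as shard name
            String typeValue = document.getFieldable("type").stringValue();
            addShard(typeValue); // register the shard with the template base class
            return typeValue;
        }
        throw new RuntimeException("Animal expected but found " + entityType);
    }

    @Override
    protected Set<String> loadInitialShardNames(Properties properties, BuildContext buildContext) {
        ServiceManager serviceManager = buildContext.getServiceManager();
        SessionFactory sessionFactory = serviceManager.requestService(
            HibernateSessionFactoryServiceProvider.class, buildContext);
        Session session = sessionFactory.openSession();
        try {
            // the initial shard set: the distinct animal types in the database
            Criteria initialShardsCriteria = session.createCriteria(Animal.class);
            initialShardsCriteria.setProjection(Projections.distinct(Property.forName("type")));
            @SuppressWarnings("unchecked")
            List<String> initialTypes = initialShardsCriteria.list();
            return new HashSet<String>(initialTypes);
        } finally {
            session.close();
        }
    }
}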


There are several things happening in AnimalShardIdentifierProvider. First off, its purpose is to create one shard per animal type (e.g. mammal, insect, etc.). It does so by inspecting the class type and the Lucene document passed to the getShardIdentifier() method. It extracts the type field from the document and uses it as shard name. getShardIdentifier() is called for every addition to the index and a new shard will be created with every new animal type encountered. The base class ShardIdentifierProviderTemplate maintains a set with all known shards to which any identifier must be added by calling addShard().

It is important to understand that Hibernate Search cannot know which shards already exist when the application starts. When using ShardIdentifierProviderTemplate as base class of a ShardIdentifierProvider implementation, the initial set of shard identifiers must be returned by the loadInitialShardNames() method. How this is done will depend on the use case. However, a common case in combination with Hibernate ORM is that the initial shard set is defined by the distinct values of a given database column. Example 10.6, “Custom ShardIdentifierProvider” shows how to handle such a case. AnimalShardIdentifierProvider uses in its loadInitialShardNames() implementation a service called HibernateSessionFactoryServiceProvider (see also Section 10.6, “Using external services”) which is available within an ORM environment. It allows to request a Hibernate SessionFactory instance which can be used to run a Criteria query in order to determine the initial set of shard identifiers.

Last but not least, the ShardIdentifierProvider also allows for optimizing searches by selecting which shard to run a query against. By activating a filter (see Section 5.3.1, “Using filters in a sharded environment”), a sharding strategy can select a subset of the shards used to answer a query (getShardIdentifiersForQuery(), not shown in the example) and thus speed up the query execution.

Important

This ShardIdentifierProvider is considered experimental. We might need to apply some changes to the defined method signatures to accommodate unforeseen use cases. Please provide feedback if you have ideas, or just to let us know how you're using this API.

Any of the pluggable contracts we have seen so far allows for the injection of a service, the most notable example being the DirectoryProvider.

Some of these components need to access a service which is either available in the environment or whose lifecycle is bound to the SearchFactory. Sometimes, you even want the same service to be shared amongst several instances of these contracts. One example is the ability to share an Infinispan cache instance between several directory providers running in different JVMs to store the various indexes using the same underlying infrastructure; this provides real-time replication of indexes across nodes.

To expose a service, you need to implement org.hibernate.search.spi.ServiceProvider<T>. T is the type of the service you want to use. Services are retrieved by components via their ServiceProvider class implementation.
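A sketch of a provider, assuming hypothetical Cache and CacheManager classes and that start() receives the configuration properties together with a build context:

public class CacheServiceProvider implements ServiceProvider<Cache> {

    private CacheManager manager; // CacheManager/Cache are hypothetical classes

    @Override
    public void start(Properties properties, BuildContext context) {
        // initialize the shared service from the configuration properties
        manager = new CacheManager(properties);
    }

    @Override
    public Cache getService() {
        return manager.getCache("default");
    }

    @Override
    public void stop() {
        manager.close();
    }
}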

Lucene allows the user to customize its scoring formula by extending org.apache.lucene.search.Similarity. The abstract methods defined in this class match the factors of the following formula calculating the score of query q for document d:

score(q,d) = coord(q,d) · queryNorm(q) · ∑ t in q ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )

  • tf(t in d): term frequency factor for the term (t) in the document (d).

  • idf(t): inverse document frequency of the term.

  • coord(q,d): score factor based on how many of the query terms are found in the specified document.

  • queryNorm(q): normalizing factor used to make scores between queries comparable.

  • t.getBoost(): field boost.

  • norm(t,d): encapsulates a few (indexing time) boost and length factors.

It is beyond the scope of this manual to explain this formula in more detail. Please refer to Similarity's Javadocs for more information.

Hibernate Search provides two ways to modify Lucene's similarity calculation.

First you can set the default similarity by specifying the fully qualified class name of your Similarity implementation using the property hibernate.search.similarity. The default value is org.apache.lucene.search.DefaultSimilarity.

Secondly, you can override the similarity used for a specific index by setting the similarity property for this index (see Section 3.3, “Directory configuration” for more information about index configuration):

hibernate.search.[default|<indexname>].similarity = my.custom.Similarity

As an example, let's assume it is not important how often a term appears in a document. Documents with a single occurrence of the term should be scored the same as documents with multiple occurrences. In this case your custom implementation of the method tf(float freq) should return 1.0.
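A sketch of such an implementation (the class name is illustrative):

public class IgnoreFrequencySimilarity extends DefaultSimilarity {

    @Override
    public float tf(float freq) {
        // a single occurrence scores the same as many occurrences
        return 1.0f;
    }
}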

Note

When two entities share the same index they must declare the same Similarity implementation.

Note

The @Similarity annotation, which was used to configure the similarity on class level, is deprecated since Hibernate Search 4.4. Instead of using the annotation, use the configuration property.

Last but not least, a few pointers to further information. We highly recommend you get a copy of Hibernate Search in Action. This excellent book covers Hibernate Search in much more depth than this online documentation can and has a great range of additional examples. If you want to increase your knowledge of Lucene we recommend Lucene in Action (Second Edition). Because Hibernate Search's functionality is tightly coupled to Hibernate Core it is a good idea to understand Hibernate. Start with the online documentation or get hold of a copy of Java Persistence with Hibernate.

If you have any further questions regarding Hibernate Search or want to share some of your use cases have a look at the Hibernate Search Wiki and the Hibernate Search Forum. We are looking forward to hearing from you.

In case you would like to report a bug use the Hibernate Search JIRA instance. Feedback is always welcome!