Full text search engines like Apache Lucene are very powerful technologies for adding efficient free text search capabilities to applications. However, Lucene suffers from several mismatches when dealing with an object domain model: indexes have to be kept up to date, and mismatches between the index structure and the domain model, as well as query mismatches, have to be avoided.
Hibernate Search addresses these shortcomings: it indexes your domain model with the help of a few annotations, takes care of database/index synchronization and brings back regular managed objects from free text queries. To achieve this, Hibernate Search combines the power of Hibernate and Apache Lucene.
Welcome to Hibernate Search. The following chapter will guide you through the initial steps required to integrate Hibernate Search into an existing Hibernate enabled application. If you are new to Hibernate we recommend you start here.
Table 1.1. System requirements

Software | Description |
---|---|
Java Runtime | A JDK or JRE version 5 or greater. You can download a Java Runtime for Windows/Linux/Solaris here. |
Hibernate Search | hibernate-search.jar and all runtime dependencies from the dist/lib directory of the Hibernate Search distribution. |
Hibernate Core | These instructions have been tested against Hibernate 3.5. You will need hibernate-core.jar and its transitive dependencies from the lib directory of the distribution. Refer to README.txt in the lib directory of the distribution to determine the minimum runtime requirements. |
Hibernate Annotations | Even though Hibernate Search can be used without Hibernate Annotations, the following instructions use them for basic entity configuration (@Entity, @Id, @OneToMany, ...). This part of the configuration could also be expressed in XML or code. However, Hibernate Search itself has its own set of annotations (@Indexed, @DocumentId, @Field, ...) for which no alternative configuration exists so far. The tutorial is tested against version 3.5 of Hibernate Annotations (part of the Hibernate Core distribution). |
You can download all dependencies from the Hibernate download site. Instead of managing all dependencies manually, Maven users can use the JBoss Maven repository. Add the following to your Maven settings.xml file (see also Maven Getting Started):
Example 1.1. Adding the JBoss Maven repository to settings.xml
<settings>
  ...
  <profiles>
    ...
    <profile>
      <id>jboss-public-repository</id>
      <repositories>
        <repository>
          <id>jboss-public-repository-group</id>
          <name>JBoss Public Maven Repository Group</name>
          <url>https://repository.jboss.org/nexus/content/groups/public/</url>
          <layout>default</layout>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
          </releases>
          <snapshots>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
          </snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>jboss-public-repository-group</id>
          <name>JBoss Public Maven Repository Group</name>
          <url>https://repository.jboss.org/nexus/content/groups/public/</url>
          <layout>default</layout>
          <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
          </releases>
          <snapshots>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>jboss-public-repository</activeProfile>
  </activeProfiles>
  ...
</settings>
Then add the following dependencies to your pom.xml:
Example 1.2. Maven dependencies for Hibernate Search
<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-search</artifactId>
   <version>3.2.1.Final</version>
</dependency>
<dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-entitymanager</artifactId>
   <version>3.5.0-Final</version>
</dependency>
Only the hibernate-search dependency is mandatory; together with its transitive dependencies it contains all classes needed to use Hibernate Search. hibernate-entitymanager is only required if you want to use Hibernate Search in conjunction with JPA.
There is no XML configuration available for Hibernate Search, but we provide a powerful programmatic mapping API that elegantly replaces this kind of deployment form (see Section 4.4, “Programmatic API” for more information).
Once you have downloaded and added all required dependencies to your application you have to add a couple of properties to your Hibernate configuration file. If you are using Hibernate directly this can be done in hibernate.properties or hibernate.cfg.xml. If you are using Hibernate via JPA you can also add the properties to persistence.xml. The good news is that for standard use most properties offer a sensible default. An example persistence.xml configuration could look like this:
Example 1.3. Basic configuration options to be added to hibernate.properties, hibernate.cfg.xml or persistence.xml
...
<property name="hibernate.search.default.directory_provider"
          value="org.hibernate.search.store.FSDirectoryProvider"/>
<property name="hibernate.search.default.indexBase"
          value="/var/lucene/indexes"/>
...
First you have to tell Hibernate Search which DirectoryProvider to use. This can be achieved by setting the hibernate.search.default.directory_provider property. Apache Lucene has the notion of a Directory to store the index files. Hibernate Search handles the initialization and configuration of a Lucene Directory instance via a DirectoryProvider. In this tutorial we will use a subclass of DirectoryProvider called FSDirectoryProvider. This will give us the ability to physically inspect the Lucene indexes created by Hibernate Search (e.g. via Luke). Once you have a working configuration you can start experimenting with other directory providers (see Section 3.1, “Directory configuration”). Next to the directory provider you also have to specify the default root directory for all indexes via hibernate.search.default.indexBase.
Let's assume that your application contains the Hibernate managed classes example.Book and example.Author and you want to add free text search capabilities to your application in order to search the books contained in your database.
Example 1.4. Example entities Book and Author before adding Hibernate Search specific annotations
package example;
...
@Entity
public class Book {

   @Id
   @GeneratedValue
   private Integer id;

   private String title;

   private String subtitle;

   @ManyToMany
   private Set<Author> authors = new HashSet<Author>();

   private Date publicationDate;

   public Book() {}

   // standard getters/setters follow here
   ...
}
package example;
...
@Entity
public class Author {

   @Id
   @GeneratedValue
   private Integer id;

   private String name;

   public Author() {}

   // standard getters/setters follow here
   ...
}
To achieve this you have to add a few annotations to the Book and Author classes. The first annotation, @Indexed, marks Book as indexable. By design Hibernate Search needs to store an untokenized id in the index to ensure index unicity for a given entity. @DocumentId marks the property to use for this purpose and is in most cases the same as the database primary key. In fact, since the 3.1.0 release of Hibernate Search, @DocumentId is optional when an @Id annotation exists.
Next you have to mark the fields you want to make searchable. Let's start with title and subtitle and annotate both with @Field. The parameter index=Index.TOKENIZED will ensure that the text is tokenized using the default Lucene analyzer. Usually, tokenizing means chunking a sentence into individual words and potentially excluding common words like 'a' or 'the'. We will talk more about analyzers a little later on. The second parameter we specify within @Field, store=Store.NO, ensures that the actual data will not be stored in the index. Whether this data is stored in the index or not has nothing to do with the ability to search for it. From Lucene's perspective it is not necessary to keep the data once the index is created. The benefit of storing it is the ability to retrieve it via projections (see Section 5.1.2.5, “Projection”).
Without projections, Hibernate Search will by default execute a Lucene query in order to find the database identifiers of the entities matching the query criteria and use these identifiers to retrieve managed objects from the database. The decision for or against projection has to be made on a case-by-case basis. The default behaviour, Store.NO, is recommended since it returns managed objects, whereas projections only return object arrays.
After this short look under the hood let's go back to annotating the Book class. Another annotation we have not yet discussed is @DateBridge. This annotation is one of the built-in field bridges in Hibernate Search. The Lucene index is purely string based. For this reason Hibernate Search must convert the data types of the indexed fields to strings and vice versa. A range of predefined bridges is provided, including the DateBridge which will convert a java.util.Date into a String with the specified resolution. For more details see Section 4.2, “Property/Field Bridge”.
This leaves us with @IndexedEmbedded. This annotation is used to index associated entities (@ManyToMany, @*ToOne and @Embedded) as part of the owning entity. This is needed since a Lucene index document is a flat data structure which does not know anything about object relations. To ensure that the authors' names will be searchable you have to make sure that the names are indexed as part of the book itself. On top of @IndexedEmbedded you will also have to mark all fields of the associated entity you want to have included in the index with @Field. For more details see Section 4.1.3, “Embedded and associated objects”.
These settings should be sufficient for now. For more details on entity mapping refer to Section 4.1, “Mapping an entity”.
Example 1.5. Example entities after adding Hibernate Search annotations
package example;
...
@Entity
@Indexed
public class Book {

   @Id
   @GeneratedValue
   private Integer id;

   @Field(index=Index.TOKENIZED, store=Store.NO)
   private String title;

   @Field(index=Index.TOKENIZED, store=Store.NO)
   private String subtitle;

   @IndexedEmbedded
   @ManyToMany
   private Set<Author> authors = new HashSet<Author>();

   @Field(index = Index.UN_TOKENIZED, store = Store.YES)
   @DateBridge(resolution = Resolution.DAY)
   private Date publicationDate;

   public Book() {
   }

   // standard getters/setters follow here
   ...
}
package example;
...
@Entity
public class Author {
@Id
@GeneratedValue
private Integer id;
@Field(index=Index.TOKENIZED, store=Store.NO)
private String name;
public Author() {
}
// standard getters/setters follow here
...
}
Hibernate Search will transparently index every entity persisted, updated or removed through Hibernate Core. However, you have to create an initial Lucene index for the data already present in your database. Once you have added the above properties and annotations it is time to trigger an initial batch index of your books. You can achieve this by using one of the following code snippets (see also Section 6.3, “Rebuilding the whole Index”):
Example 1.6. Using Hibernate Session to index data
FullTextSession fullTextSession = Search.getFullTextSession(session);
fullTextSession.createIndexer().startAndWait();
Example 1.7. Using JPA to index data
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(em);
fullTextEntityManager.createIndexer().startAndWait();
After executing the above code, you should be able to see a Lucene index under /var/lucene/indexes/example.Book. Go ahead and inspect this index with Luke. It will help you to understand how Hibernate Search works.
Now it is time to execute a first search. The general approach is to create a native Lucene query and then wrap this query into an org.hibernate.Query in order to get all the functionality one is used to from the Hibernate API. The following code will prepare a query against the indexed fields, execute it and return a list of Books.
Example 1.8. Using Hibernate Session to create and execute a search
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();

// create native Lucene query
String[] fields = new String[]{"title", "subtitle", "authors.name", "publicationDate"};
MultiFieldQueryParser parser = new MultiFieldQueryParser(fields, new StandardAnalyzer());
org.apache.lucene.search.Query query = parser.parse( "Java rocks!" );

// wrap Lucene query in a org.hibernate.Query
org.hibernate.Query hibQuery = fullTextSession.createFullTextQuery(query, Book.class);

// execute search
List result = hibQuery.list();

tx.commit();
session.close();
Example 1.9. Using JPA to create and execute a search
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
    org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
em.getTransaction().begin();

// create native Lucene query
String[] fields = new String[]{"title", "subtitle", "authors.name", "publicationDate"};
MultiFieldQueryParser parser = new MultiFieldQueryParser(fields, new StandardAnalyzer());
org.apache.lucene.search.Query query = parser.parse( "Java rocks!" );

// wrap Lucene query in a javax.persistence.Query
javax.persistence.Query persistenceQuery =
    fullTextEntityManager.createFullTextQuery(query, Book.class);

// execute search
List result = persistenceQuery.getResultList();

em.getTransaction().commit();
em.close();
Let's make things a little more interesting now. Assume that one of your indexed book entities has the title "Refactoring: Improving the Design of Existing Code" and you want to get hits for all of the following queries: "refactor", "refactors", "refactored" and "refactoring". In Lucene this can be achieved by choosing an analyzer class which applies word stemming during the indexing as well as search process. Hibernate Search offers several ways to configure the analyzer to use (see Section 4.1.6, “Analyzer”):
Setting the hibernate.search.analyzer property in the configuration file. The specified class will then be the default analyzer (see the sketch after this list).
Setting the @Analyzer annotation at the entity level.
Setting the @Analyzer annotation at the field level.
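A minimal sketch of the first option; StandardAnalyzer here is only an illustration, any fully qualified analyzer class name can be used:

# e.g. in hibernate.properties
hibernate.search.analyzer = org.apache.lucene.analysis.standard.StandardAnalyzer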
When using the @Analyzer annotation one can either specify the fully qualified classname of the analyzer to use or one can refer to an analyzer definition defined by the @AnalyzerDef annotation. In the latter case the Solr analyzer framework with its factories approach is utilized. To find out more about the factory classes available you can either browse the Solr JavaDoc or read the corresponding section on the Solr Wiki.
In the example below a StandardTokenizerFactory is used followed by two filter factories, LowerCaseFilterFactory and SnowballPorterFilterFactory. The standard tokenizer splits words at punctuation characters and hyphens while keeping email addresses and internet hostnames intact. It is a good general purpose tokenizer. The lowercase filter lowercases the letters in each token whereas the snowball filter finally applies language specific stemming.
Generally, when using the Solr framework you have to start with a tokenizer followed by an arbitrary number of filters.
Example 1.10. Using @AnalyzerDef and the Solr framework to define and use an analyzer
package example;
...
@Entity
@Indexed
@AnalyzerDef(name = "customanalyzer",
   tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
   filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = SnowballPorterFilterFactory.class, params = {
         @Parameter(name = "language", value = "English")
      })
   })
public class Book {

   @Id
   @GeneratedValue
   @DocumentId
   private Integer id;

   @Field(index=Index.TOKENIZED, store=Store.NO)
   @Analyzer(definition = "customanalyzer")
   private String title;

   @Field(index=Index.TOKENIZED, store=Store.NO)
   @Analyzer(definition = "customanalyzer")
   private String subtitle;

   @IndexedEmbedded
   @ManyToMany
   private Set<Author> authors = new HashSet<Author>();

   @Field(index = Index.UN_TOKENIZED, store = Store.YES)
   @DateBridge(resolution = Resolution.DAY)
   private Date publicationDate;

   public Book() {
   }

   // standard getters/setters follow here
   ...
}
The above paragraphs gave you an overview of Hibernate Search. The next step after this tutorial is to get more familiar with the overall architecture of Hibernate Search (Chapter 2, Architecture) and explore the basic features in more detail. Two topics which were only briefly touched upon in this tutorial were analyzer configuration (Section 4.1.6, “Analyzer”) and field bridges (Section 4.2, “Property/Field Bridge”), both important features required for more fine-grained indexing. More advanced topics cover clustering (Section 3.5, “JMS Master/Slave configuration”) and handling large indexes (Section 3.2, “Sharding indexes”).
Hibernate Search consists of an indexing component and an index search component. Both are backed by Apache Lucene.
Each time an entity is inserted, updated or removed in/from the database, Hibernate Search keeps track of this event (through the Hibernate event system) and schedules an index update. All the index updates are handled without you having to use the Apache Lucene APIs (see Section 3.8, “Enabling Hibernate Search and automatic indexing”).
To interact with Apache Lucene indexes, Hibernate Search has the notion of DirectoryProviders. A directory provider will manage a given Lucene Directory type. You can configure directory providers to adjust the directory target (see Section 3.1, “Directory configuration”).
Hibernate Search uses the Lucene index to search an entity and return a list of managed entities, saving you the tedious object-to-Lucene-document mapping. The same persistence context is shared between Hibernate and Hibernate Search. As a matter of fact, the FullTextSession is built on top of the Hibernate Session so that the application code can use the unified org.hibernate.Query or javax.persistence.Query APIs exactly the way HQL, JPA-QL or native queries would.
To be more efficient, Hibernate Search batches the write interactions with the Lucene index. There are currently two types of batching, depending on the expected scope. Outside a transaction, the index update operation is executed right after the actual database operation. This scope is really a no-scope setup and no batching is performed. However, it is recommended - for both your database and Hibernate Search - to execute your operations in a transaction, be it JDBC or JTA. When in a transaction, the index update operation is scheduled for the transaction commit phase and discarded in case of transaction rollback. The batching scope is the transaction. There are two immediate benefits:
Performance: Lucene indexing works better when operations are executed in batch.
ACIDity: The work executed has the same scoping as the one executed by the database transaction and is executed if and only if the transaction is committed. This is not ACID in the strict sense, but ACID behavior is rarely useful for full text search indexes since they can be rebuilt from the source at any time.
You can think of those two scopes (no scope vs transactional) as the equivalent of the (infamous) autocommit vs transactional behavior. From a performance perspective, the in-transaction mode is recommended. The scoping choice is made transparently: Hibernate Search detects the presence of a transaction and adjusts the scoping accordingly.
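For illustration, a minimal sketch of the transactional scope in action, reusing the Book entity from the getting started chapter:

// persisting an entity inside a transaction; the index update is
// scheduled during the transaction and applied only at commit
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
Book book = new Book();
book.setTitle("Java Persistence with Hibernate");
session.persist(book);
tx.commit();   // Lucene index updated here; a rollback would discard the index work
session.close();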
Hibernate Search offers the ability to have the scoped work processed by different back ends. Two back ends are provided out of the box for two different scenarios.
In this mode, all index update operations applied on a given node (JVM) will be applied to the Lucene directories (through the directory providers) by the same node. This mode is typically used in non-clustered environments or in clustered environments where the directory store is shared.
Lucene back end configuration.
This mode targets non clustered applications, or clustered applications where the Directory is taking care of the locking strategy.
The main advantage is simplicity and immediate visibility of the changes in Lucene queries (a requirement in some applications).
All index update operations applied on a given node are sent to a JMS queue. A single reader will then process the queue and update the master index. The master index is then replicated on a regular basis to the slave copies. This is known as the master/slave pattern. The master is solely responsible for updating the Lucene index. The slaves can accept read as well as write operations. However, they only process read operations on their local index copy and delegate update operations to the master.
JMS back end configuration.
This mode targets clustered environments where throughput is critical and index update delays are acceptable. Reliability is ensured by the JMS provider and by having the slaves working on a local copy of the index.
The JGroups based back end works similarly to the JMS one. It is designed on the same master/slave pattern, but uses the JGroups toolkit instead of JMS as the replication mechanism. This back end can be used as an alternative to the JMS one when response time is critical but, for example, no JNDI service is available.
The indexing work (done by the back end) can be executed synchronously with the transaction commit (or update operation if out of transaction), or asynchronously.
This is the safe mode where the back end work is executed in concert with the transaction commit. In a highly concurrent environment, this can lead to throughput limitations (due to the Apache Lucene lock mechanism) and it can increase the system response time if the back end is significantly slower than the transactional process and if a lot of IO operations are involved.
This mode delegates the work done by the back end to a different thread. That way, throughput and response time are (to a certain extent) decoupled from the back end performance. The drawback is that a small delay appears between the transaction commit and the index update, and a small overhead is introduced to deal with thread management.
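The corresponding worker properties (detailed in Table 3.2, “Worker configuration”) could for instance be set as follows; the values are illustrative:

hibernate.search.worker.execution = async
hibernate.search.worker.thread_pool.size = 2
hibernate.search.worker.buffer_queue.max = 50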
It is recommended to use synchronous execution first and evaluate asynchronous execution only if performance problems occur, after having set up a proper benchmark (i.e. not a lonely cowboy hitting the system in a completely unrealistic way).
When executing a query, Hibernate Search interacts with the Apache Lucene indexes through a reader strategy. Choosing a reader strategy will depend on the profile of the application (frequent updates, read mostly, asynchronous index update etc). See also Section 3.7, “Reader strategy configuration”.
With this strategy, Hibernate Search will share the same IndexReader, for a given Lucene index, across multiple queries and threads provided that the IndexReader is still up-to-date. If the IndexReader is not up-to-date, a new one is opened and provided. Each IndexReader is made of several SegmentReaders. This strategy only reopens segments that have been modified or created after the last opening and shares the already loaded segments from the previous instance. This strategy is the default.
The name of this strategy is shared.
Every time a query is executed, a Lucene IndexReader is opened. This strategy is not the most efficient since opening and warming up an IndexReader can be a relatively expensive operation.
The name of this strategy is not-shared.
Apache Lucene has a notion of Directory to store the index files. The Directory implementation can be customized, but Lucene comes bundled with a file system and an in-memory implementation; Hibernate Search exposes these through FSDirectoryProvider and RAMDirectoryProvider respectively. DirectoryProviders are the Hibernate Search abstraction around a Lucene Directory and handle the configuration and the initialization of the underlying Lucene resources. Table 3.1, “List of built-in Directory Providers” shows the list of the directory providers bundled with Hibernate Search.
Table 3.1. List of built-in Directory Providers

Class | Description | Properties |
---|---|---|
org.hibernate.search.store.RAMDirectoryProvider | Memory based directory; the directory will be uniquely identified (in the same deployment unit) by the @Indexed.index element. | none |
org.hibernate.search.store.FSDirectoryProvider | File system based directory. The directory used will be <indexBase>/<indexName>. | indexBase: base directory. indexName: override @Indexed.index (useful for sharded indexes). |
org.hibernate.search.store.FSMasterDirectoryProvider | File system based directory. Like FSDirectoryProvider, but it also copies the index to a source directory (aka copy directory) on a regular basis. The recommended value for the refresh period is (at least) 50% higher than the time to copy the information (default 3600 seconds - 60 minutes). Note that the copy is based on an incremental copy mechanism reducing the average copy time. This is the DirectoryProvider typically used on the master node in a JMS back end cluster. | indexBase: base directory. indexName: override @Indexed.index. sourceBase: source (copy) base directory. refresh: refresh period in seconds (the copy takes place every refresh seconds). |
org.hibernate.search.store.FSSlaveDirectoryProvider | File system based directory. Like FSDirectoryProvider, but retrieves a master version (source) on a regular basis. To avoid locking and inconsistent search results, 2 local copies are kept. The recommended value for the refresh period is (at least) 50% higher than the time to copy the information (default 3600 seconds - 60 minutes). Note that the copy is based on an incremental copy mechanism reducing the average copy time. This is the DirectoryProvider typically used on slave nodes using a JMS back end. | indexBase: base directory. indexName: override @Indexed.index. sourceBase: source (master) base directory. refresh: refresh period in seconds (the copy takes place every refresh seconds). |
If the built-in directory providers do not fit your needs, you can write your own directory provider by implementing the org.hibernate.search.store.DirectoryProvider interface.
Each indexed entity is associated to a Lucene index (an index can be shared by several entities, but this is not usually the case). You can configure the index through properties prefixed by hibernate.search.<indexName>. Default properties inherited by all indexes can be defined using the prefix hibernate.search.default. To define the directory provider of a given index, you use the hibernate.search.<indexName>.directory_provider property:
Example 3.1. Configuring directory providers
hibernate.search.default.directory_provider org.hibernate.search.store.FSDirectoryProvider
hibernate.search.default.indexBase=/usr/lucene/indexes
hibernate.search.Rules.directory_provider org.hibernate.search.store.RAMDirectoryProvider
applied to
Example 3.2. Specifying the index name using the index parameter of @Indexed
@Indexed(index="Status")
public class Status { ... }

@Indexed(index="Rules")
public class Rule { ... }
will create a file system directory in /usr/lucene/indexes/Status where the Status entities will be indexed, and use an in-memory directory named Rules where Rule entities will be indexed.
You can easily define common rules like the directory provider and base directory, and override those defaults later on a per-index basis.
When writing your own DirectoryProvider, you can utilize this configuration mechanism as well.
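As a rough illustration, a custom provider could look like the sketch below. The method names and signatures are assumptions based on the Hibernate Search 3.x DirectoryProvider contract; check the interface's javadoc before implementing:

package example;

import java.util.Properties;

import org.apache.lucene.store.RAMDirectory;
import org.hibernate.search.engine.SearchFactoryImplementor;
import org.hibernate.search.store.DirectoryProvider;

// a minimal, memory backed sketch; signatures assumed from Hibernate Search 3.x
public class MyRamDirectoryProvider implements DirectoryProvider<RAMDirectory> {

   private RAMDirectory directory;

   public void initialize(String directoryProviderName, Properties properties,
                          SearchFactoryImplementor searchFactory) {
      // read custom hibernate.search.<indexName>.* properties here if needed
   }

   public void start() {
      // a real provider would also create the initial index structure here
      directory = new RAMDirectory();
   }

   public RAMDirectory getDirectory() {
      return directory;
   }

   public void stop() {
      directory.close();
   }
}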
In some cases, it is necessary to split (shard) the indexing data of a given entity type into several Lucene indexes. This solution is not recommended unless there is a pressing need because by default, searches will be slower as all shards have to be opened for a single search. In other words don't do it until you have problems :)
For example, sharding may be desirable if:
A single index is so huge that index update times are slowing the application down.
A typical search will only hit a sub-set of the index, such as when data is naturally segmented by customer, region or application.
Hibernate Search allows you to index a given entity type into several sub-indexes. Data is sharded into the different sub-indexes thanks to an IndexShardingStrategy. By default, no sharding strategy is enabled, unless the number of shards is configured. To configure the number of shards use the following property:
Example 3.3. Enabling index sharding by specifying nbr_of_shards for a specific index
hibernate.search.<indexName>.sharding_strategy.nbr_of_shards 5
This will use 5 different shards.
The default sharding strategy, when shards are set up, splits the data according to the hash value of the id string representation (generated by the Field Bridge). This ensures a fairly balanced sharding. You can replace the strategy by implementing IndexShardingStrategy and by setting the following property:
Example 3.4. Specifying a custom sharding strategy
hibernate.search.<indexName>.sharding_strategy my.shardingstrategy.Implementation
Using a custom IndexShardingStrategy implementation, it's possible to define what shard a given entity is indexed to. It also allows for optimizing searches by selecting which shards to run the query on. By activating a filter (see Section 5.3.1, “Using filters in a sharded environment”), a sharding strategy can select a subset of the shards used to answer a query (IndexShardingStrategy.getDirectoryProvidersForQuery) and thus speed up the query execution.
Each shard has an independent directory provider configuration as described in Section 3.1, “Directory configuration”. The default DirectoryProvider names for the previous example are <indexName>.0 to <indexName>.4. In other words, each shard has the name of its owning index followed by . (dot) and its index number.
Example 3.5. Configuring the sharding configuration for an example entity Animal
hibernate.search.default.indexBase /usr/lucene/indexes
hibernate.search.Animal.sharding_strategy.nbr_of_shards 5
hibernate.search.Animal.directory_provider org.hibernate.search.store.FSDirectoryProvider
hibernate.search.Animal.0.indexName Animal00
hibernate.search.Animal.3.indexBase /usr/lucene/sharded
hibernate.search.Animal.3.indexName Animal03
This configuration uses the default id string hashing strategy and shards the Animal index into 5 sub-indexes. All sub-indexes are FSDirectoryProvider instances and the directory where each sub-index is stored is as follows:
for subindex 0: /usr/lucene/indexes/Animal00 (shared indexBase but overridden indexName)
for subindex 1: /usr/lucene/indexes/Animal.1 (shared indexBase, default indexName)
for subindex 2: /usr/lucene/indexes/Animal.2 (shared indexBase, default indexName)
for subindex 3: /usr/lucene/sharded/Animal03 (overridden indexBase, overridden indexName)
for subindex 4: /usr/lucene/indexes/Animal.4 (shared indexBase, default indexName)
This is only presented here so that you know the option is available. There is really not much benefit in sharing indexes.
It is technically possible to store the information of more than one entity into a single Lucene index. There are two ways to accomplish this:
Configuring the underlying directory providers to point to the same physical index directory. In practice, you set the property hibernate.search.[fully qualified entity name].indexName to the same value. As an example, let's use the same index (directory) for the Furniture and Animal entities. We just set indexName for both entities to, for example, "Animal". Both entities will then be stored in the Animal directory:

hibernate.search.org.hibernate.search.test.shards.Furniture.indexName = Animal
hibernate.search.org.hibernate.search.test.shards.Animal.indexName = Animal
Setting the @Indexed annotation's index attribute of the entities you want to merge to the same value. If we again wanted all Furniture instances to be indexed in the Animal index along with all instances of Animal, we would specify @Indexed(index="Animal") on both the Animal and Furniture classes.
It is possible to refine how Hibernate Search interacts with Lucene through the worker configuration. The work can be applied to the Lucene directory or sent to a JMS queue for later processing. When applied to the Lucene directory, the work can be processed synchronously or asynchronously with the transaction commit.
You can define the worker configuration using the following properties:
Table 3.2. Worker configuration

Property | Description |
---|---|
hibernate.search.worker.backend | Out of the box support for the Apache Lucene back end and the JMS back end. Defaults to lucene. Also supports jms, blackhole, jgroupsMaster and jgroupsSlave. |
hibernate.search.worker.execution | Supports synchronous and asynchronous execution. Defaults to sync. Also supports async. |
hibernate.search.worker.thread_pool.size | Defines the number of threads in the pool. Useful only for asynchronous execution. Defaults to 1. |
hibernate.search.worker.buffer_queue.max | Defines the maximal size of the work queue if the thread pool is starved. Useful only for asynchronous execution. Defaults to infinite. If the limit is reached, the work is done by the main thread. |
hibernate.search.worker.jndi.* | Defines the JNDI properties to initiate the InitialContext (if needed). JNDI is only used by the JMS back end. |
hibernate.search.worker.jms.connection_factory | Mandatory for the JMS back end. Defines the JNDI name to look up the JMS connection factory from (/ConnectionFactory by default in JBoss AS). |
hibernate.search.worker.jms.queue | Mandatory for the JMS back end. Defines the JNDI name to look up the JMS queue from. The queue will be used to post work messages. |
hibernate.search.worker.jgroups.clusterName | Optional for the JGroups back end. Defines the name of the JGroups channel. |
hibernate.search.worker.jgroups.configurationFile | Optional JGroups network stack configuration. Defines the name of a JGroups configuration file, which must exist on the classpath. |
hibernate.search.worker.jgroups.configurationXml | Optional JGroups network stack configuration. Defines a String representing JGroups configuration as XML. |
hibernate.search.worker.jgroups.configurationString | Optional JGroups network stack configuration. Provides JGroups configuration in plain text. |
This section describes in greater detail how to configure the Master / Slaves Hibernate Search architecture.
JMS back end configuration.
Every index update operation is sent to a JMS queue. Index querying operations are executed on a local index copy.
Example 3.6. JMS Slave configuration
### slave configuration

## DirectoryProvider
# (remote) master location
hibernate.search.default.sourceBase = /mnt/mastervolume/lucenedirs/mastercopy

# local copy location
hibernate.search.default.indexBase = /Users/prod/lucenedirs

# refresh every half hour
hibernate.search.default.refresh = 1800

# appropriate directory provider
hibernate.search.default.directory_provider = org.hibernate.search.store.FSSlaveDirectoryProvider

## Backend configuration
hibernate.search.worker.backend = jms
hibernate.search.worker.jms.connection_factory = /ConnectionFactory
hibernate.search.worker.jms.queue = queue/hibernatesearch
#optional jndi configuration (check your JMS provider for more information)

## Optional asynchronous execution strategy
# hibernate.search.worker.execution = async
# hibernate.search.worker.thread_pool.size = 2
# hibernate.search.worker.buffer_queue.max = 50
A file system local copy is recommended for faster search results.
The refresh period should be higher than the expected copy time.
Every index update operation is taken from a JMS queue and executed. The master index is copied on a regular basis.
Example 3.7. JMS Master configuration
### master configuration

## DirectoryProvider
# (remote) master location where information is copied to
hibernate.search.default.sourceBase = /mnt/mastervolume/lucenedirs/mastercopy

# local master location
hibernate.search.default.indexBase = /Users/prod/lucenedirs

# refresh every half hour
hibernate.search.default.refresh = 1800

# appropriate directory provider
hibernate.search.default.directory_provider = org.hibernate.search.store.FSMasterDirectoryProvider

## Backend configuration
#Backend is the default lucene one
The refresh period should be higher than the expected copy time.
In addition to the Hibernate Search framework configuration, a Message Driven Bean has to be written and set up to process the index work queue through JMS.
Example 3.8. Message Driven Bean processing the indexing queue
@MessageDriven(activationConfig = {
      @ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Queue"),
      @ActivationConfigProperty(propertyName="destination", propertyValue="queue/hibernatesearch"),
      @ActivationConfigProperty(propertyName="DLQMaxResent", propertyValue="1")
   } )
public class MDBSearchController extends AbstractJMSHibernateSearchController implements MessageListener {

   @PersistenceContext EntityManager em;

   //method retrieving the appropriate session
   protected Session getSession() {
      return (Session) em.getDelegate();
   }

   //potentially close the session opened in #getSession(), not needed here
   protected void cleanSessionIfNeeded(Session session) {
   }
}
This example inherits from the abstract JMS controller class available in the Hibernate Search source code and implements a Java EE 5 MDB. This implementation is given as an example and can be adjusted to make use of non Java EE Message Driven Beans, although this will most likely be more complex. For more information about getSession() and cleanSessionIfNeeded(), please check AbstractJMSHibernateSearchController's javadoc.
This section describes how to configure the JGroups Master/Slave back end. The configuration examples illustrated in the JMS Master/Slave configuration section (Section 3.5, “JMS Master/Slave configuration”) also apply here; only a different backend needs to be set.
Every index update operation is sent through a JGroups channel to the master node. Index querying operations are executed on a local index copy.
Example 3.9. JGroups Slave configuration
### slave configuration ## Backend configuration hibernate.search.worker.backend = jgroupsSlave
Every index update operation is taken from a JGroups channel and executed. The master index is copied on a regular basis.
Example 3.10. JGroups Master configuration
### master configuration ## Backend configuration hibernate.search.worker.backend = jgroupsMaster
Optionally, configuration for JGroups transport protocols (UDP, TCP) and the channel name can be defined. It can be applied to both master and slave nodes. There are several ways to configure JGroups transport details. If it is not defined explicitly, the configuration found in the flush-udp.xml file is used.
Example 3.11. JGroups transport protocols configuration
## configuration
#udp.xml file needs to be located in the classpath
hibernate.search.worker.backend.jgroups.configurationFile = udp.xml

#protocol stack configuration provided in XML format
hibernate.search.worker.backend.jgroups.configurationXml =
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-2.8.xsd">
   <UDP mcast_addr="${jgroups.udp.mcast_addr:228.10.10.10}"
        mcast_port="${jgroups.udp.mcast_port:45588}"
        tos="8"
        thread_naming_pattern="pl"
        thread_pool.enabled="true"
        thread_pool.min_threads="2"
        thread_pool.max_threads="8"
        thread_pool.keep_alive_time="5000"
        thread_pool.queue_enabled="false"
        thread_pool.queue_max_size="100"
        thread_pool.rejection_policy="Run"/>
   <PING timeout="1000" num_initial_members="3"/>
   <MERGE2 max_interval="30000" min_interval="10000"/>
   <FD_SOCK/>
   <FD timeout="3000" max_tries="3"/>
   <VERIFY_SUSPECT timeout="1500"/>
   <pbcast.STREAMING_STATE_TRANSFER/>
   <pbcast.FLUSH timeout="0"/>
</config>

#protocol stack configuration provided in "old style" jgroups format
hibernate.search.worker.backend.jgroups.configurationString =
UDP(mcast_addr=228.1.2.3;mcast_port=45566;ip_ttl=32):PING(timeout=3000;
num_initial_members=6):FD(timeout=5000):VERIFY_SUSPECT(timeout=1500):
pbcast.NAKACK(gc_lag=10;retransmit_timeout=3000):UNICAST(timeout=5000):
FRAG:pbcast.GMS(join_timeout=3000;shun=false;print_local_addr=true)
Master and slave nodes communicate over a JGroups channel that is identified by this same name. The name of the channel can be defined explicitly; if not, the default HSearchCluster is used.
Example 3.12. JGroups channel name configuration
## Backend configuration hibernate.search.worker.backend.jgroups.clusterName = Hibernate-Search-Cluster
The different reader strategies are described in Reader strategy. Out of the box strategies are:
shared: share index readers across several queries. This strategy is the most efficient.
not-shared: create an index reader for each individual query.
The default reader strategy is shared. This can be adjusted:
hibernate.search.reader.strategy = not-shared
Adding this property switches to the not-shared strategy.
Or if you have a custom reader strategy:
hibernate.search.reader.strategy = my.corp.myapp.CustomReaderProvider
where my.corp.myapp.CustomReaderProvider is the custom strategy implementation.
Hibernate Search is enabled out of the box when using Hibernate Annotations or Hibernate EntityManager. If, for some reason, you need to disable it, set hibernate.search.autoregister_listeners to false. Note that there is no performance penalty when the listeners are enabled but no entities are annotated as indexed.
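For example, in hibernate.properties:

hibernate.search.autoregister_listeners = false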
To enable Hibernate Search in Hibernate Core (i.e. if you don't use Hibernate Annotations), add the FullTextIndexEventListener for the following six Hibernate events and also add it after the default DefaultFlushEventListener, as in the following example.
Example 3.13. Explicitly enabling Hibernate Search by configuring the FullTextIndexEventListener
<hibernate-configuration>
   <session-factory>
      ...
      <event type="post-update">
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
      <event type="post-insert">
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
      <event type="post-delete">
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
      <event type="post-collection-recreate">
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
      <event type="post-collection-remove">
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
      <event type="post-collection-update">
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
      <event type="flush">
         <listener class="org.hibernate.event.def.DefaultFlushEventListener"/>
         <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
      </event>
   </session-factory>
</hibernate-configuration>
By default, every time an object is inserted, updated or deleted through Hibernate, Hibernate Search updates the corresponding Lucene index. It is sometimes desirable to disable this feature, for instance if your index is read-only or if index updates are done in a batch way (see Section 6.3, “Rebuilding the whole Index”).
To disable event based indexing, set:
hibernate.search.indexing_strategy manual
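With event based indexing disabled you trigger indexing yourself, for example via FullTextSession.index(); a minimal sketch (the session and books variables are assumed):

FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
for (Book book : books) {
    fullTextSession.index(book); // manually index this instance
}
tx.commit(); // index changes are applied at commit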
In most cases, the JMS backend provides the best of both worlds: a lightweight event based system keeps track of all changes in the system, and the heavyweight indexing process is done by a separate process or machine.
Hibernate Search allows you to tune the Lucene indexing performance by specifying a set of parameters which are passed through to the underlying Lucene IndexWriter, such as mergeFactor, maxMergeDocs and maxBufferedDocs. You can specify these parameters either as default values applying to all indexes, on a per index basis, or even per shard.
There are two sets of parameters allowing for different performance settings depending on the use case. During indexing operations triggered by database modifications, the parameters are grouped by the transaction keyword:
hibernate.search.[default|<indexname>].indexwriter.transaction.<parameter_name>
When indexing occurs via FullTextSession.index() or via a MassIndexer (see Section 6.3, “Rebuilding the whole Index”), the properties used are those grouped under the batch keyword:
hibernate.search.[default|<indexname>].indexwriter.batch.<parameter_name>
If no value is set for a .batch value in a specific shard configuration, Hibernate Search will look at the index section, then at the default section:
hibernate.search.Animals.2.indexwriter.transaction.max_merge_docs 10
hibernate.search.Animals.2.indexwriter.transaction.merge_factor 20
hibernate.search.default.indexwriter.batch.max_merge_docs 100
This configuration will result in these settings applied to the second shard of the Animals index:
transaction.max_merge_docs = 10
batch.max_merge_docs = 100
transaction.merge_factor = 20
batch.merge_factor = Lucene default
All other values will use the defaults defined in Lucene.
The default for all values is to leave them at Lucene's own default, so the listed values in the following table actually depend on the version of Lucene you are using; the values shown are relative to version 2.4. For more information about Lucene indexing performance, please refer to the Lucene documentation.
Previous versions had the batch parameters inherit from the transaction properties. This now needs to be set explicitly.
Table 3.3. List of indexing performance and behavior properties

Property | Description | Default Value |
---|---|---|
hibernate.search.[default|<indexname>].exclusive_index_use | Set to true when no other process will need to write to the same index. This lets Hibernate Search work in exclusive mode on the index, improving performance when writing changes to it. | false (releases locks as soon as possible) |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].max_buffered_delete_terms | Determines the minimal number of delete terms required before the buffered in-memory delete terms are applied and flushed. If there are documents buffered in memory at the time, they are merged and a new segment is created. | Disabled (flushes by RAM usage) |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].max_buffered_docs | Controls the amount of documents buffered in memory during indexing. The bigger the more RAM is consumed. | Disabled (flushes by RAM usage) |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].max_field_length | The maximum number of terms that will be indexed for a single field. This limits the amount of memory required for indexing so that very large data will not crash the indexing process by running out of memory. This setting refers to the number of running terms, not to the number of different terms. This silently truncates large documents, excluding from the index all terms that occur further in the document. If you know your source documents are large, be sure to set this value high enough to accommodate the expected size. If you set it to Integer.MAX_VALUE, then the only limit is your memory, but you should anticipate an OutOfMemoryError. Setting this value differently in batch than in transaction may lead to different data (and results) in your index depending on the indexing mode. | 10000 |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].max_merge_docs | Defines the largest number of documents allowed in a segment. Larger values are best for batched indexing and speedier searches. Small values are best for transaction indexing. | Unlimited (Integer.MAX_VALUE) |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].merge_factor | Controls segment merge frequency and size. Determines how often segment indexes are merged when insertion occurs. With smaller values, less RAM is used while indexing, and searches on unoptimized indexes are faster, but indexing speed is slower. With larger values, more RAM is used during indexing, and while searches on unoptimized indexes are slower, indexing is faster. Thus larger values (> 10) are best for batch index creation, and smaller values (< 10) for indexes that are interactively maintained. The value must not be lower than 2. | 10 |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].ram_buffer_size | Controls the amount of RAM in MB dedicated to document buffers. When used together with max_buffered_docs, a flush occurs for whichever event happens first. Generally for faster indexing performance it's best to flush by RAM usage instead of document count and use as large a RAM buffer as you can. | 16 MB |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].term_index_interval | Expert: Set the interval between indexed terms. Large values cause less memory to be used by IndexReader, but slow random-access to terms. Small values cause more memory to be used by an IndexReader, and speed random-access to terms. See Lucene documentation for more details. | 128 |
hibernate.search.[default|<indexname>].indexwriter.[transaction|batch].use_compound_file | The advantage of using the compound file format is that fewer file descriptors are used. The disadvantage is that indexing takes more time and temporary disk space. You can set this parameter to false in an attempt to improve the indexing time, but you could run out of file descriptors if mergeFactor is also large. Boolean parameter, use "true" or "false". | true |
When your architecture permits it, always set hibernate.search.default.exclusive_index_use=true as it greatly improves efficiency in index writing.
To tune the indexing speed it might be useful to time the object loading from the database in isolation from the writes to the index. To achieve this, set blackhole as worker backend and start your indexing routines. This backend does not disable Hibernate Search: it will still generate the needed changesets to the index, but will discard them instead of flushing them to the index. In contrast to setting hibernate.search.indexing_strategy to manual, using blackhole will possibly load more data to rebuild the index from associated entities.
hibernate.search.worker.backend blackhole
The recommended approach is to focus first on optimizing the object loading, and then use the timings you achieve as a baseline to tune the indexing process.
The blackhole backend is not meant to be used in production, only as a tool to identify indexing bottlenecks.
Lucene Directories have default locking strategies which work well for most cases, but it's possible to specify for each index managed by Hibernate Search which LockingFactory you want to use.
Some of these locking strategies require a filesystem level lock and may be used even on RAM based indexes, but this is not recommended and of no practical use.
To select a locking factory, set the hibernate.search.<index>.locking_strategy option to one of simple, native, single or none, or set it to the fully qualified name of an implementation of org.hibernate.search.store.LockFactoryFactory; by implementing this interface you can provide a custom org.apache.lucene.store.LockFactory.
Table 3.4. List of available LockFactory implementations

name | Class | Description |
---|---|---|
simple | org.apache.lucene.store.SimpleFSLockFactory | Safe implementation based on Java's File API; it marks the usage of the index by creating a marker file. If for some reason you had to kill your application, you will need to remove this file before restarting it. This is the default implementation for the file system based directory providers. |
native | org.apache.lucene.store.NativeFSLockFactory | As does simple, this implementation marks the usage of the index by creating a marker file, but it uses native OS file locks, so the lock is released even if your application crashes. This implementation has known problems on NFS. |
single | org.apache.lucene.store.SingleInstanceLockFactory | This LockFactory doesn't use a file marker but is a Java object lock held in memory; therefore it's possible to use it only when you are sure the index is not going to be shared by any other process. This is the default implementation for RAMDirectoryProvider. |
none | org.apache.lucene.store.NoLockFactory | All changes to this index are not coordinated by any lock; test your application carefully and make sure you know what it means. |
hibernate.search.default.locking_strategy simple
hibernate.search.Animals.locking_strategy native
hibernate.search.Books.locking_strategy org.custom.components.MyLockingFactory
Hibernate Search allows you to configure how exceptions are handled during the indexing process. If no configuration is provided then exceptions are logged to the log output by default. It is possible to explicitly declare the exception logging mechanism as seen below:
hibernate.search.error_handler log
The default exception handling occurs for both synchronous and asynchronous indexing. Hibernate Search provides an easy mechanism to override the default error handling implementation.
In order to provide your own implementation you must implement the ErrorHandler interface, which provides the handle(ErrorContext context) method. The ErrorContext provides a reference to the primary LuceneWork that failed, the underlying exception and any subsequent LuceneWork that could not be processed due to the primary exception.
public interface ErrorContext {
   List<LuceneWork> getFailingOperations();
   LuceneWork getOperationAtFault();
   Throwable getThrowable();
   boolean hasErrors();
}
The following provides an example implementation of ErrorHandler:
public class CustomErrorHandler implements ErrorHandler {
   public void handle ( ErrorContext context ) {
      ...
      //publish error context to some internal error handling system
      ...
   }
}
To register this error handler with Hibernate Search you must declare the CustomErrorHandler fully qualified classname in the configuration properties:

hibernate.search.error_handler CustomErrorHandler
All the metadata information needed to index entities is described through annotations. There is no need for XML mapping files. In fact there is currently no XML configuration option available (see HSEARCH-210). You can still use Hibernate mapping files for the basic Hibernate configuration, but the Hibernate Search specific configuration has to be expressed via annotations.
First, we must declare a persistent class as indexable. This is done by annotating the class with @Indexed (all entities not annotated with @Indexed will be ignored by the indexing process):
Example 4.1. Making a class indexable using the @Indexed annotation
@Entity
@Indexed(index="indexes/essays")
public class Essay {
...
}
The index attribute tells Hibernate what the Lucene directory name is (usually a directory on your file system). It is recommended to define a base directory for all Lucene indexes using the hibernate.search.default.indexBase property in your configuration file. Alternatively you can specify a base directory per indexed entity by specifying hibernate.search.<index>.indexBase, where <index> is the fully qualified classname of the indexed entity. Each entity instance will be represented by a Lucene Document inside the given index (aka Directory).
For each property (or attribute) of your entity, you have the ability to describe how it will be indexed. The default (no annotation present) means that the property is ignored by the indexing process. @Field does declare a property as indexed. When indexing an element to a Lucene document you can specify how it is indexed:
name: describes under which name the property should be stored in the Lucene Document. The default value is the property name (following the JavaBeans convention).
store: describes whether or not the property is stored in the Lucene index. You can store the value Store.YES (consuming more space in the index but allowing projection, see Section 5.1.2.5, “Projection” for more information), store it in a compressed way Store.COMPRESS (this does consume more CPU), or avoid any storage Store.NO (this is the default value). When a property is stored, you can retrieve its original value from the Lucene Document. This is not related to whether the element is indexed or not.
index: describes how the element is indexed and the type of information stored. The different values are Index.NO (no indexing, i.e. cannot be found by a query), Index.TOKENIZED (use an analyzer to process the property), Index.UN_TOKENIZED (no analyzer pre-processing), Index.NO_NORMS (do not store the normalization data). The default value is TOKENIZED.
termVector: describes collections of term-frequency pairs. This attribute enables term vectors to be stored during indexing so they are available within documents. The default value is TermVector.NO.
The different values of this attribute are:
Value | Definition |
---|---|
TermVector.YES | Store the term vectors of each document. This produces two synchronized arrays, one contains document terms and the other contains the term's frequency. |
TermVector.NO | Do not store term vectors. |
TermVector.WITH_OFFSETS | Store the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms. |
TermVector.WITH_POSITIONS | Store the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document. |
TermVector.WITH_POSITION_OFFSETS | Store the term vector, token position and offset information. This is a combination of the YES, WITH_OFFSETS and WITH_POSITIONS. |
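For instance, enabling term vectors via this attribute could look like the following sketch (the field itself is hypothetical):

@Field(index = Index.TOKENIZED, termVector = TermVector.YES)
private String content;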
Whether or not you want to store the original data in the index depends on how you wish to use the index query result. For a regular Hibernate Search usage storing is not necessary. However you might want to store some fields to subsequently project them (see Section 5.1.2.5, “Projection” for more information).
Whether or not you want to tokenize a property depends on whether you wish to search the element as is, or by the words it contains. It makes sense to tokenize a text field, but probably not a date field.
Fields used for sorting must not be tokenized.
Finally, the id property of an entity is a special property used by Hibernate Search to ensure index uniqueness for a given entity. By design, an id has to be stored and must not be tokenized. To mark a property as the index id, use the @DocumentId annotation.
If you are using Hibernate Annotations and you have specified @Id you
can omit @DocumentId. The chosen entity id will also be used as document
id.
Example 4.2. Adding @DocumentId and @Field annotations to an indexed entity
@Entity
@Indexed(index="indexes/essays")
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", index=Index.TOKENIZED, store=Store.YES)
    public String getSummary() { return summary; }

    @Lob
    @Field(index=Index.TOKENIZED)
    public String getText() { return text; }
}
Example 4.2, “Adding @DocumentId and @Field annotations to an indexed entity” defines an index with three fields: id, Abstract and text. Note that by default the field name is decapitalized, following the JavaBean specification.
Sometimes one has to map a property multiple times per index, with slightly different indexing strategies. For example, sorting a query by field requires the field to be UN_TOKENIZED. If one wants to search by words in this property and still sort it, one needs to index it twice - once tokenized and once untokenized. @Fields allows you to achieve this goal.
Example 4.3. Using @Fields to map a property multiple times
@Entity
@Indexed(index = "Book")
public class Book {
    @Fields( {
        @Field(index = Index.TOKENIZED),
        @Field(name = "summary_forSort", index = Index.UN_TOKENIZED, store = Store.YES)
    } )
    public String getSummary() { return summary; }
    ...
}
In Example 4.3, “Using @Fields to map a property multiple times” the field
summary
is indexed twice, once as
summary
in a tokenized way, and once as
summary_forSort
in an untokenized way. @Field supports two attributes that are useful when @Fields is used:
analyzer: defines an @Analyzer annotation per field rather than per property
bridge: defines a @FieldBridge annotation per field rather than per property
See below for more information about analyzers and field bridges.
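For instance, the analyzer attribute lets each field of Example 4.3 use its own analyzer (a sketch; SynonymAnalyzer is a hypothetical analyzer class standing in for whatever implementation you use):
@Fields( {
    @Field(index = Index.TOKENIZED),
    @Field(name = "summary_synonyms", index = Index.TOKENIZED,
           analyzer = @Analyzer(impl = SynonymAnalyzer.class))
} )
public String getSummary() { return summary; }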
Associated objects as well as embedded objects can be indexed as
part of the root entity index. This is useful if you expect to search a
given entity based on properties of associated objects. In the following
example the aim is to return places where the associated city is Atlanta
(In the Lucene query parser language, it would translate into
address.city:Atlanta
).
Example 4.4. Using @IndexedEmbedded to index associations
@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field( index = Index.TOKENIZED )
    private String name;

    @OneToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } )
    @IndexedEmbedded
    private Address address;
    ....
}

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field(index=Index.TOKENIZED)
    private String street;

    @Field(index=Index.TOKENIZED)
    private String city;

    @ContainedIn
    @OneToMany(mappedBy="address")
    private Set<Place> places;
    ...
}
In this example, the place fields will be indexed in the
Place
index. The Place
index
documents will also contain the fields address.id
,
address.street
, and address.city
which you will be able to query. This is enabled by the
@IndexedEmbedded
annotation.
Be careful. Because the data is denormalized in the Lucene index
when using the @IndexedEmbedded
technique,
Hibernate Search needs to be aware of any change in the
Place
object and any change in the
Address
object to keep the index up to date. To
make sure the Place Lucene document is updated when its Address changes, you need to mark the other side of the bidirectional relationship with @ContainedIn.
@ContainedIn
is only useful on associations
pointing to entities as opposed to embedded (collection of)
objects.
Let's make our example a bit more complex:
Example 4.5. Nested usage of @IndexedEmbedded
and
@ContainedIn
@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    @DocumentId
    private Long id;

    @Field( index = Index.TOKENIZED )
    private String name;

    @OneToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } )
    @IndexedEmbedded
    private Address address;
    ....
}

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field(index=Index.TOKENIZED)
    private String street;

    @Field(index=Index.TOKENIZED)
    private String city;

    @IndexedEmbedded(depth = 1, prefix = "ownedBy_")
    private Owner ownedBy;

    @ContainedIn
    @OneToMany(mappedBy="address")
    private Set<Place> places;
    ...
}

@Embeddable
public class Owner {
    @Field(index = Index.TOKENIZED)
    private String name;
    ...
}
Any @*ToMany, @*ToOne
and
@Embedded
attribute can be annotated with
@IndexedEmbedded
. The attributes of the associated
class will then be added to the main entity index. In the previous
example, the index will contain the following fields
id
name
address.street
address.city
address.ownedBy_name
The default prefix is propertyName.
, following
the traditional object navigation convention. You can override it using
the prefix
attribute as it is shown on the
ownedBy
property.
The prefix cannot be set to the empty string.
The depth property is necessary when the object graph contains a cyclic dependency of classes (not instances), for example if Owner points back to Place. Hibernate Search will stop including indexed embedded attributes after reaching the expected depth (or when the object graph boundaries are reached). A class having a self reference is an example of cyclic dependency. In our example, because depth is set to 1, any @IndexedEmbedded attribute in Owner (if any) will be ignored.
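As an illustration, consider a hypothetical self-referencing entity (not part of the examples above); without a depth limit the class cycle could never be fully resolved:
@Entity
@Indexed
public class Person {
    ...
    @ManyToOne
    @IndexedEmbedded(depth = 2) //index the manager and the manager's manager, then stop
    private Person manager;
}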
Using @IndexedEmbedded
for object associations
allows you to express queries such as:
Return places where name contains JBoss and where address city is Atlanta. In Lucene query this would be
+name:jboss +address.city:atlanta
Return places where name contains JBoss and where owner's name contain Joe. In Lucene query this would be
+name:jboss +address.ownedBy_name:joe
In a way it mimics the relational join operation in a more efficient way (at the cost of data duplication). Remember that, out of the box, Lucene indexes have no notion of association, the join operation is simply non-existent. It might help to keep the relational model normalized while benefiting from the full text index speed and feature richness.
An associated object can itself (but does not have to) be @Indexed.
When @IndexedEmbedded points to an entity, the association has to be bidirectional and the other side has to be annotated with @ContainedIn (as seen in the previous example). If
not, Hibernate Search has no way to update the root index when the
associated entity is updated (in our example, a Place
index document has to be updated when the associated
Address
instance is updated).
Sometimes, the object type annotated by
@IndexedEmbedded
is not the object type targeted
by Hibernate and Hibernate Search. This is especially the case when
interfaces are used in lieu of their implementation. For this reason you
can override the object type targeted by Hibernate Search using the
targetElement
parameter.
Example 4.6. Using the targetElement
property of
@IndexedEmbedded
@Entity
@Indexed
public class Address {
@Id
@GeneratedValue
@DocumentId
private Long id;
@Field(index= Index.TOKENIZED)
private String street;
@IndexedEmbedded(depth = 1, prefix = "ownedBy_", targetElement = Owner.class)
@Target(Owner.class)
private Person ownedBy;
...
}
@Embeddable
public class Owner implements Person { ... }
Lucene has the notion of boost factor. It's a way to give more weight to a field or to an indexed element over others during the indexing process. You can use @Boost at the @Field, method or class level.
Example 4.7. Using different ways of increasing the weight of an indexed element using a boost factor
@Entity
@Indexed(index="indexes/essays")
@Boost(1.7f)
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", index=Index.TOKENIZED, store=Store.YES, boost=@Boost(2f))
    @Boost(1.5f)
    public String getSummary() { return summary; }

    @Lob
    @Field(index=Index.TOKENIZED, boost=@Boost(1.2f))
    public String getText() { return text; }

    @Field
    public String getISBN() { return isbn; }
}
In our example, Essay's probability to reach the top of the search list will be multiplied by 1.7. The summary field will be 3.0 times (2 * 1.5, because @Field.boost and @Boost on a property are cumulative) more important than the isbn field. The text field will be 1.2 times more important than the isbn field. Note that this explanation is, in strictest terms, actually wrong, but it is simple and close enough to reality for all practical purposes. Please check the Lucene documentation or the excellent Lucene In Action by Otis Gospodnetic and Erik Hatcher.
The @Boost annotation used in Section 4.1.4, “Boost factor” defines a static boost factor which is independent of the state of the indexed entity at runtime. However, there are use cases in which the boost factor may depend on the actual state of the entity. In this case you can use the @DynamicBoost annotation together with an accompanying custom BoostStrategy.
Example 4.8. Dynamic boost example
public enum PersonType {
    NORMAL,
    VIP
}

@Entity
@Indexed
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
    private PersonType type;
    // ....
}

public class VIPBoostStrategy implements BoostStrategy {
    public float defineBoost(Object value) {
        Person person = ( Person ) value;
        if ( person.getType().equals( PersonType.VIP ) ) {
            return 2.0f;
        }
        else {
            return 1.0f;
        }
    }
}
In Example 4.8, “Dynamic boost example” a dynamic boost is defined at class level, specifying VIPBoostStrategy as the implementation of the BoostStrategy interface to be used at indexing time. You can place the @DynamicBoost either at class or field level. Depending on the placement of the annotation either the whole entity is passed to the defineBoost method or just the annotated field/property value. It's up to you to cast the passed object to the correct type. In the example all indexed values of a VIP person would be twice as important as the values of a normal person.
The specified BoostStrategy
implementation must define a public no-arg constructor.
Of course you can mix and match @Boost and @DynamicBoost annotations in your entity. All defined boost factors are cumulative as described in Section 4.1.4, “Boost factor”.
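For example, nothing prevents the Person entity of Example 4.8 from carrying a static boost as well (a sketch):
@Entity
@Indexed
@Boost(1.5f)
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
    ...
}
With this mapping the effective class boost of a VIP person would be 1.5f * 2.0f = 3.0f.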
The default analyzer class used to index tokenized fields is
configurable through the hibernate.search.analyzer
property. The default value for this property is
org.apache.lucene.analysis.standard.StandardAnalyzer
.
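For example, in hibernate.properties (the analyzer class shown is just an illustration):
hibernate.search.analyzer = org.apache.lucene.analysis.SimpleAnalyzer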
You can also define the analyzer class per entity, property and even per @Field (useful when multiple fields are indexed from a single property).
Example 4.9. Different ways of specifying an analyzer
@Entity
@Indexed
@Analyzer(impl = EntityAnalyzer.class)
public class MyEntity {
    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field(index = Index.TOKENIZED)
    private String name;

    @Field(index = Index.TOKENIZED)
    @Analyzer(impl = PropertyAnalyzer.class)
    private String summary;

    @Field(index = Index.TOKENIZED, analyzer = @Analyzer(impl = FieldAnalyzer.class))
    private String body;
    ...
}
In this example, EntityAnalyzer
is used to
index all tokenized properties (eg. name
), except
summary
and body
which are indexed
with PropertyAnalyzer
and
FieldAnalyzer
respectively.
Mixing different analyzers in the same entity is most of the time a bad practice. It makes query building more complex and results less predictable (for the novice), especially if you are using a QueryParser (which uses the same analyzer for the whole query). As a rule of thumb, for any given field the same analyzer should be used for indexing and querying.
Analyzers can become quite complex to deal with, for which reason Hibernate Search introduces the notion of analyzer definitions. An analyzer definition can be reused by many @Analyzer declarations. An analyzer definition is composed of:
a name: the unique string used to refer to the definition
a list of char filters: each char filter is responsible for pre-processing input characters before the tokenization. Char filters can add, change or remove characters; one common usage is character normalization
a tokenizer: responsible for tokenizing the input stream into individual words
a list of filters: each filter is responsible for removing, modifying or sometimes even adding words into the stream provided by the tokenizer
This separation of tasks - a list of char filters, and a tokenizer followed by a list of filters - allows for easy reuse of each individual component and lets you build your customized analyzer in a very flexible way (just like Lego). Generally speaking the char filters do some pre-processing on the character input, then the Tokenizer starts the tokenizing process by turning the character input into tokens which are then further processed by the TokenFilters.
Hibernate Search supports this infrastructure by utilizing the Solr analyzer framework. Make sure to add solr-core.jar and solr-solrj.jar to your classpath to use analyzer definitions. In case you want to use the snowball stemmer, also include lucene-snowball.jar.
Other Solr analyzers might
depend on more libraries. For example, the
PhoneticFilterFactory
depends on commons-codec. Your
distribution of Hibernate Search provides these dependencies in its
lib
directory.
Example 4.10. @AnalyzerDef
and the Solr
framework
@AnalyzerDef(name="customanalyzer",
    charFilters = {
        @CharFilterDef(factory = MappingCharFilterFactory.class, params = {
            @Parameter(name = "mapping",
                value = "org/hibernate/search/test/analyzer/solr/mapping-chars.properties")
        })
    },
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = ISOLatin1AccentFilterFactory.class),
        @TokenFilterDef(factory = LowerCaseFilterFactory.class),
        @TokenFilterDef(factory = StopFilterFactory.class, params = {
            @Parameter(name="words",
                value= "org/hibernate/search/test/analyzer/solr/stoplist.properties" ),
            @Parameter(name="ignoreCase", value="true")
        })
    })
public class Team {
    ...
}
A char filter is defined by its factory which is responsible for building the char filter and using the optional list of parameters. In our example, a mapping char filter is used, which will replace characters in the input based on the rules specified in the mapping file. A tokenizer is also defined by its factory. This example uses the standard tokenizer. A filter is defined by its factory which is responsible for creating the filter instance using the optional parameters. In our example, the StopFilter filter is built reading the dedicated words property file and is expected to ignore case. The list of parameters is dependent on the tokenizer or filter factory.
Filters and char filters are applied in the order they are defined in the
@AnalyzerDef
annotation. Make sure to think
twice about this order.
Once defined, an analyzer definition can be reused by an
@Analyzer
declaration using the definition name
rather than declaring an implementation class.
Example 4.11. Referencing an analyzer by name
@Entity
@Indexed
@AnalyzerDef(name="customanalyzer", ... )
public class Team {
@Id
@DocumentId
@GeneratedValue
private Integer id;
@Field
private String name;
@Field
private String location;
@Field @Analyzer(definition = "customanalyzer")
private String description;
}
Analyzer instances declared by
@AnalyzerDef
are available by their name in the
SearchFactory
.
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");
This is quite useful when building queries. Fields in queries should be analyzed with the same analyzer used to index the field so that they speak a common "language": the same tokens are reused between the query and the indexing process. This rule has some exceptions but is true most of the time. Respect it unless you know what you are doing.
Solr and Lucene come with a lot of useful default char filters, tokenizers and filters. You can find a complete list of char filter factories, tokenizer factories and filter factories at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters. Let's check a few of them.
Table 4.1. Some of the available char filters
Factory | Description | Parameters |
---|---|---|
MappingCharFilterFactory | Replaces one or more characters with one or more characters, based on mappings specified in the resource file | mapping: points to a resource file containing the mappings |
HTMLStripCharFilterFactory | Remove HTML standard tags, keeping the text | none |
Table 4.2. Some of the available tokenizers
Factory | Description | Parameters |
---|---|---|
StandardTokenizerFactory | Use the Lucene StandardTokenizer | none |
HTMLStripStandardTokenizerFactory | Remove HTML tags, keep the text and pass it to a StandardTokenizer. @Deprecated, use the HTMLStripCharFilterFactory instead | none |
Table 4.3. Some of the available filters
Factory | Description | Parameters |
---|---|---|
StandardFilterFactory | Remove dots from acronyms and 's from words | none |
LowerCaseFilterFactory | Lowercase words | none |
StopFilterFactory | Remove words (tokens) matching a list of stop words | words: points to a resource file containing the stop words; ignoreCase: true if case should be ignored when comparing stop words, false otherwise |
SnowballPorterFilterFactory | Reduces a word to its root in a given language (eg. protect, protects and protection share the same root). Using such a filter allows searches matching related words. | language: Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, Swedish and a few more |
ISOLatin1AccentFilterFactory | Remove accents for languages like French | none |
We recommend browsing the implementations of org.apache.solr.analysis.TokenizerFactory and org.apache.solr.analysis.TokenFilterFactory in your IDE to see which ones are available.
So far all the introduced ways to specify an analyzer were static. However, there are use cases where it is useful to select an analyzer depending on the current state of the entity to be indexed, for example in multilingual applications. For a BlogEntry class, for example, the analyzer could depend on the language property of the entry. Depending on this property the correct language specific stemmer should be chosen to index the actual text.
To enable this dynamic analyzer selection Hibernate Search
introduces the AnalyzerDiscriminator
annotation. The following example demonstrates the usage of this
annotation:
Example 4.12. Usage of @AnalyzerDiscriminator in order to select an analyzer depending on the entity state
@Entity
@Indexed
@AnalyzerDefs({
    @AnalyzerDef(name = "en",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
        }),
    @AnalyzerDef(name = "de",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = GermanStemFilterFactory.class)
        })
})
public class BlogEntry {
    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
    private String language;

    @Field
    private String text;

    private Set<BlogEntry> references;

    // standard getter/setter
    ...
}
public class LanguageDiscriminator implements Discriminator {
    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        if ( value == null || !( entity instanceof BlogEntry ) ) {
            return null;
        }
        return (String) value;
    }
}
The prerequisite for using
@AnalyzerDiscriminator
is that all analyzers
which are going to be used are predefined via
@AnalyzerDef
definitions. If this is the case
one can place the @AnalyzerDiscriminator
annotation either on the class or on a specific property of the entity
for which to dynamically select an analyzer. Via the
impl
parameter of the
AnalyzerDiscriminator
you specify a concrete
implementation of the Discriminator
interface.
It is up to you to provide an implementation for this interface. The
only method you have to implement is
getAnalyzerDefinitionName()
which gets called
for each field added to the Lucene document. The entity which is
getting indexed is also passed to the interface method. The
value
parameter is only set if the
AnalyzerDiscriminator
is placed on property
level instead of class level. In this case the value represents the
current value of this property.
An implementation of the Discriminator interface has to return the name of an existing analyzer definition if the analyzer should be set dynamically, or null if the default analyzer should not be overridden. The given example assumes that the language parameter is either 'de' or 'en', which matches the specified names in the @AnalyzerDefs.
The @AnalyzerDiscriminator
is currently
still experimental and the API might still change. We are hoping for
some feedback from the community about the usefulness and usability
of this feature.
During indexing time, Hibernate Search is using analyzers under the hood for you. In some situations, retrieving analyzers can be handy. If your domain model makes use of multiple analyzers (maybe to benefit from stemming, use phonetic approximation and so on), you need to make sure to use the same analyzers when you build your query.
This rule can be broken but you need a good reason for it. If you are unsure, use the same analyzers.
You can retrieve the scoped analyzer for a given entity used at indexing time by Hibernate Search. A scoped analyzer is an analyzer which applies the right analyzers depending on the field indexed: multiple analyzers can be defined on a given entity, each one working on an individual field; a scoped analyzer unifies all these analyzers into a context-aware analyzer. While the theory seems a bit complex, using the right analyzer in a query is very easy.
Example 4.13. Using the scoped analyzer when building a full-text query
org.apache.lucene.queryParser.QueryParser parser = new QueryParser(
    "title",
    fullTextSession.getSearchFactory().getAnalyzer( Song.class )
);

org.apache.lucene.search.Query luceneQuery =
    parser.parse( "title:sky OR title_stemmed:diamond" );

org.hibernate.Query fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Song.class );

List result = fullTextQuery.list(); //return a list of managed objects
In the example above, the song title is indexed in two fields:
the standard analyzer is used in the field title
and a stemming analyzer is used in the field
title_stemmed
. By using the analyzer provided by
the search factory, the query uses the appropriate analyzer depending
on the field targeted.
If your query targets more than one entity and you wish to use your standard analyzer, make sure to describe it using an analyzer definition. You can retrieve analyzers by their definition name using searchFactory.getAnalyzer(String).
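For instance (a minimal sketch, assuming the "customanalyzer" definition from Example 4.11):
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer( "customanalyzer" );
org.apache.lucene.queryParser.QueryParser parser = new QueryParser( "description", analyzer );
org.apache.lucene.search.Query luceneQuery = parser.parse( "description:hibernate" );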
In Lucene all index fields have to be represented as Strings. For this reason all entity properties annotated with @Field have to be indexed in a String form. For most of your properties, Hibernate Search does the translation job for you thanks to a built-in set of bridges. In some cases, though, you need more fine-grained control over the translation process.
Hibernate Search comes bundled with a set of built-in bridges between a Java property type and its full text representation.
null elements are not indexed. Lucene does not support null elements and this does not make much sense either.
Strings are indexed as is
Numbers are converted into their String representation. Note that numbers cannot be compared by Lucene (i.e. used in ranged queries) out of the box: they have to be padded
Using a Range query is debatable and has drawbacks; an alternative approach is to use a Filter query which will filter the result query to the appropriate range.
Hibernate Search will support a padding mechanism
Dates are stored as yyyyMMddHHmmssSSS in GMT time (20061107210300012 for Nov 7th of 2006, 4:03PM and 12ms EST). You shouldn't really bother with the internal format. What is important is that when using a DateRange Query, you should know that the dates have to be expressed in GMT time.
Usually, storing the date up to the millisecond is not necessary. @DateBridge defines the appropriate resolution you are willing to store in the index (e.g. @DateBridge(resolution=Resolution.DAY)). The date pattern will then be truncated accordingly.
@Entity
@Indexed
public class Meeting {
    @Field(index=Index.UN_TOKENIZED)
    @DateBridge(resolution=Resolution.MINUTE)
    private Date date;
    ...
}
A Date whose resolution is lower than MILLISECOND cannot be a @DocumentId.
URI and URL are converted to their string representation
Classes are converted to their fully qualified class name. The thread context classloader is used when the class is rehydrated
Sometimes, the built-in bridges of Hibernate Search do not cover some of your property types, or the String representation used by the bridge does not meet your requirements. The following paragraphs describe several solutions to this problem.
The simplest custom solution is to give Hibernate Search an implementation of your expected Object to String bridge. To do so you need to implement the org.hibernate.search.bridge.StringBridge interface. All implementations have to be thread-safe as they are used concurrently.
Example 4.14. Implementing your own
StringBridge
/**
 * Padding Integer bridge.
 * All numbers will be padded with 0 to match 5 digits
 *
 * @author Emmanuel Bernard
 */
public class PaddedIntegerBridge implements StringBridge {

    private int PADDING = 5;

    public String objectToString(Object object) {
        String rawInteger = ( (Integer) object ).toString();
        if (rawInteger.length() > PADDING)
            throw new IllegalArgumentException( "Try to pad on a number too big" );
        StringBuilder paddedInteger = new StringBuilder( );
        for ( int padIndex = rawInteger.length() ; padIndex < PADDING ; padIndex++ ) {
            paddedInteger.append('0');
        }
        return paddedInteger.append( rawInteger ).toString();
    }
}
Then any property or field can use this bridge thanks to the
@FieldBridge
annotation
@FieldBridge(impl = PaddedIntegerBridge.class)
private Integer length;
Parameters can be passed to the bridge implementation, making it more flexible. The bridge implementation implements the ParameterizedBridge interface, and the parameters are passed through the @FieldBridge annotation.
Example 4.15. Passing parameters to your bridge implementation
public class PaddedIntegerBridge implements StringBridge, ParameterizedBridge {

    public static String PADDING_PROPERTY = "padding";
    private int padding = 5; //default

    public void setParameterValues(Map parameters) {
        Object padding = parameters.get( PADDING_PROPERTY );
        //parameter values are passed as Strings by @Parameter
        if (padding != null) this.padding = Integer.parseInt( (String) padding );
    }

    public String objectToString(Object object) {
        String rawInteger = ( (Integer) object ).toString();
        if (rawInteger.length() > padding)
            throw new IllegalArgumentException( "Try to pad on a number too big" );
        StringBuilder paddedInteger = new StringBuilder( );
        for ( int padIndex = rawInteger.length() ; padIndex < padding ; padIndex++ ) {
            paddedInteger.append('0');
        }
        return paddedInteger.append( rawInteger ).toString();
    }
}

//property
@FieldBridge(impl = PaddedIntegerBridge.class,
             params = @Parameter(name="padding", value="10") )
private Integer length;
The ParameterizedBridge
interface can be
implemented by StringBridge
,
TwoWayStringBridge
,
FieldBridge
implementations.
All implementations have to be thread-safe, but the parameters are set during initialization and no special care is required at this stage.
If you expect to use your bridge implementation on an id property (i.e. annotated with @DocumentId), you need
to use a slightly extended version of StringBridge
named TwoWayStringBridge
. Hibernate Search
needs to read the string representation of the identifier and generate
the object out of it. There is no difference in the way the
@FieldBridge
annotation is used.
Example 4.16. Implementing a TwoWayStringBridge which can for example be used for id properties
public class PaddedIntegerBridge implements TwoWayStringBridge, ParameterizedBridge {
public static String PADDING_PROPERTY = "padding";
private int padding = 5; //default
public void setParameterValues(Map parameters) {
Object padding = parameters.get( PADDING_PROPERTY );
if (padding != null) this.padding = Integer.parseInt( (String) padding ); //parameter values are Strings
}
public String objectToString(Object object) {
String rawInteger = ( (Integer) object ).toString();
if (rawInteger.length() > padding)
throw new IllegalArgumentException( "Try to pad on a number too big" );
StringBuilder paddedInteger = new StringBuilder( );
for ( int padIndex = rawInteger.length() ; padIndex < padding ; padIndex++ ) {
paddedInteger.append('0');
}
return paddedInteger.append( rawInteger ).toString();
}
public Object stringToObject(String stringValue) {
return new Integer(stringValue);
}
}
//id property
@DocumentId
@FieldBridge(impl = PaddedIntegerBridge.class,
             params = @Parameter(name="padding", value="10") )
private Integer id;
It is critically important for the two-way process to be idempotent (i.e. object = stringToObject( objectToString( object ) ) ).
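A quick way to convince yourself of this property is a round-trip check (a sketch reusing the PaddedIntegerBridge above; run with assertions enabled):
TwoWayStringBridge bridge = new PaddedIntegerBridge();
Integer original = Integer.valueOf( 42 );
Object roundTripped = bridge.stringToObject( bridge.objectToString( original ) );
assert original.equals( roundTripped ); //must hold for any valid input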
Some use cases require more than a simple object to string translation when mapping a property to a Lucene index. To give you the greatest possible flexibility you can also implement a bridge as a FieldBridge. This interface gives you a property value and lets you map it the way you want in your Lucene Document. The interface is very similar in its concept to the Hibernate UserTypes.
You can for example store a given property in two different document fields:
Example 4.17. Implementing the FieldBridge interface in order to map a given property into multiple document fields
/**
 * Store the date in 3 different fields - year, month, day - to ease Range Query per
 * year, month or day (eg get all the elements of December for the last 5 years).
 *
 * @author Emmanuel Bernard
 */
public class DateSplitBridge implements FieldBridge {
    private final static TimeZone GMT = TimeZone.getTimeZone("GMT");

    public void set(String name, Object value, Document document,
                    LuceneOptions luceneOptions) {
        Date date = (Date) value;
        Calendar cal = GregorianCalendar.getInstance(GMT);
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        int month = cal.get(Calendar.MONTH) + 1;
        int day = cal.get(Calendar.DAY_OF_MONTH);

        // set year
        luceneOptions.addFieldToDocument(
            name + ".year", String.valueOf( year ), document );

        // set month and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".month", ( month < 10 ? "0" : "" ) + month, document );

        // set day and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".day", ( day < 10 ? "0" : "" ) + day, document );
    }
}

//property
@FieldBridge(impl = DateSplitBridge.class)
private Date date;
In the previous example the fields were not added directly to the Document; instead we delegated this task to the LuceneOptions helper, which will apply the options you have selected on @Field, like the Store or TermVector options, or apply the chosen @Boost value. It is especially useful to encapsulate the complexity of COMPRESS implementations, so it is recommended to delegate to LuceneOptions to add fields to the Document, but nothing stops you from editing the Document directly and ignoring the LuceneOptions in case you need to.
Classes like LuceneOptions
are created to shield your application from
changes in Lucene API and simplify your code. Use them if you can, but if you need more flexibility
you're not required to.
It is sometimes useful to combine more than one property of a given entity and index this combination in a specific way into the Lucene index. The @ClassBridge and @ClassBridges annotations can be defined at the class level (as opposed to the property level). In this case the custom field bridge implementation receives the entity instance as the value parameter instead of a particular property. Though not shown in this example, @ClassBridge supports the termVector attribute discussed in Section 4.1.1, “Basic mapping”.
Example 4.18. Implementing a class bridge
@Entity
@Indexed
@ClassBridge(name="branchnetwork",
             index=Index.TOKENIZED,
             store=Store.YES,
             impl = CatFieldsClassBridge.class,
             params = @Parameter( name="sepChar", value=" " ) )
public class Department {
    private int id;
    private String network;
    private String branchHead;
    private String branch;
    private Integer maxEmployees;
    ...
}

public class CatFieldsClassBridge implements FieldBridge, ParameterizedBridge {
    private String sepChar;

    public void setParameterValues(Map parameters) {
        this.sepChar = (String) parameters.get( "sepChar" );
    }

    public void set(String name, Object value, Document document,
                    LuceneOptions luceneOptions) {
        // In this particular class the name of the new field was passed
        // from the name field of the ClassBridge Annotation. This is not
        // a requirement. It just works that way in this instance. The
        // actual name could be supplied by hard coding it below.
        Department dep = (Department) value;
        String fieldValue1 = dep.getBranch();
        if ( fieldValue1 == null ) {
            fieldValue1 = "";
        }
        String fieldValue2 = dep.getNetwork();
        if ( fieldValue2 == null ) {
            fieldValue2 = "";
        }
        String fieldValue = fieldValue1 + sepChar + fieldValue2;
        Field field = new Field( name, fieldValue, luceneOptions.getStore(),
            luceneOptions.getIndex(), luceneOptions.getTermVector() );
        field.setBoost( luceneOptions.getBoost() );
        document.add( field );
    }
}
In this example, the particular CatFieldsClassBridge is applied to the department instance; the field bridge then concatenates both branch and network and indexes the concatenation.
This part of the documentation is a work in progress.
You can provide your own id for Hibernate Search if you are extending the internals. You will have to generate a unique value so it can be given to Lucene to be indexed. This will have to be given to Hibernate Search when you create an org.hibernate.search.Work object - the document id is required in the constructor.
Unlike the conventional Hibernate Search API and @DocumentId, this annotation is used on the class and not a field. You can also provide your own bridge implementation when you use this annotation, via the bridge() attribute of @ProvidedId. Also, if you annotate a class with @ProvidedId, your subclasses will also get the annotation - but it is not done by using java.lang.annotation.@Inherited. Be sure, however, not to use this annotation together with @DocumentId, as your system will break.
Example 4.19. Providing your own id
@ProvidedId(bridge = org.my.own.package.MyCustomBridge)
@Indexed
public class MyClass {
    @Field
    String MyString;
    ...
}
This feature is considered experimental. While stable code-wise, the API is subject to change in the future.
Although the recommended approach for mapping indexed entities is to use annotations, it is sometimes more convenient to use a different approach:
the same entity is mapped differently depending on deployment needs (customization for clients)
an automated process requires the dynamic mapping of many entities sharing common traits
While it has been a popular demand in the past, the Hibernate team never found the idea of an XML alternative to annotations appealing due to its heavy duplication, lack of code refactoring safety, because it did not cover the whole use case spectrum and because we are in the 21st century :)
The idea of a programmatic API was much more appealing and has now become a reality. You can programmatically and safely define your mapping using a programmatic API: you define entities and fields as indexable by using mapping classes which effectively mirror the annotation concepts in Hibernate Search. Note that fans of the XML approach can design their own schema and use the programmatic API to create the mapping while parsing the XML stream.
In order to use the programmatic model you must first construct a
SearchMapping
object. This object is passed to
Hibernate Search via a property set to the Configuration
object. The property key is
hibernate.search.model_mapping
or it's type-safe
representation Environment.MODEL_MAPPING
.
SearchMapping mapping = new SearchMapping();
[...]
configuration.setProperty( Environment.MODEL_MAPPING, mapping );

//or in JPA
SearchMapping mapping = new SearchMapping();
[...]
Map<String,Object> properties = new HashMap<String,Object>(1);
properties.put( Environment.MODEL_MAPPING, mapping );
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "userPU", properties );
The SearchMapping
is the root object which
contains all the necessary indexable entities and fields. From there, the
SearchMapping
object exposes a fluent (and thus
intuitive) API to express your mappings: it contextually exposes the
relevant mapping options in a type-safe way, just let your IDE
autocompletion feature guide you through.
Today, the programmatic API cannot be used on a class annotated with Hibernate Search annotations; choose one approach or the other. Also note that the same default values apply in annotations and the programmatic API. For example, the @Field.name is defaulted to the property name and does not have to be set.
Each core concept of the programmatic API has a corresponding example to depict how the same definition would look using annotation. Therefore seeing an annotation example of the programmatic approach should give you a clear picture of what Hibernate Search will build with the marked entities and associated properties.
The first concept of the programmatic API is to define an entity
as indexable. Using the annotation approach a user would mark the entity
as @Indexed
, the following example demonstrates
how to programmatically achieve this.
Example 4.20. Marking an entity indexable
SearchMapping mapping = new SearchMapping();
mapping.entity(Address.class)
    .indexed()
    .indexName("Address_Index"); //optional

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
As you can see you must first create a SearchMapping object, which is the root object that is then passed to the Configuration object as a property. You must declare an entity and if you wish to make that entity indexable then you must call the indexed() method. The indexed() method has an optional indexName(String indexName) method which can be used to change the default index name that is created by Hibernate Search. Using the annotation model the above can be achieved as:
Example 4.21. Annotation example of indexing entity
@Entity @Indexed(index="Address_Index") public class Address { .... }
To set a property as a document id:
Example 4.22. Enabling document id with programmatic model
SearchMapping mapping = new SearchMapping();
mapping.entity(Address.class).indexed()
    .property("addressId", ElementType.FIELD) //field access
        .documentId()
            .name("id");

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above is equivalent to annotating a property in the entity
as @DocumentId
as seen in the following
example:
Example 4.23. DocumentId annotation definition
@Entity
@Indexed
public class Address {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    private Long addressId;
    ....
}
The next section demonstrates how to programmatically define
analyzers.
Analyzers can be programmatically defined using the
analyzerDef(String analyzerDef, Class<? extends
TokenizerFactory> tokenizerFactory)
method. This method
also enables you to define filters for the analyzer definition. Each
filter that you define can optionally take in parameters as seen in the
following example:
Example 4.24. Defining analyzers using programmatic model
SearchMapping mapping = new SearchMapping();
mapping
.analyzerDef( "ngram", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( NGramFilterFactory.class )
.param( "minGramSize", "3" )
.param( "maxGramSize", "3" )
.analyzerDef( "en", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( EnglishPorterFilterFactory.class )
.analyzerDef( "de", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( GermanStemFilterFactory.class )
.entity(Address.class).indexed()
.property("addressId", ElementType.METHOD) //getter access
.documentId()
.name("id");
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The analyzer mapping defined above is equivalent to the
annotation model using @AnalyzerDef
in
conjunction with @AnalyzerDefs
:
Example 4.25. Analyzer definition using annotation
@Indexed
@Entity
@AnalyzerDefs({
    @AnalyzerDef(name = "ngram",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = NGramFilterFactory.class,
                params = {
                    @Parameter(name = "minGramSize", value = "3"),
                    @Parameter(name = "maxGramSize", value = "3")
                })
        }),
    @AnalyzerDef(name = "en",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
        }),
    @AnalyzerDef(name = "de",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = GermanStemFilterFactory.class)
        })
})
public class Address {
    ...
}
The programmatic API provides an easy mechanism for defining full text filter definitions, which is available via @FullTextFilterDef and @FullTextFilterDefs. Note that contrary to the annotation equivalent, full text filter definitions are a global construct and are not tied to an entity. The next example depicts the creation of a full text filter definition using the fullTextFilterDef method.
Example 4.26. Defining full text definition programmatically
SearchMapping mapping = new SearchMapping();
mapping
.analyzerDef( "en", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( EnglishPorterFilterFactory.class )
.fullTextFilterDef("security", SecurityFilterFactory.class)
.cache(FilterCacheModeType.INSTANCE_ONLY)
.entity(Address.class)
.indexed()
.property("addressId", ElementType.METHOD)
.documentId()
.name("id")
.property("street1", ElementType.METHOD)
.field()
.analyzer("en")
.store(Store.YES)
.field()
.name("address_data")
.analyzer("en")
.store(Store.NO);
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The previous example can effectively be seen as annotating your entity with @FullTextFilterDef like below:
Example 4.27. Using annotation to define full text filter definition
@Entity
@Indexed
@AnalyzerDefs({
    @AnalyzerDef(name = "en",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
        })
})
@FullTextFilterDefs({
    @FullTextFilterDef(name = "security",
        impl = SecurityFilterFactory.class,
        cache = FilterCacheModeType.INSTANCE_ONLY)
})
public class Address {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    public Long getAddressId() {...}

    @Fields({
        @Field(index=Index.TOKENIZED, store=Store.YES,
               analyzer=@Analyzer(definition="en")),
        @Field(name="address_data", analyzer=@Analyzer(definition="en"))
    })
    public String getAddress1() {...}
    ......
}
When defining fields for indexing using the programmatic API, call
field()
on the property(String
propertyName, ElementType elementType)
method. From
field()
you can specify the name,
index
, store
,
bridge
and analyzer
definitions.
Example 4.28. Indexing fields using programmatic API
SearchMapping mapping = new SearchMapping();
mapping
.analyzerDef( "en", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( EnglishPorterFilterFactory.class )
.entity(Address.class).indexed()
.property("addressId", ElementType.METHOD)
.documentId()
.name("id")
.property("street1", ElementType.METHOD)
.field()
.analyzer("en")
.store(Store.YES)
.index(Index.TOKENIZED) //not useful here as it's the default
.field()
.name("address_data")
.analyzer("en");
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above example of marking fields as indexable is equivalent
to defining fields using @Field
as seen
below:
Example 4.29. Indexing fields using annotation
@Entity
@Indexed
@AnalyzerDefs({
    @AnalyzerDef(name = "en",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
        })
})
public class Address {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    public Long getAddressId() {...}

    @Fields({
        @Field(index=Index.TOKENIZED, store=Store.YES,
               analyzer=@Analyzer(definition="en")),
        @Field(name="address_data", analyzer=@Analyzer(definition="en"))
    })
    public String getAddress1() {...}
    ......
}
In this section you will see how to programmatically define entities to be embedded into the indexed entity, similar to using the @IndexedEmbedded model. In order to define this you must mark the property as indexEmbedded. There is also the option to add a prefix to the embedded entity definition, which can be done by calling prefix as seen in the example below:
Example 4.30. Programmatically defining embedded entites
SearchMapping mapping = new SearchMapping();
mapping
.entity(ProductCatalog.class)
.indexed()
.property("catalogId", ElementType.METHOD)
.documentId()
.name("id")
.property("title", ElementType.METHOD)
.field()
.index(Index.TOKENIZED)
.store(Store.NO)
.property("description", ElementType.METHOD)
.field()
.index(Index.TOKENIZED)
.store(Store.NO)
.property("items", ElementType.METHOD)
.indexEmbedded()
.prefix("catalog.items"); //optional
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The next example shows the same definition using annotations (@IndexedEmbedded):
Example 4.31. Using @IndexedEmbedded
@Entity
@Indexed
public class ProductCatalog {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    public Long getCatalogId() {...}

    @Field(store=Store.NO, index=Index.TOKENIZED)
    public String getTitle() {...}

    @Field(store=Store.NO, index=Index.TOKENIZED)
    public String getDescription() {...}

    @OneToMany(fetch = FetchType.LAZY)
    @IndexColumn(name = "list_position")
    @Cascade(org.hibernate.annotations.CascadeType.ALL)
    @IndexedEmbedded(prefix="catalog.items")
    public List<Item> getItems() {...}
    ...
}
@ContainedIn can be defined as seen in the example below:
Example 4.32. Programmatically defining ContainedIn
SearchMapping mapping = new SearchMapping();
mapping
.entity(ProductCatalog.class)
.indexed()
.property("catalogId", ElementType.METHOD)
.documentId()
.property("title", ElementType.METHOD)
.field()
.property("description", ElementType.METHOD)
.field()
.property("items", ElementType.METHOD)
.indexEmbedded()
.entity(Item.class)
.property("description", ElementType.METHOD)
.field()
.property("productCatalog", ElementType.METHOD)
.containedIn();
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
This is equivalent to defining
@ContainedIn
in your entity:
Example 4.33. Annotation approach for ContainedIn
@Entity
@Indexed
public class ProductCatalog {
    @Id
    @GeneratedValue
    @DocumentId
    public Long getCatalogId() {...}

    @Field
    public String getTitle() {...}

    @Field
    public String getDescription() {...}

    @OneToMany(fetch = FetchType.LAZY)
    @IndexColumn(name = "list_position")
    @Cascade(org.hibernate.annotations.CascadeType.ALL)
    @IndexedEmbedded
    public List<Item> getItems() {...}
    ...
}

@Entity
public class Item {
    @Id
    @GeneratedValue
    private Long itemId;

    @Field
    public String getDescription() {...}

    @ManyToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } )
    @ContainedIn
    public ProductCatalog getProductCatalog() {...}
    ...
}
In order to define a calendar or date bridge mapping, call the
dateBridge(Resolution resolution)
or
calendarBridge(Resolution resolution)
methods
after you have defined a field()
in the
SearchMapping
hierarchy.
Example 4.34. Programmatic model for defining calendar/date bridge
SearchMapping mapping = new SearchMapping();
mapping
    .entity(Address.class)
        .indexed()
        .property("addressId", ElementType.FIELD)
            .documentId()
        .property("street1", ElementType.FIELD)
            .field()
        .property("createdOn", ElementType.FIELD)
            .field()
                .dateBridge(Resolution.DAY)
        .property("lastUpdated", ElementType.FIELD)
            .calendarBridge(Resolution.DAY);

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
See below for defining the above using
@CalendarBridge
and
@DateBridge
:
Example 4.35. @CalendarBridge and @DateBridge definition
@Entity
@Indexed
public class Address {
    @Id
    @GeneratedValue
    @DocumentId
    private Long addressId;

    @Field
    private String address1;

    @Field
    @DateBridge(resolution=Resolution.DAY)
    private Date createdOn;

    @CalendarBridge(resolution=Resolution.DAY)
    private Calendar lastUpdated;
    ...
}
It is possible to associate bridges to programmatically defined fields. When you define a field() programmatically you can use bridge(Class<?> impl) to associate a FieldBridge implementation class. The bridge method also provides optional methods to include any parameters required for the bridge class. The example below shows how to programmatically define a bridge:
Example 4.36. Defining field bridges programmatically
SearchMapping mapping = new SearchMapping();
mapping
.entity(Address.class)
.indexed()
.property("addressId", ElementType.FIELD)
.documentId()
.property("street1", ElementType.FIELD)
.field()
.field()
.name("street1_abridged")
.bridge( ConcatStringBridge.class )
.param( "size", "4" );
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above can equally be defined using annotations, as seen in the next example.
Example 4.37. Defining field bridges using annotation
@Entity
@Indexed
public class Address {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    private Long addressId;

    @Fields({
        @Field,
        @Field(name="street1_abridged",
               bridge = @FieldBridge( impl = ConcatStringBridge.class,
                                      params = @Parameter( name="size", value="4" ) ))
    })
    private String address1;
    ...
}
You can define class bridges on entities programmatically. This is shown in the next example:
Example 4.38. Defining class bridges using the API
SearchMapping mapping = new SearchMapping();
mapping
.entity(Departments.class)
.classBridge(CatDeptsFieldsClassBridge.class)
.name("branchnetwork")
.index(Index.TOKENIZED)
.store(Store.YES)
.param("sepChar", " ")
.classBridge(EquipmentType.class)
.name("equiptype")
.index(Index.TOKENIZED)
.store(Store.YES)
.param("C", "Cisco")
.param("D", "D-Link")
.param("K", "Kingston")
.param("3", "3Com")
.indexed();
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above is similar to using @ClassBridge
as seen in the next example:
Example 4.39. Using @ClassBridge
@Entity
@Indexed
@ClassBridges ( {
    @ClassBridge(name="branchnetwork",
        index= Index.TOKENIZED,
        store= Store.YES,
        impl = CatDeptsFieldsClassBridge.class,
        params = @Parameter( name="sepChar", value=" " ) ),
    @ClassBridge(name="equiptype",
        index= Index.TOKENIZED,
        store= Store.YES,
        impl = EquipmentType.class,
        params = {
            @Parameter( name="C", value="Cisco" ),
            @Parameter( name="D", value="D-Link" ),
            @Parameter( name="K", value="Kingston" ),
            @Parameter( name="3", value="3Com" )
        })
})
public class Departments {
    ....
}
You can apply a dynamic boost factor on either a field or a whole entity:
Example 4.40. DynamicBoost mapping using programmatic model
SearchMapping mapping = new SearchMapping();
mapping
    .entity(DynamicBoostedDescLibrary.class)
        .indexed()
        .dynamicBoost(CustomBoostStrategy.class)
        .property("libraryId", ElementType.FIELD)
            .documentId().name("id")
        .property("name", ElementType.FIELD)
            .dynamicBoost(CustomFieldBoostStrategy.class)
            .field()
                .store(Store.YES);

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The next example shows the equivalent mapping using the
@DynamicBoost
annotation:
Example 4.41. Using the @DynamicBoost
@Entity
@Indexed
@DynamicBoost(impl = CustomBoostStrategy.class)
public class DynamicBoostedDescriptionLibrary {

    @Id
    @GeneratedValue
    @DocumentId
    private int id;

    private float dynScore;

    @Field(store = Store.YES)
    @DynamicBoost(impl = CustomFieldBoostStrategy.class)
    private String name;

    public DynamicBoostedDescriptionLibrary() {
        dynScore = 1.0f;
    }

    .......
}
The second most important capability of Hibernate Search is the ability to execute a Lucene query and retrieve entities managed by a Hibernate session, providing the power of Lucene without leaving the Hibernate paradigm, and giving another dimension to the classic Hibernate search mechanisms (HQL, Criteria query, native SQL query). Preparing and executing a query consists of four simple steps:
Creating a FullTextSession
Creating a Lucene query
Wrapping the Lucene query using a
org.hibernate.Query
Executing the search by calling for example
list()
or
scroll()
To access the querying facilities, you have to use a FullTextSession. This Search specific session wraps a regular org.hibernate.Session to provide query and indexing capabilities.
Example 5.1. Creating a FullTextSession
Session session = sessionFactory.openSession();
...
FullTextSession fullTextSession = Search.getFullTextSession(session);
The actual search facility is built on native Lucene queries which the following example illustrates.
Example 5.2. Creating a Lucene query
org.apache.lucene.queryParser.QueryParser parser =
new QueryParser("title", new StopAnalyzer() );
org.apache.lucene.search.Query luceneQuery = parser.parse( "summary:Festina OR brand:Seiko" );
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery );
List result = fullTextQuery.list(); //return a list of managed objects
The Hibernate query built on top of the Lucene query is a regular
org.hibernate.Query
, which means you are in the same
paradigm as the other Hibernate query facilities (HQL, Native or Criteria).
The regular list()
, uniqueResult()
,
iterate()
and scroll()
methods can be
used.
In case you are using the Java Persistence APIs of Hibernate (aka EJB 3.0 Persistence), the same extensions exist:
Example 5.3. Creating a Search query using the JPA API
EntityManager em = entityManagerFactory.createEntityManager();
FullTextEntityManager fullTextEntityManager =
org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
...
org.apache.lucene.queryParser.QueryParser parser =
new QueryParser("title", new StopAnalyzer() );
org.apache.lucene.search.Query luceneQuery = parser.parse( "summary:Festina OR brand:Seiko" );
javax.persistence.Query fullTextQuery = fullTextEntityManager.createFullTextQuery( luceneQuery );
List result = fullTextQuery.getResultList(); //return a list of managed objects
In the following examples we will use the Hibernate APIs, but the same examples can easily be rewritten with the Java Persistence API by just adjusting the way the FullTextQuery is retrieved.
Hibernate Search queries are built on top of Lucene queries which
gives you total freedom on the type of Lucene query you want to execute.
However, once built, Hibernate Search wraps further query processing using
org.hibernate.Query
as your primary query
manipulation API.
It is out of the scope of this documentation to explain in detail how to build a Lucene query. Please refer to the online Lucene documentation or get hold of a copy of either Lucene In Action or Hibernate Search in Action.
Once the Lucene query is built, it needs to be wrapped into a Hibernate Query.
Example 5.4. Wrapping a Lucene query into a Hibernate Query
FullTextSession fullTextSession = Search.getFullTextSession( session );
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery );
If not specified otherwise, the query will be executed against all indexed entities, potentially returning all types of indexed classes. It is advised, from a performance point of view, to restrict the returned types:
Example 5.5. Filtering the search result by entity type
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Customer.class );

// or

fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Item.class, Actor.class );
The first example returns only matching
Customer
s, the second returns matching
Actor
s and Item
s. The
type restriction is fully polymorphic which means that if there are
two indexed subclasses Salesman
and
Customer
of the baseclass
Person
, it is possible to just specify
Person.class
in order to filter on result
types.
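For instance (a sketch using the hypothetical Person hierarchy just described):
//returns matching Salesman and Customer instances alike
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Person.class );
List results = fullTextQuery.list();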
For performance reasons it is recommended to restrict the number of returned objects per query. In fact, it is a very common use case that the user navigates from one page to another. The way to define pagination is exactly the way you would define pagination in a plain HQL or Criteria query.
Example 5.6. Defining pagination for a search query
org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Customer.class );
fullTextQuery.setFirstResult(15); //start from the 15th element
fullTextQuery.setMaxResults(10); //return 10 elements
It is still possible to get the total number of matching elements regardless of the pagination via fullTextQuery.getResultSize().
Apache Lucene provides a very flexible and powerful way to sort results. While the default sorting (by relevance) is appropriate most of the time, it can be interesting to sort by one or several other properties. In order to do so, set the Lucene Sort object to apply a Lucene sorting strategy.
Example 5.7. Specifying a Lucene Sort in order to sort the results
org.hibernate.search.FullTextQuery query = s.createFullTextQuery( query, Book.class );
org.apache.lucene.search.Sort sort = new Sort(new SortField("title"));
query.setSort(sort);
List results = query.list();
Notice the FullTextQuery interface, which is a subinterface of org.hibernate.Query. Be aware that fields used for sorting must not be tokenized.
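A common way to satisfy this constraint is to index the property twice: once tokenized for searching and once untokenized for sorting. A minimal sketch (the field name title_sort is an arbitrary choice):

@Fields( {
    @Field(index = Index.TOKENIZED),                        //used for searching
    @Field(name = "title_sort", index = Index.UN_TOKENIZED) //used for sorting
} )
private String title;

The Sort in Example 5.7 would then target the untokenized field, i.e. new SortField("title_sort").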
When you restrict the return types to one class, Hibernate Search loads the objects using a single query. It also respects the static fetching strategy defined in your domain model.
It is often useful, however, to refine the fetching strategy for a specific use case.
Example 5.8. Specifying FetchMode on a query
Criteria criteria = s.createCriteria( Book.class ).setFetchMode( "authors", FetchMode.JOIN );
s.createFullTextQuery( luceneQuery ).setCriteriaQuery( criteria );
In this example, the query will return all Books matching the luceneQuery. The authors collection will be loaded from the same query using an SQL outer join.
When defining a criteria query, there is no need to restrict the entity types returned while creating the Hibernate Search query from the full text session: the type is guessed from the criteria query itself. Only the fetch mode can be adjusted; refrain from applying any other restriction.
One cannot use setCriteriaQuery if more than one entity type is expected to be returned.
For some use cases, returning the domain object (graph) is overkill. Only a small subset of the properties is necessary. Hibernate Search allows you to return a subset of properties:
Example 5.9. Using projection instead of returning the full domain object
org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "id", "summary", "body", "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get(0);
Integer id = (Integer) firstResult[0];
String summary = (String) firstResult[1];
String body = (String) firstResult[2];
String authorName = (String) firstResult[3];
Hibernate Search extracts the properties from the Lucene index and converts them back to their object representation, returning a list of Object[]. Projections avoid a potential database round trip (useful if the query response time is critical), but have some constraints:
the properties projected must be stored in the index (@Field(store=Store.YES)), which increases the index size
the properties projected must use a FieldBridge implementing org.hibernate.search.bridge.TwoWayFieldBridge or org.hibernate.search.bridge.TwoWayStringBridge, the latter being the simpler version. All Hibernate Search built-in types are two-way.
you can only project simple properties of the indexed entity or its embedded associations. This means you cannot project a whole embedded entity.
projection does not work on collections or maps which are indexed via @IndexedEmbedded
Projection is also useful for another kind of use case. Lucene provides some metadata information about the results. By using some special placeholders, the projection mechanism can retrieve this metadata:
Example 5.10. Using projection in order to retrieve meta data
org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( FullTextQuery.SCORE, FullTextQuery.THIS, "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get(0);
float score = (Float) firstResult[0];
Book book = (Book) firstResult[1];
String authorName = (String) firstResult[2];
You can mix and match regular fields and special placeholders. Here is the list of available placeholders:
FullTextQuery.THIS: returns the initialized and managed entity (as a non projected query would have done).
FullTextQuery.DOCUMENT: returns the Lucene Document related to the object projected.
FullTextQuery.OBJECT_CLASS: returns the class of the indexed entity.
FullTextQuery.SCORE: returns the document score in the query. Scores are handy to compare one result against another for a given query but are useless when comparing the results of different queries.
FullTextQuery.ID: the id property value of the projected object.
FullTextQuery.DOCUMENT_ID: the Lucene document id. Careful, the Lucene document id can change over time between two different IndexReader openings (this feature is experimental).
FullTextQuery.EXPLANATION: returns the Lucene Explanation object for the matching object/document in the given query. Do not use if you retrieve a lot of data. Running explanation typically is as costly as running the whole Lucene query per matching element. Make sure you use projection!
Once the Hibernate Search query is built, executing it is in no way different than executing a HQL or Criteria query. The same paradigm and object semantics apply. All the common operations are available: list(), uniqueResult(), iterate(), scroll().
If you expect a reasonable number of results (for example using pagination) and expect to work on all of them, list() or uniqueResult() are recommended. list() works best if the entity batch-size is set up properly. Note that Hibernate Search has to process all Lucene Hits elements (within the pagination) when using list(), uniqueResult() and iterate().
If you wish to minimize Lucene document loading, scroll() is more appropriate. Don't forget to close the ScrollableResults object when you're done, since it keeps Lucene resources. If you expect to use scroll but wish to load objects in batch, you can use query.setFetchSize(). When an object is accessed, and if not already loaded, Hibernate Search will load the next fetchSize objects in one pass.
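A minimal sketch of this pattern (assuming a luceneQuery built as in the previous examples):

org.hibernate.search.FullTextQuery fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Book.class );
fullTextQuery.setFetchSize( 20 ); //load the next 20 objects in one pass on access
ScrollableResults scrollableResults = fullTextQuery.scroll();
try {
    while ( scrollableResults.next() ) {
        Book book = (Book) scrollableResults.get( 0 );
        //process book
    }
}
finally {
    scrollableResults.close(); //releases the underlying Lucene resources
}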
Pagination is preferred over scrolling, though.
It is sometime useful to know the total number of matching documents:
for the Google-like feature 1-10 of about 888,000,000
to implement a fast pagination navigation
to implement a multi step search engine (adding approximation if the restricted query returns no or too few results)
Of course it would be too costly to retrieve all the matching documents. Hibernate Search allows you to retrieve the total number of matching documents regardless of the pagination parameters. Even more interesting, you can retrieve the number of matching elements without triggering a single object load.
Example 5.11. Determining the result size of a query
org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
assert 3245 == query.getResultSize(); //return the number of matching books without loading a single one

org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setMaxResults(10);
List results = query.list();
assert 3245 == query.getResultSize(); //return the total number of matching books regardless of pagination
Like Google, the number of results is approximate if the index is not fully up-to-date with the database (asynchronous cluster for example).
Especially when using projection, the data structure returned by a query (an object array in this case) does not always match the application needs. It is possible to apply a ResultTransformer operation post query to match the targeted data structure:
Example 5.12. Using ResultTransformer in conjunction with projections
org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "title", "mainAuthor.name" );
query.setResultTransformer(
new StaticAliasToBeanResultTransformer( BookView.class, "title", "author" )
);
List<BookView> results = (List<BookView>) query.list();
for(BookView view : results) {
log.info( "Book: " + view.getTitle() + ", " + view.getAuthor() );
}
Examples of ResultTransformer implementations can be found in the Hibernate Core codebase.
You will sometimes find yourself puzzled by a result showing up in a query or a result not showing up in a query. Luke is a great tool to understand those mysteries. However, Hibernate Search also gives you access to the Lucene Explanation object for a given result (in a given query). This class is considered fairly advanced even for Lucene users but can provide a good understanding of the scoring of an object. You have two ways to access the Explanation object for a given result:
Use the fullTextQuery.explain(int) method
Use projection
The first approach takes a document id as a parameter and returns the Explanation object. The document id can be retrieved using projection and the FullTextQuery.DOCUMENT_ID constant.
The document id has nothing to do with the entity id. Do not confuse these two notions.
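A minimal sketch of the first approach, combining projection with fullTextQuery.explain(int) (display is a placeholder method as in Example 5.13 below):

FullTextQuery ftQuery = s.createFullTextQuery( luceneQuery, Dvd.class )
    .setProjection( FullTextQuery.DOCUMENT_ID, FullTextQuery.THIS );
@SuppressWarnings("unchecked")
List<Object[]> results = ftQuery.list();
for (Object[] result : results) {
    int documentId = (Integer) result[0];
    Explanation explanation = ftQuery.explain( documentId );
    display( explanation.toString() );
}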
The second approach lets you project the Explanation object using the FullTextQuery.EXPLANATION constant.
Example 5.13. Retrieving the Lucene Explanation object using projection
FullTextQuery ftQuery = s.createFullTextQuery( luceneQuery, Dvd.class )
.setProjection( FullTextQuery.DOCUMENT_ID, FullTextQuery.EXPLANATION, FullTextQuery.THIS );
@SuppressWarnings("unchecked")
List<Object[]> results = ftQuery.list();
for (Object[] result : results) {
Explanation e = (Explanation) result[1];
display( e.toString() );
}
Be careful: building the explanation object is quite expensive, roughly as expensive as running the Lucene query again. Don't do it if you don't need the object.
Apache Lucene has a powerful feature that allows you to filter query results according to a custom filtering process. This is a very powerful way to apply additional data restrictions, especially since filters can be cached and reused. Some interesting use cases are:
security
temporal data (eg. view only last month's data)
population filter (eg. search limited to a given category)
and many more
Hibernate Search pushes the concept further by introducing the notion of parameterizable named filters which are transparently cached. For people familiar with the notion of Hibernate Core filters, the API is very similar:
Example 5.14. Enabling fulltext filters for a given query
fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter("bestDriver");
fullTextQuery.enableFullTextFilter("security").setParameter( "login", "andre" );
fullTextQuery.list(); //returns only best drivers where andre has credentials
In this example we enabled two filters on top of the query. You can enable (or disable) as many filters as you like.
Declaring filters is done through the @FullTextFilterDef annotation. This annotation can be on any @Indexed entity regardless of the query the filter is later applied to. This implies that filter definitions are global and their names must be unique. A SearchException is thrown in case two different @FullTextFilterDef annotations with the same name are defined. Each named filter has to specify its actual filter implementation.
Example 5.15. Defining and implementing a Filter
@Entity
@Indexed
@FullTextFilterDefs( {
    @FullTextFilterDef(name = "bestDriver", impl = BestDriversFilter.class),
    @FullTextFilterDef(name = "security", impl = SecurityFilterFactory.class)
} )
public class Driver { ... }
public class BestDriversFilter extends org.apache.lucene.search.Filter {
public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
OpenBitSet bitSet = new OpenBitSet( reader.maxDoc() );
TermDocs termDocs = reader.termDocs( new Term( "score", "5" ) );
while ( termDocs.next() ) {
bitSet.set( termDocs.doc() );
}
return bitSet;
}
}
BestDriversFilter is an example of a simple Lucene filter which reduces the result set to drivers whose score is 5. In this example the specified filter extends org.apache.lucene.search.Filter directly and contains a no-arg constructor.
If your Filter creation requires additional steps or if the filter you want to use does not have a no-arg constructor, you can use the factory pattern:
Example 5.16. Creating a filter using the factory pattern
@Entity
@Indexed
@FullTextFilterDef(name = "bestDriver", impl = BestDriversFilterFactory.class)
public class Driver { ... }
public class BestDriversFilterFactory {
@Factory
public Filter getFilter() {
//some additional steps to cache the filter results per IndexReader
Filter bestDriversFilter = new BestDriversFilter();
return new CachingWrapperFilter(bestDriversFilter);
}
}
Hibernate Search will look for a @Factory annotated method and use it to build the filter instance. The factory must have a no-arg constructor. For people familiar with JBoss Seam, this is similar to the component factory pattern, but the annotation is different!
Named filters come in handy where parameters have to be passed to the filter. For example a security filter might want to know which security level you want to apply:
Example 5.17. Passing parameters to a defined filter
fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter("security").setParameter( "level", 5 );
Each parameter name should have an associated setter on either the filter or filter factory of the targeted named filter definition.
Example 5.18. Using parameters in the actual filter implementation
public class SecurityFilterFactory {
    private Integer level;

    /**
     * injected parameter
     */
    public void setLevel(Integer level) {
        this.level = level;
    }

    @Key
    public FilterKey getKey() {
        StandardFilterKey key = new StandardFilterKey();
        key.addParameter( level );
        return key;
    }

    @Factory
    public Filter getFilter() {
        Query query = new TermQuery( new Term("level", level.toString() ) );
        return new CachingWrapperFilter( new QueryWrapperFilter(query) );
    }
}
Note the method annotated @Key returning a FilterKey object. The returned object has a special contract: the key object must implement equals() / hashCode() so that two keys are equal if and only if the given Filter types are the same and the sets of parameters are the same. In other words, two filter keys are equal if and only if the filters from which the keys are generated can be interchanged. The key object is used as a key in the cache mechanism.
@Key methods are needed only if:
you enabled the filter caching system (enabled by default)
your filter has parameters
In most cases, using the StandardFilterKey implementation will be good enough. It delegates the equals() / hashCode() implementation to each of the parameters' equals and hashCode methods.
As mentioned before the defined filters are per default cached and the cache uses a combination of hard and soft references to allow disposal of memory when needed. The hard reference cache keeps track of the most recently used filters and transforms the ones least used to SoftReferences when needed. Once the limit of the hard reference cache is reached additional filters are cached as SoftReferences. To adjust the size of the hard reference cache, use hibernate.search.filter.cache_strategy.size (defaults to 128). For advanced use of filter caching, you can implement your own FilterCachingStrategy. The classname is defined by hibernate.search.filter.cache_strategy.
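For example, to enlarge the hard reference cache or to plug in a custom strategy, the following properties could be set (a sketch; com.acme.MyFilterCachingStrategy is a hypothetical implementation of FilterCachingStrategy):

hibernate.search.filter.cache_strategy.size = 256
hibernate.search.filter.cache_strategy = com.acme.MyFilterCachingStrategy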
This filter caching mechanism should not be confused with caching the actual filter results. In Lucene it is common practice to wrap a filter in a CachingWrapperFilter. The wrapper will cache the DocIdSet returned from the getDocIdSet(IndexReader reader) method to avoid expensive recomputation. It is important to mention that the computed DocIdSet is only cachable for the same IndexReader instance, because the reader effectively represents the state of the index at the moment it was opened. The document list cannot change within an opened IndexReader. A different/new IndexReader instance, however, works potentially on a different set of Documents (either from a different index or simply because the index has changed), hence the cached DocIdSet has to be recomputed.
Hibernate Search also helps with this aspect of caching. Per default the cache flag of @FullTextFilterDef is set to FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS which will automatically cache the filter instance as well as wrap the specified filter around a Hibernate specific implementation of CachingWrapperFilter (org.hibernate.search.filter.CachingWrapperFilter). In contrast to Lucene's version of this class SoftReferences are used together with a hard reference count (see discussion about filter cache). The hard reference count can be adjusted using hibernate.search.filter.cache_docidresults.size (defaults to 5). The wrapping behaviour can be controlled using the @FullTextFilterDef.cache parameter. There are three different values for this parameter:
Value | Definition |
FilterCacheModeType.NONE | No filter instance and no result is cached by Hibernate Search. For every filter call, a new filter instance is created. This setting might be useful for rapidly changing data sets or heavily memory constrained environments. |
FilterCacheModeType.INSTANCE_ONLY | The filter instance is cached and reused across concurrent Filter.getDocIdSet() calls. DocIdSet results are not cached. This setting is useful when a filter uses its own specific caching mechanism or the filter results change dynamically due to application specific events, making DocIdSet caching unnecessary in both cases. |
FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS | Both the filter instance and the DocIdSet results are cached. This is the default value. |
Last but not least - why should filters be cached? There are two areas where filter caching shines:
the system does not update the targeted entity index often (in other words, the IndexReader is reused a lot)
the Filter's DocIdSet is expensive to compute (compared to the time spent to execute the query)
It is possible, in a sharded environment, to execute queries on a subset of the available shards. This can be done in two steps:
create a sharding strategy that selects a subset of DirectoryProviders depending on some filter configuration
activate the proper filter at query time
Let's first look at an example of a sharding strategy that queries a specific customer shard if the customer filter is activated.
public class CustomerShardingStrategy implements IndexShardingStrategy {
// stores DirectoryProviders in an array indexed by customerID
private DirectoryProvider<?>[] providers;
public void initialize(Properties properties, DirectoryProvider<?>[] providers) {
this.providers = providers;
}
public DirectoryProvider<?>[] getDirectoryProvidersForAllShards() {
return providers;
}
public DirectoryProvider<?> getDirectoryProviderForAddition(Class<?> entity, Serializable id, String idInString, Document document) {
Integer customerID = Integer.parseInt(document.getField("customerID").stringValue());
return providers[customerID];
}
public DirectoryProvider<?>[] getDirectoryProvidersForDeletion(Class<?> entity, Serializable id, String idInString) {
return getDirectoryProvidersForAllShards();
}
/**
* Optimization; don't search ALL shards and union the results; in this case, we
* can be certain that all the data for a particular customer Filter is in a single
* shard; simply return that shard by customerID.
*/
public DirectoryProvider<?>[] getDirectoryProvidersForQuery(FullTextFilterImplementor[] filters) {
FullTextFilter filter = getFilter(filters, "customer");
if (filter == null) {
return getDirectoryProvidersForAllShards();
}
else {
return new DirectoryProvider[] { providers[Integer.parseInt(filter.getParameter("customerID").toString())] };
}
}
private FullTextFilter getFilter(FullTextFilterImplementor[] filters, String name) {
for (FullTextFilterImplementor filter: filters) {
if (filter.getName().equals(name)) return filter;
}
return null;
}
}
In this example, if the filter named customer is present, we make sure to only use the shard dedicated to this customer. Otherwise, we return all shards. A given sharding strategy can react to one or more filters and depends on their parameters.
The second step is simply to activate the filter at query time.
While the filter can be a regular filter (as defined in Section 5.3, “Filters”) which also filters Lucene results after the query, you can make use of a special filter that will only be passed to the sharding strategy and otherwise ignored for the rest of the query. Simply use the ShardSensitiveOnlyFilter class when declaring your filter.
@Entity
@Indexed
@FullTextFilterDef(name="customer", impl=ShardSensitiveOnlyFilter.class)
public class Customer { ... }

FullTextQuery query = ftEm.createFullTextQuery(luceneQuery, Customer.class);
query.enableFullTextFilter("customer").setParameter("customerID", 5);
@SuppressWarnings("unchecked")
List<Customer> results = query.getResultList();
Note that by using the ShardSensitiveOnlyFilter, you do not have to implement any Lucene filter. Using filters and a sharding strategy reacting to these filters is recommended to speed up queries in a sharded environment.
Query performance depends on several criteria:
the Lucene query itself: read the literature on this subject
the number of objects loaded: use pagination (always ;-) ) or index projection (if needed)
the way Hibernate Search interacts with the Lucene readers: define the appropriate Reader strategy.
If you wish to use some specific features of Lucene, you can always run Lucene specific queries. Check Chapter 8, Advanced features for more information.
As Hibernate core applies changes to the database, Hibernate Search detects these changes and will update the index automatically (unless the EventListeners are disabled). Sometimes changes are made to the database without using Hibernate, for example when a backup is restored or your data is otherwise affected; in these cases Hibernate Search exposes the Manual Index APIs to explicitly update or remove a single entity from the index, rebuild the index for the whole database, or remove all references to a specific type.
All these methods affect the Lucene index only; no changes are applied to the database.
Using FullTextSession.index(T entity) you can directly add or update a specific object instance to the index. If this entity was already indexed, then the index will be updated. Changes to the index are only applied at transaction commit.
Example 6.1. Indexing an entity via FullTextSession.index(T entity)
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
Object customer = fullTextSession.load( Customer.class, 8 );
fullTextSession.index(customer);
tx.commit(); //index only updated at commit time
In case you want to add all instances for a type, or for all indexed types, the recommended approach is to use a MassIndexer: see Section 6.3.2, “Using a MassIndexer” for more details.
It is equally possible to remove an entity or all entities of a given type from a Lucene index without the need to physically remove them from the database. This operation is named purging and is also done through the FullTextSession.
Example 6.2. Purging a specific instance of an entity from the index
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
for (Customer customer : customers) {
fullTextSession.purge( Customer.class, customer.getId() );
}
tx.commit(); //index is updated at commit time
Purging will remove the entity with the given id from the Lucene index but will not touch the database.
If you need to remove all entities of a given type, you can use the purgeAll method. This operation removes all entities of the type passed as a parameter as well as all its subtypes.
Example 6.3. Purging all instances of an entity from the index
FullTextSession fullTextSession = Search.getFullTextSession(session);
Transaction tx = fullTextSession.beginTransaction();
fullTextSession.purgeAll( Customer.class );
//optionally optimize the index
//fullTextSession.getSearchFactory().optimize( Customer.class );
tx.commit(); //index changes are applied at commit time
It is recommended to optimize the index after such an operation.
Methods index, purge and purgeAll are available on FullTextEntityManager as well.
All manual indexing methods (index, purge and purgeAll) only affect the index, not the database; nevertheless they are transactional and as such they won't be applied until the transaction is successfully committed, or until you make use of flushToIndexes.
If you change the entity mapping to the index, chances are that the whole index needs to be updated. For example, if you decide to index an existing field using a different analyzer you'll need to rebuild the index for affected types. Also if the database is replaced (for example restored from a backup, or imported from a legacy system) you'll want to be able to rebuild the index from existing data. Hibernate Search provides two main strategies to choose from:
Using FullTextSession.flushToIndexes() periodically, while using FullTextSession.index() on all entities.
Use a MassIndexer.
The first strategy consists in removing the existing index and then adding all entities back to the index using FullTextSession.purgeAll() and FullTextSession.index(); however there are some memory and efficiency constraints.
For maximum efficiency Hibernate Search batches index operations and executes them at commit time. If you expect to index a lot of data you need to be careful about memory consumption since all documents are kept in a queue until the transaction commit. You can potentially face an OutOfMemoryException if you don't empty the queue periodically: to do this you can use fullTextSession.flushToIndexes(). Every time fullTextSession.flushToIndexes() is called (or if the transaction is committed), the batch queue is processed applying all index changes. Be aware that, once flushed, the changes cannot be rolled back.
Example 6.4. Index rebuilding using index() and flushToIndexes()
fullTextSession.setFlushMode(FlushMode.MANUAL);
fullTextSession.setCacheMode(CacheMode.IGNORE);
transaction = fullTextSession.beginTransaction();
//Scrollable results will avoid loading too many objects in memory
ScrollableResults results = fullTextSession.createCriteria( Email.class )
    .setFetchSize(BATCH_SIZE)
    .scroll( ScrollMode.FORWARD_ONLY );
int index = 0;
while( results.next() ) {
    index++;
    fullTextSession.index( results.get(0) ); //index each element
    if (index % BATCH_SIZE == 0) {
        fullTextSession.flushToIndexes(); //apply changes to indexes
        fullTextSession.clear(); //free memory since the queue is processed
    }
}
transaction.commit();
hibernate.search.worker.batch_size has been deprecated in favor of this explicit API which provides better control.
Try to use a batch size that guarantees that your application will not run out of memory: with a bigger batch size objects are fetched faster from database but more memory is needed.
Hibernate Search's MassIndexer uses several parallel threads to rebuild the index; you can optionally select which entities need to be reloaded or have it reindex all entities. This approach is optimized for best performance but requires setting the application in maintenance mode: making queries to the index is not recommended when a MassIndexer is busy.
Example 6.5. Index rebuilding using a MassIndexer
fullTextSession.createIndexer().startAndWait();
This will rebuild the index, deleting it and then reloading all entities from the database. Although it's simple to use, some tweaking is recommended to speed up the process: several parameters are configurable.
While a MassIndexer is running the content of the index is undefined: make sure that nobody tries to query the index during rebuilding! Queries will not corrupt the index, but most results will likely be missing.
Example 6.6. Using a tuned MassIndexer
fullTextSession
    .createIndexer( User.class )
    .batchSizeToLoadObjects( 25 )
    .cacheMode( CacheMode.NORMAL )
    .threadsToLoadObjects( 5 )
    .threadsForSubsequentFetching( 20 )
    .startAndWait();
This will rebuild the index of all User instances (and subtypes), and will create 5 parallel threads to load the User instances using batches of 25 objects per query; these loaded User instances are then pipelined to 20 parallel threads to load the attached lazy collections of User containing some information needed for the index.
It is recommended to leave cacheMode set to CacheMode.IGNORE (the default), as in most reindexing situations the cache will be a useless additional overhead; it might be useful to enable some other CacheMode depending on your data: it might increase performance if the main entity relates to enum-like data included in the index.
The "sweet spot" of number of threads to achieve best performance is highly dependent on your overall architecture, database design and even data values. To find out the best number of threads for your application it is recommended to use a profiler: all internal thread groups have meaningful names to be easily identified with most tools.
The MassIndexer was designed for speed and is unaware of transactions, so there is no need to begin a transaction or commit one. Also, because it is not transactional it is not recommended to let users use the system during its processing, as it is unlikely people will be able to find results and the system load might be too high anyway.
Other parameters which also affect indexing time and memory consumption are:
hibernate.search.[default|<indexname>].exclusive_index_use
hibernate.search.[default|<indexname>].indexwriter.batch.max_buffered_docs
hibernate.search.[default|<indexname>].indexwriter.batch.max_field_length
hibernate.search.[default|<indexname>].indexwriter.batch.max_merge_docs
hibernate.search.[default|<indexname>].indexwriter.batch.merge_factor
hibernate.search.[default|<indexname>].indexwriter.batch.ram_buffer_size
hibernate.search.[default|<indexname>].indexwriter.batch.term_index_interval
All .indexwriter parameters are Lucene specific and Hibernate Search is just passing these parameters through - see Section 3.9, “Tuning Lucene indexing performance” for more details.
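For instance, a configuration tuned for batch indexing might look like the following sketch (the values are illustrative only; sensible numbers depend on your data, heap size and hardware):

hibernate.search.default.exclusive_index_use = true
hibernate.search.default.indexwriter.batch.max_buffered_docs = 1000
hibernate.search.default.indexwriter.batch.merge_factor = 20
hibernate.search.default.indexwriter.batch.ram_buffer_size = 64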
From time to time, the Lucene index needs to be optimized. The process is essentially a defragmentation. Until an optimization is triggered Lucene only marks deleted documents as such; no physical deletions are applied. During the optimization process the deletions will be applied, which also affects the number of files in the Lucene Directory.
Optimizing the Lucene index speeds up searches but has no effect on the indexation (update) performance. During an optimization, searches can be performed, but will most likely be slowed down. All index updates will be stopped. It is recommended to schedule optimization:
on an idle system or when the searches are less frequent
after a lot of index modifications
When using a MassIndexer (see Section 6.3.2, “Using a MassIndexer”) it will optimize involved indexes by default at the start and at the end of processing; you can change this behavior by using respectively MassIndexer.optimizeAfterPurge and MassIndexer.optimizeOnFinish.
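For example, to skip both optimizations (say, because optimization is scheduled separately), a sketch could look like:

fullTextSession.createIndexer( User.class )
    .optimizeAfterPurge( false ) //skip the optimization after the initial index purge
    .optimizeOnFinish( false )   //skip the optimization when indexing completes
    .startAndWait();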
Hibernate Search can automatically optimize an index after:
a certain amount of operations (insertion, deletion)
or a certain amount of transactions
The configuration for automatic index optimization can be defined on a global level or per index:
Example 7.1. Defining automatic optimization parameters
hibernate.search.default.optimizer.operation_limit.max = 1000
hibernate.search.default.optimizer.transaction_limit.max = 100
hibernate.search.Animal.optimizer.transaction_limit.max = 50
An optimization will be triggered for the Animal index as soon as either:
the number of additions and deletions reaches 1000
the number of transactions reaches 50 (hibernate.search.Animal.optimizer.transaction_limit.max having priority over hibernate.search.default.optimizer.transaction_limit.max)
If none of these parameters are defined, no optimization is processed automatically.
You can programmatically optimize (defragment) a Lucene index from Hibernate Search through the SearchFactory:
Example 7.2. Programmatic index optimization
FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
SearchFactory searchFactory = fullTextSession.getSearchFactory();
searchFactory.optimize(Order.class);
// or
searchFactory.optimize();
The first example optimizes the Lucene index holding Orders; the second optimizes all indexes. searchFactory.optimize() has no effect on a JMS backend. You must apply the optimize operation on the Master node.
Apache Lucene has a few parameters to influence how optimization is performed. Hibernate Search exposes those parameters.
Further index optimization parameters include:
hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].max_buffered_docs
hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].max_field_length
hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].max_merge_docs
hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].merge_factor
hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].ram_buffer_size
hibernate.search.[default|<indexname>].indexwriter.[batch|transaction].term_index_interval
See Section 3.9, “Tuning Lucene indexing performance” for more details.
The SearchFactory object keeps track of the underlying Lucene resources for Hibernate Search; it is also a convenient way to access Lucene natively. The SearchFactory can be accessed from a FullTextSession:
Example 8.1. Accessing the SearchFactory
FullTextSession fullTextSession = Search.getFullTextSession(regularSession);
SearchFactory searchFactory = fullTextSession.getSearchFactory();
You can always access the Lucene directories through plain Lucene; the Directory structure is in no way different with or without Hibernate Search. However there are some more convenient ways to access a given Directory. The SearchFactory keeps track of the DirectoryProviders per indexed class. One directory provider can be shared amongst several indexed classes if the classes share the same underlying index directory. While usually not the case, a given entity can have several DirectoryProviders if the index is sharded (see Section 3.2, “Sharding indexes”).
Example 8.2. Accessing the Lucene Directory
DirectoryProvider[] provider = searchFactory.getDirectoryProviders(Order.class);
org.apache.lucene.store.Directory directory = provider[0].getDirectory();
In this example, directory points to the Lucene index storing Orders information. Note that the obtained Lucene directory must not be closed (this is Hibernate Search's responsibility).
Queries in Lucene are executed on an IndexReader. Hibernate Search caches all index readers to maximize performance. Your code can access these cached resources, but you have to follow some "good citizen" rules.
Example 8.3. Accessing an IndexReader
DirectoryProvider orderProvider = searchFactory.getDirectoryProviders(Order.class)[0];
DirectoryProvider clientProvider = searchFactory.getDirectoryProviders(Client.class)[0];

ReaderProvider readerProvider = searchFactory.getReaderProvider();
IndexReader reader = readerProvider.openReader(orderProvider, clientProvider);

try {
    //do read-only operations on the reader
}
finally {
    readerProvider.closeReader(reader);
}
The ReaderProvider (described in Reader strategy) will open an IndexReader on top of the index(es) referenced by the directory providers. Because this IndexReader is shared amongst several clients, you must adhere to the following rules:
Never call indexReader.close(), but always call readerProvider.closeReader(reader), preferably in a finally block.
Don't use this IndexReader for modification operations (you would get an exception). If you want to use a read/write index reader, open one from the Lucene Directory object.
Aside from those rules, you can use the IndexReader freely, especially to do native queries. Using the shared IndexReaders will make most queries more efficient.
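As a sketch of the second rule above, a modifiable IndexReader could be opened directly from the Directory (this assumes the Lucene 2.9 API bundled with this version; since you opened this reader yourself, you must also close it):

org.apache.lucene.store.Directory directory =
    searchFactory.getDirectoryProviders(Order.class)[0].getDirectory();
IndexReader readWriteReader = IndexReader.open(directory, false); //false = not read-only
try {
    //do read/write operations on the reader
}
finally {
    readWriteReader.close(); //this reader is not managed by Hibernate Search
}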
Lucene allows the user to customize its scoring formula by extending org.apache.lucene.search.Similarity. The abstract methods defined in this class match the factors of the following formula calculating the score of query q for document d:
score(q,d) = coord(q,d) · queryNorm(q) · ∑t in q ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )
Factor | Description |
tf(t in d) | Term frequency factor for the term (t) in the document (d). |
idf(t) | Inverse document frequency of the term. |
coord(q,d) | Score factor based on how many of the query terms are found in the specified document. |
queryNorm(q) | Normalizing factor used to make scores between queries comparable. |
t.getBoost() | Field boost. |
norm(t,d) | Encapsulates a few (indexing time) boost and length factors. |
It is beyond the scope of this manual to explain this formula in more detail. Please refer to the Similarity Javadocs for more information.
Hibernate Search provides two ways to modify Lucene's similarity calculation. First you can set the default similarity by specifying the fully qualified classname of your Similarity implementation using the property hibernate.search.similarity. The default value is org.apache.lucene.search.DefaultSimilarity. Additionally you can override the default similarity at class level using the @Similarity annotation.
@Entity
@Indexed
@Similarity(impl = DummySimilarity.class)
public class Book {
...
}
As an example, let's assume it is not important how often a term appears in a document. Documents with a single occurrence of the term should be scored the same as documents with multiple occurrences. In this case your custom implementation of the method tf(float freq) should return 1.0.
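A minimal sketch of such an implementation, assuming DefaultSimilarity is an acceptable base for all other scoring factors:

public class DummySimilarity extends org.apache.lucene.search.DefaultSimilarity {
    @Override
    public float tf(float freq) {
        //a term occurring once scores the same as a term occurring many times
        return freq > 0 ? 1.0f : 0.0f;
    }
}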
When two entities share the same index they must declare the same Similarity implementation. Classes in the same class hierarchy always share the index, so it is not allowed to override the Similarity implementation in a subtype.