
Chapter 5. Querying

5.1. Building queries
5.1.1. Building a Lucene query using the Lucene API
5.1.2. Building a Lucene query with the Hibernate Search query DSL
5.1.3. Building a Hibernate Search query
5.2. Retrieving the results
5.2.1. Performance considerations
5.2.2. Result size
5.2.3. ResultTransformer
5.2.4. Understanding results
5.3. Filters
5.3.1. Using filters in a sharded environment
5.4. Faceting
5.4.1. Creating a faceting request
5.4.2. Applying a faceting request
5.4.3. Restricting query results
5.5. Optimizing the query process
5.5.1. Caching index values: FieldCache

The second most important capability of Hibernate Search is the ability to execute Lucene queries and retrieve entities managed by a Hibernate session. Hibernate Search provides the power of Lucene without leaving the Hibernate paradigm, adding another dimension to the classic Hibernate search mechanisms (HQL, Criteria query, native SQL query).

Preparing and executing a query consists of four simple steps:

  • creating a FullTextSession

  • creating a Lucene query, either via the Hibernate Search query DSL or via the native Lucene query API

  • wrapping the Lucene query in an org.hibernate.Query

  • executing the search by calling for example list() or scroll()

To access the querying facilities, you have to use a FullTextSession. This Search specific session wraps a regular org.hibernate.Session in order to provide query and indexing capabilities.

Example 5.1. Creating a FullTextSession

Session session = sessionFactory.openSession();

...
FullTextSession fullTextSession = Search.getFullTextSession(session);    

Once you have a FullTextSession you have two options to build the full-text query: the Hibernate Search query DSL or the native Lucene query.

If you use the Hibernate Search query DSL, it will look like this:

final QueryBuilder b = fullTextSession.getSearchFactory()
    .buildQueryBuilder().forEntity( Myth.class ).get();

org.apache.lucene.search.Query luceneQuery =
    b.keyword()
        .onField("history").boostedTo(3)
        .matching("storm")
        .createQuery();


org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery );
List result = fullTextQuery.list(); //return a list of managed objects    

You can alternatively write your Lucene query using either the Lucene query parser or the Lucene programmatic API.

Example 5.2. Creating a Lucene query via the QueryParser

SearchFactory searchFactory = fullTextSession.getSearchFactory();
org.apache.lucene.queryParser.QueryParser parser =
    new QueryParser("title", searchFactory.getAnalyzer(Myth.class) );
org.apache.lucene.search.Query luceneQuery = null;
try {
    luceneQuery = parser.parse( "history:storm^3" );
}
catch (ParseException e) {
    //handle parsing failure
}

org.hibernate.Query fullTextQuery = fullTextSession.createFullTextQuery(luceneQuery);
List result = fullTextQuery.list(); //return a list of managed objects

Note

The Hibernate query built on top of the Lucene query is a regular org.hibernate.Query, which means you are in the same paradigm as the other Hibernate query facilities (HQL, Native or Criteria). The regular list(), uniqueResult(), iterate() and scroll() methods can be used.

In case you are using the Java Persistence APIs of Hibernate, the same extensions exist:

Example 5.3. Creating a Search query using the JPA API

EntityManager em = entityManagerFactory.createEntityManager();


FullTextEntityManager fullTextEntityManager = 
    org.hibernate.search.jpa.Search.getFullTextEntityManager(em);
...
final QueryBuilder b = fullTextEntityManager.getSearchFactory()
    .buildQueryBuilder().forEntity( Myth.class ).get();
org.apache.lucene.search.Query luceneQuery =
    b.keyword()
        .onField("history").boostedTo(3)
        .matching("storm")
        .createQuery();

javax.persistence.Query fullTextQuery =
    fullTextEntityManager.createFullTextQuery( luceneQuery );
List result = fullTextQuery.getResultList(); //return a list of managed objects

Note

In the following examples we will use the Hibernate APIs, but the same examples can easily be rewritten with the Java Persistence API by simply adjusting the way the FullTextQuery is retrieved.

Hibernate Search queries are built on top of Lucene queries which gives you total freedom on the type of Lucene query you want to execute. However, once built, Hibernate Search wraps further query processing using org.hibernate.Query as your primary query manipulation API.

Writing full-text queries with the Lucene programmatic API is quite complex. It's even more complex to understand the code once written. Besides the inherent API complexity, you have to remember to convert your parameters to their string equivalent as well as make sure to apply the correct analyzer to the right field (an ngram analyzer will for example use several ngrams as the tokens for a given word and should be searched as such).

The Hibernate Search query DSL makes use of a style of API called a fluent API. This API has a few key characteristics:

  • it has meaningful method names, making a succession of operations read almost like English

  • it limits the options offered to what makes sense in a given context (helped by strong typing and IDE autocompletion)

  • it often uses the method chaining pattern

  • it is easy to use and even easier to read

Let's see how to use the API. You first need to create a query builder that is attached to a given indexed entity type. This QueryBuilder will know what analyzer to use and what field bridge to apply. You can create several QueryBuilders (one for each entity type involved in the root of your query). You get the QueryBuilder from the SearchFactory.

QueryBuilder mythQB = searchFactory.buildQueryBuilder().forEntity( Myth.class ).get();

You can also override the analyzer used for a given field or fields. This is rarely needed and should be avoided unless you know what you are doing.

QueryBuilder mythQB = searchFactory.buildQueryBuilder()
    .forEntity( Myth.class )
        .overridesForField("history","stem_analyzer_definition")
    .get();

Using the query builder, you can then build queries. It is important to realize that the end result of a QueryBuilder is a Lucene query. For this reason you can easily mix and match queries generated via Lucene's query parser or Query objects you have assembled with the Lucene programmatic API and use them with the Hibernate Search DSL, just in case the DSL is missing some features.

Let's start with the most basic use case - searching for a specific word:

Query luceneQuery = mythQB.keyword().onField("history").matching("storm").createQuery();

keyword() means that you are trying to find a specific word. onField() specifies in which Lucene field to look. matching() tells what to look for. And finally createQuery() creates the Lucene query object. A lot is going on with this line of code.

Let's see how you can search a property that is not of type string.

@Entity
@Indexed
public class Myth {
  @Field(index = Index.UN_TOKENIZED)
  @DateBridge(resolution = Resolution.YEAR)
  public Date getCreationDate() { return creationDate; }
  public void setCreationDate(Date creationDate) { this.creationDate = creationDate; }
  private Date creationDate;

  ...
}

Date birthdate = ...;
Query luceneQuery = mythQB.keyword().onField("creationDate").matching(birthdate).createQuery();

This conversion works for any object, not just Date, provided that the FieldBridge has an objectToString method (and all built-in FieldBridge implementations do).

Let's make the example a little more advanced now and have a look at how to search a field that uses ngram analyzers. Ngram analyzers index successions of ngrams of your words, which helps to recover from user typos. For example the 3-grams of the word hibernate are hib, ibe, ber, ern, rna, nat, ate.

@AnalyzerDef(name = "ngram",
  tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class ),
  filters = {
    @TokenFilterDef(factory = StandardFilterFactory.class),
    @TokenFilterDef(factory = LowerCaseFilterFactory.class),
    @TokenFilterDef(factory = StopFilterFactory.class),
    @TokenFilterDef(factory = NGramFilterFactory.class,
      params = { 
        @Parameter(name = "minGramSize", value = "3"),
        @Parameter(name = "maxGramSize", value = "3") } )
  }
)
@Entity
@Indexed
public class Myth {
  @Field(analyzer = @Analyzer(definition = "ngram"))
  public String getName() { return name; }
  public void setName(String name) { this.name = name; }
  private String name;

  ...
}

Query luceneQuery = mythQB.keyword().onField("name").matching("Sisiphus")
   .createQuery();

The matching word "Sisiphus" will be lower-cased and then split into 3-grams: sis, isi, sip, iph, phu, hus. Each of these n-grams will be part of the query. We will then be able to find the Sysiphus myth (with a y). All of that is done transparently for you.

To search for multiple possible words in the same field, simply add them all in the matching clause.

//search documents with storm or lightning in their history
Query luceneQuery = 
    mythQB.keyword().onField("history").matching("storm lightning").createQuery();

To search the same word on multiple fields, use the onFields method.

Query luceneQuery = mythQB
    .keyword()
    .onFields("history","description","name")
    .matching("storm")
    .createQuery();

Sometimes one field should be treated differently from another field even when searching the same term. In that case you can use the andField() method.

Query luceneQuery = mythQB.keyword()
    .onField("history")
    .andField("name")
      .boostedTo(5)
    .andField("description")
    .matching("storm")
    .createQuery();

In the previous example, only the name field is boosted to 5.

Finally, you can aggregate (combine) queries to create more complex queries. The following aggregation operators are available:

  • SHOULD: the query should contain the matching elements of the subquery

  • MUST: the query must contain the matching elements of the subquery

  • MUST NOT: the query must not contain the matching elements of the subquery

The subqueries can be any Lucene query including a boolean query itself. Let's look at a few examples:

//look for popular modern myths that are not urban
Date twentiethCentury = ...;
Query luceneQuery = mythQB
    .bool()
      .must( mythQB.keyword().onField("description").matching("urban").createQuery() )
        .not()
      .must( mythQB.range().onField("starred").above(4).createQuery() )
      .must( mythQB
        .range()
        .onField("creationDate")
        .above(twentiethCentury)
        .createQuery() )
    .createQuery();

//look for popular myths that are preferably urban
Query luceneQuery = mythQB
    .bool()
      .should( mythQB.keyword().onField("description").matching("urban").createQuery() )
      .must( mythQB.range().onField("starred").above(4).createQuery() )
    .createQuery();

//look for all myths except religious ones
Query luceneQuery = mythQB
    .all()
      .except( mythQB
        .keyword()
        .onField( "description_stem" )
        .matching( "religion" )
        .createQuery()
      )
    .createQuery();

So far we only covered the process of how to create your Lucene query (see Section 5.1, “Building queries”). However, this is only the first step in the chain of actions. Let's now see how to build the Hibernate Search query from the Lucene query.

For some use cases, returning the domain object (including its associations) is overkill. Only a small subset of the properties is necessary. Hibernate Search allows you to return a subset of properties:
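
For illustration, here is a sketch of a projection query; the Book entity and its stored properties (id, summary, body, mainAuthor.name) are hypothetical:

org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( "id", "summary", "body", "mainAuthor.name" );
List results = query.list();
//each element of the list is an Object[] holding the projected properties, in order
Object[] firstResult = (Object[]) results.get( 0 );
Integer id = (Integer) firstResult[0];
String summary = (String) firstResult[1];
String body = (String) firstResult[2];
String authorName = (String) firstResult[3];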


Hibernate Search extracts the properties from the Lucene index and converts them back to their object representation, returning a list of Object[]. Projections avoid a potential database round trip (useful if the query response time is critical). However, they also have several constraints:

  • the properties projected must be stored in the index (@Field(store=Store.YES)), which increases the index size

  • the properties projected must use a FieldBridge implementing org.hibernate.search.bridge.TwoWayFieldBridge or org.hibernate.search.bridge.TwoWayStringBridge, the latter being the simpler version.

    Note

    All Hibernate Search built-in types are two-way.

  • you can only project simple properties of the indexed entity or its embedded associations. This means you cannot project a whole embedded entity.

  • projection does not work on collections or maps which are indexed via @IndexedEmbedded

Projection is also useful for another kind of use case. Lucene can provide metadata information about the results. By using some special projection constants, the projection mechanism can retrieve this metadata:
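
For example (a sketch, reusing the hypothetical Book entity from above):

org.hibernate.search.FullTextQuery query = s.createFullTextQuery( luceneQuery, Book.class );
query.setProjection( FullTextQuery.SCORE, FullTextQuery.THIS, "mainAuthor.name" );
List results = query.list();
Object[] firstResult = (Object[]) results.get( 0 );
Float score = (Float) firstResult[0];   //the document score
Book book = (Book) firstResult[1];      //the managed entity itself
String authorName = (String) firstResult[2];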


You can mix and match regular fields and projection constants. Here is the list of the available constants:

  • FullTextQuery.THIS: returns the initialized and managed entity (as a non projected query would have done).

  • FullTextQuery.DOCUMENT: returns the Lucene Document related to the object projected.

  • FullTextQuery.OBJECT_CLASS: returns the class of the indexed entity.

  • FullTextQuery.SCORE: returns the document score in the query. Scores are handy to compare one result against another for a given query but are useless when comparing the results of different queries.

  • FullTextQuery.ID: the id property value of the projected object.

  • FullTextQuery.DOCUMENT_ID: the Lucene document id. Careful, the Lucene document id can change over time between two different IndexReader openings (this feature is experimental).

  • FullTextQuery.EXPLANATION: returns the Lucene Explanation object for the matching object/document in the given query. Do not use if you retrieve a lot of data. Running explanation typically is as costly as running the whole Lucene query per matching element. Make sure you use projection!

By default, Hibernate Search uses the most appropriate strategy to initialize entities matching your full text query. It executes one (or several) queries to retrieve the required entities. This is the best approach to minimize database round trips in a scenario where none or few of the retrieved entities are present in the persistence context (i.e. the session) or the second level cache.

If most of your entities are present in the second level cache, you can force Hibernate Search to look into the cache before retrieving an object from the database.
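
A minimal sketch, assuming a hypothetical User entity that is cached in the second-level cache:

FullTextQuery query = session.createFullTextQuery( luceneQuery, User.class );
//check the second-level cache before issuing database queries
query.initializeObjectsWith(
    ObjectLookupMethod.SECOND_LEVEL_CACHE,
    DatabaseRetrievalMethod.QUERY
);
List results = query.list();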


ObjectLookupMethod defines the strategy used to check if an object is easily accessible (without database round trip). Other options are:

  • ObjectLookupMethod.PERSISTENCE_CONTEXT: useful if most of the matching entities are already in the persistence context (i.e. loaded in the Session or EntityManager)

  • ObjectLookupMethod.SECOND_LEVEL_CACHE: checks the persistence context first and then the second-level cache.

Note

Note that to search in the second-level cache, several settings must be in place:

  • the second level cache must be properly configured and active

  • the entity must have second-level caching enabled (e.g. via @Cacheable)

  • the Session, EntityManager or Query must allow access to the second-level cache for read access (i.e. CacheMode.NORMAL in Hibernate native APIs or CacheRetrieveMode.USE in JPA 2 APIs).

Warning

Avoid using ObjectLookupMethod.SECOND_LEVEL_CACHE unless your second level cache implementation is either EHCache or Infinispan; other second level cache providers don't currently implement this operation efficiently.

You can also customize how objects are loaded from the database (if not found before). Use DatabaseRetrievalMethod for that:

  • QUERY (default): use a (set of) queries to load several objects in batch. This is usually the best approach.

  • FIND_BY_ID: load objects one by one using the Session.get or EntityManager.find semantics. This might be useful if a batch-size is set on the entity (in which case, entities will be loaded in batch by Hibernate Core). QUERY should be preferred almost all the time.

You can limit the time a query takes in Hibernate Search in two ways:

  • raise an exception when the time limit is reached

  • limit the number of results retrieved when the time limit is reached

You can decide to stop a query when it takes more than a predefined amount of time. Note that this is done on a best effort basis: if Hibernate Search still has significant work to do once the time limit is passed, a QueryTimeoutException will be raised (org.hibernate.QueryTimeoutException or javax.persistence.QueryTimeoutException depending on your programmatic API).

To define the limit when using the native Hibernate APIs, use one of the following approaches:
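
Both approaches are sketched below; the User entity is hypothetical, and setTimeout is available on org.hibernate.Query (in seconds) as well as on FullTextQuery (with a java.util.concurrent.TimeUnit):

Query luceneQuery = ...;
FullTextQuery query = fullTextSession.createFullTextQuery( luceneQuery, User.class );

//define the timeout in seconds
query.setTimeout( 5 );
//alternatively, define the timeout in any given time unit
query.setTimeout( 450, TimeUnit.MILLISECONDS );

try {
    query.list();
}
catch ( org.hibernate.QueryTimeoutException e ) {
    //do something, too slow
}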


Likewise getResultSize(), iterate() and scroll() honor the timeout but only until the end of the method call. That simply means that the methods of Iterable or the ScrollableResults ignore the timeout.

Note

explain() does not honor the timeout: this method is used for debugging purposes and in particular to find out why a query is slow.

When using JPA, simply use the standard way of limiting query execution time.
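
For example via the standard JPA 2 query timeout hint (a sketch; the User entity is hypothetical):

Query luceneQuery = ...;
FullTextQuery query = fullTextEntityManager.createFullTextQuery( luceneQuery, User.class );

//define the timeout in milliseconds
query.setHint( "javax.persistence.query.timeout", 450 );

try {
    query.getResultList();
}
catch ( javax.persistence.QueryTimeoutException e ) {
    //do something, too slow
}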


Important

Remember, this is a best effort approach and does not guarantee to stop exactly on the specified timeout.

Alternatively, you can return the results which have already been fetched by the time the limit is reached. Note that only the Lucene part of the query is influenced by this limit. It is possible that, if you retrieve managed objects, it takes longer to fetch these objects.

To define this soft limit, use the following approach:
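
A sketch using limitExecutionTimeTo (the User entity is hypothetical):

Query luceneQuery = ...;
FullTextQuery query = fullTextSession.createFullTextQuery( luceneQuery, User.class );

//stop fetching new results after 500ms; results fetched so far are returned
query.limitExecutionTimeTo( 500, TimeUnit.MILLISECONDS );
List results = query.list();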


Likewise getResultSize(), iterate() and scroll() honor the time limit but only until the end of the method call. That simply means that the methods of Iterable or the ScrollableResults ignore the timeout.

You can determine if the results have been partially loaded by invoking the hasPartialResults method.
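
For example, continuing the sketch above:

query.limitExecutionTimeTo( 500, TimeUnit.MILLISECONDS );
List results = query.list();
if ( query.hasPartialResults() ) {
    //inform the user that the result list might be incomplete
}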


If you use the JPA API, limitExecutionTimeTo and hasPartialResults are also available to you.

Warning

This approach is considered experimental.

Once the Hibernate Search query is built, executing it is in no way different than executing an HQL or Criteria query. The same paradigm and object semantics apply. All the common operations are available: list(), uniqueResult(), iterate(), scroll().

You will sometimes find yourself puzzled by a result showing up in a query or a result not showing up in a query. Luke is a great tool to understand those mysteries. However, Hibernate Search also gives you access to the Lucene Explanation object for a given result (in a given query). This class is considered fairly advanced even for Lucene users but can provide a good understanding of the scoring of an object. You have two ways to access the Explanation object for a given result:

  • use the fullTextQuery.explain(int) method

  • use projection

The first approach takes a document id as a parameter and returns the Explanation object. The document id can be retrieved using projection and the FullTextQuery.DOCUMENT_ID constant.

The second approach lets you project the Explanation object using the FullTextQuery.EXPLANATION constant.
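
A sketch of the projection approach (the Dvd entity and the display method are hypothetical):

FullTextQuery ftQuery = s.createFullTextQuery( luceneQuery, Dvd.class )
    .setProjection(
        FullTextQuery.DOCUMENT_ID,
        FullTextQuery.EXPLANATION,
        FullTextQuery.THIS );
@SuppressWarnings("unchecked")
List<Object[]> results = ftQuery.list();
for ( Object[] result : results ) {
    Explanation e = (Explanation) result[1];
    display( e.toString() );
}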


Be careful, building the explanation object is quite expensive; it is roughly as expensive as running the Lucene query again. Don't do it if you don't need the object.

Apache Lucene has a powerful feature that allows you to filter query results according to a custom filtering process. This is a very powerful way to apply additional data restrictions, especially since filters can be cached and reused. Some interesting use cases are:

  • security

  • temporal data (e.g. view only last month's data)

  • population filter (e.g. search limited to a given category)

  • and many more

Hibernate Search pushes the concept further by introducing the notion of parameterizable named filters which are transparently cached. For people familiar with the notion of Hibernate Core filters, the API is very similar:
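
For example (a sketch assuming a Driver entity on which bestDriver and security filters are defined, as shown further below):

fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter( "bestDriver" );
fullTextQuery.enableFullTextFilter( "security" ).setParameter( "login", "andre" );
fullTextQuery.list(); //returns only best drivers where andre has credentials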


In this example we enabled two filters on top of the query. You can enable (or disable) as many filters as you like.

Declaring filters is done through the @FullTextFilterDef annotation. This annotation can be on any @Indexed entity regardless of the query the filter is later applied to. This implies that filter definitions are global and their names must be unique. A SearchException is thrown in case two different @FullTextFilterDef annotations with the same name are defined. Each named filter has to specify its actual filter implementation.
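
A sketch of such declarations, together with a plain Lucene filter implementation (the Driver entity and the filter logic are illustrative):

@Entity
@Indexed
@FullTextFilterDefs( {
    @FullTextFilterDef( name = "bestDriver", impl = BestDriversFilter.class ),
    @FullTextFilterDef( name = "security", impl = SecurityFilterFactory.class )
} )
public class Driver { ... }

public class BestDriversFilter extends org.apache.lucene.search.Filter {

    public DocIdSet getDocIdSet( IndexReader reader ) throws IOException {
        //mark every document whose score field equals 5
        OpenBitSet bitSet = new OpenBitSet( reader.maxDoc() );
        TermDocs termDocs = reader.termDocs( new Term( "score", "5" ) );
        while ( termDocs.next() ) {
            bitSet.set( termDocs.doc() );
        }
        return bitSet;
    }
}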


BestDriversFilter is an example of a simple Lucene filter which reduces the result set to drivers whose score is 5. In this example the specified filter extends org.apache.lucene.search.Filter directly and contains a no-arg constructor.

If your Filter creation requires additional steps or if the filter you want to use does not have a no-arg constructor, you can use the factory pattern:
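
A sketch of the factory approach (the names are illustrative):

@Entity
@Indexed
@FullTextFilterDef( name = "bestDriver", impl = BestDriversFilterFactory.class )
public class Driver { ... }

public class BestDriversFilterFactory {

    @Factory
    public Filter getFilter() {
        //some additional steps to cache the filter results per IndexReader
        Filter bestDriversFilter = new BestDriversFilter();
        return new CachingWrapperFilter( bestDriversFilter );
    }
}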


Hibernate Search will look for a @Factory annotated method and use it to build the filter instance. The factory must have a no-arg constructor.

Named filters come in handy where parameters have to be passed to the filter. For example a security filter might want to know which security level you want to apply:
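
For example (sketch):

fullTextQuery = s.createFullTextQuery( query, Driver.class );
fullTextQuery.enableFullTextFilter( "security" ).setParameter( "level", 5 );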


Each parameter name should have an associated setter on either the filter or filter factory of the targeted named filter definition.
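
A sketch of a filter factory receiving such a parameter (the level field and the filter logic are illustrative):

public class SecurityFilterFactory {
    private Integer level;

    /**
     * injected parameter
     */
    public void setLevel( Integer level ) {
        this.level = level;
    }

    @Key
    public FilterKey getKey() {
        StandardFilterKey key = new StandardFilterKey();
        key.addParameter( level );
        return key;
    }

    @Factory
    public Filter getFilter() {
        Query query = new TermQuery( new Term( "level", level.toString() ) );
        return new CachingWrapperFilter( new QueryWrapperFilter( query ) );
    }
}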


Note the method annotated @Key returning a FilterKey object. The returned object has a special contract: the key object must implement equals() / hashCode() so that 2 keys are equal if and only if the given Filter types are the same and the set of parameters are the same. In other words, 2 filter keys are equal if and only if the filters from which the keys are generated can be interchanged. The key object is used as a key in the cache mechanism.

@Key methods are needed only if:

  • you enabled the filter caching system (enabled by default)

  • your filter has parameters

In most cases, using the StandardFilterKey implementation will be good enough. It delegates the equals() / hashCode() implementation to each of the parameters' equals() and hashCode() methods.

As mentioned before, the defined filters are cached by default and the cache uses a combination of hard and soft references to allow disposal of memory when needed. The hard reference cache keeps track of the most recently used filters and transforms the least used ones into SoftReferences when needed. Once the limit of the hard reference cache is reached, additional filters are cached as SoftReferences. To adjust the size of the hard reference cache, use hibernate.search.filter.cache_strategy.size (defaults to 128). For advanced use of filter caching, you can implement your own FilterCachingStrategy. The classname is defined by hibernate.search.filter.cache_strategy.
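
For example, to double the default hard reference cache size you could set the following property (a configuration sketch):

hibernate.search.filter.cache_strategy.size = 256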

This filter caching mechanism should not be confused with caching the actual filter results. In Lucene it is common practice to wrap filters in a CachingWrapperFilter. The wrapper will cache the DocIdSet returned from the getDocIdSet(IndexReader reader) method to avoid expensive recomputation. It is important to mention that the computed DocIdSet is only cacheable for the same IndexReader instance, because the reader effectively represents the state of the index at the moment it was opened. The document list cannot change within an opened IndexReader. A different/new IndexReader instance, however, works potentially on a different set of Documents (either from a different index or simply because the index has changed), hence the cached DocIdSet has to be recomputed.

Hibernate Search also helps with this aspect of caching. Per default the cache flag of @FullTextFilterDef is set to FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS which will automatically cache the filter instance as well as wrap the specified filter around a Hibernate specific implementation of CachingWrapperFilter (org.hibernate.search.filter.CachingWrapperFilter). In contrast to Lucene's version of this class SoftReferences are used together with a hard reference count (see discussion about filter cache). The hard reference count can be adjusted using hibernate.search.filter.cache_docidresults.size (defaults to 5). The wrapping behaviour can be controlled using the @FullTextFilterDef.cache parameter. There are three different values for this parameter:

  • FilterCacheModeType.NONE: No filter instance and no result is cached by Hibernate Search. For every filter call, a new filter instance is created. This setting might be useful for rapidly changing data sets or heavily memory constrained environments.

  • FilterCacheModeType.INSTANCE_ONLY: The filter instance is cached and reused across concurrent Filter.getDocIdSet() calls. DocIdSet results are not cached. This setting is useful when a filter uses its own specific caching mechanism or the filter results change dynamically due to application specific events making DocIdSet caching in both cases unnecessary.

  • FilterCacheModeType.INSTANCE_AND_DOCIDSETRESULTS: Both the filter instance and the DocIdSet results are cached. This is the default value.

Last but not least - why should filters be cached? There are two areas where filter caching shines:

  • the system does not update the targeted entity index often (in other words, the IndexReader is reused a lot)

  • the Filter's DocIdSet is expensive to compute (compared to the time spent to execute the query)

It is possible, in a sharded environment, to execute queries on a subset of the available shards. This can be done in two steps:

  • create a sharding strategy that selects a subset of DirectoryProviders depending on some filter configuration

  • activate the proper filter at query time

Let's first look at an example of a sharding strategy that queries a specific customer shard if the customer filter is activated.

public class CustomerShardingStrategy implements IndexShardingStrategy {

    // stores DirectoryProviders in an array indexed by customerID
    private DirectoryProvider<?>[] providers;

    public void initialize(Properties properties, DirectoryProvider<?>[] providers) {
        this.providers = providers;
    }

    public DirectoryProvider<?>[] getDirectoryProvidersForAllShards() {
        return providers;
    }

    public DirectoryProvider<?> getDirectoryProviderForAddition(
            Class<?> entity, Serializable id, String idInString, Document document) {
        Integer customerID = Integer.parseInt(document.getField("customerID").stringValue());
        return providers[customerID];
    }

    public DirectoryProvider<?>[] getDirectoryProvidersForDeletion(
            Class<?> entity, Serializable id, String idInString) {
        return getDirectoryProvidersForAllShards();
    }

    /**
     * Optimization; don't search ALL shards and union the results; in this case, we
     * can be certain that all the data for a particular customer Filter is in a single
     * shard; simply return that shard by customerID.
     */
    public DirectoryProvider<?>[] getDirectoryProvidersForQuery(
            FullTextFilterImplementor[] filters) {
        FullTextFilter filter = getFilter(filters, "customer");
        if (filter == null) {
            return getDirectoryProvidersForAllShards();
        }
        else {
            return new DirectoryProvider[] { providers[Integer.parseInt(
                filter.getParameter("customerID").toString())] };
        }
    }

    private FullTextFilter getFilter(FullTextFilterImplementor[] filters, String name) {
        for (FullTextFilterImplementor filter: filters) {
            if (filter.getName().equals(name)) return filter;
        }
        return null;
    }
}

In this example, if the filter named customer is present, we make sure to only use the shard dedicated to this customer. Otherwise, we return all shards. A given sharding strategy can react to one or more filters and depends on their parameters.

The second step is simply to activate the filter at query time. While the filter can be a regular filter (as defined in Section 5.3, “Filters”) which also filters Lucene results after the query, you can make use of a special filter that will only be passed to the sharding strategy and otherwise ignored for the rest of the query. Simply use the ShardSensitiveOnlyFilter class when declaring your filter.

@Entity @Indexed
@FullTextFilterDef(name="customer", impl=ShardSensitiveOnlyFilter.class)
public class Customer {
   ...
}

FullTextQuery query = ftEm.createFullTextQuery(luceneQuery, Customer.class);
query.enableFullTextFilter("customer").setParameter("customerID", 5);
@SuppressWarnings("unchecked")
List<Customer> results = query.getResultList();

Note that by using the ShardSensitiveOnlyFilter, you do not have to implement any Lucene filter. Using filters and sharding strategy reacting to these filters is recommended to speed up queries in a sharded environment.

Faceted search is a technique which allows you to divide the results of a query into multiple categories. This categorisation includes the calculation of hit counts for each category and the ability to further restrict search results based on these facets (categories). Example 5.24, “Search for 'Hibernate Search' on Amazon” shows a faceting example. The search results in fifteen hits which are displayed on the main part of the page. The navigation bar on the left, however, shows the category Computers & Internet with its subcategories Programming, Computer Science, Databases, Software, Web Development, Networking and Home Computing. For each of these subcategories the number of books is shown matching the main search criteria and belonging to the respective subcategory. This division of the category Computers & Internet is one concrete search facet. Another one is for example the average customer review.


In Hibernate Search the classes QueryBuilder and FullTextQuery are the entry point into the faceting API. The former allows you to create faceting requests whereas the latter gives access to the so-called FacetManager. With the help of the FacetManager, faceting requests can be applied to a query and selected facets can be added to an existing query in order to refine search results. The following sections will describe the faceting process in more detail. The examples will use the entity Cd as shown in Example 5.25, “Entity Cd”:
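
A sketch of the Cd entity follows; the exact fields are illustrative, the important point being that faceted fields such as label and price are indexed un-tokenized:

Example 5.25. Entity Cd

@Entity
@Indexed
public class Cd {

    @Id
    @GeneratedValue
    private int id;

    @Fields( {
        @Field,
        @Field( name = "name_un_analyzed", index = Index.UN_TOKENIZED )
    } )
    private String name;

    @Field( index = Index.UN_TOKENIZED )
    @NumericField
    private int price;

    @Field( index = Index.UN_TOKENIZED )
    @DateBridge( resolution = Resolution.YEAR )
    private Date releaseYear;

    @Field( index = Index.UN_TOKENIZED )
    private String label;

    // constructors, getters and setters omitted
}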


The first step towards a faceted search is to create the FacetingRequest. Currently two types of faceting requests are supported. The first type is called discrete faceting and the second type range faceting. In the case of a discrete faceting request you specify on which index field you want to facet (categorize) and which faceting options to apply. An example of a discrete faceting request can be seen in Example 5.26, “Creating a discrete faceting request”:
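
The following sketch reconstructs such a request (the variable names are illustrative):

Example 5.26. Creating a discrete faceting request

QueryBuilder builder = fullTextSession.getSearchFactory()
    .buildQueryBuilder()
        .forEntity( Cd.class )
            .get();
FacetingRequest labelFacetingRequest = builder.facet()
    .name( "labelFaceting" )
    .onField( "label" )
    .discrete()
    .orderedBy( FacetSortOrder.COUNT_DESC )
    .includeZeroCounts( false )
    .maxFacetCount( 3 )
    .createFacetingRequest();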


When executing this faceting request a Facet instance will be created for each discrete value of the indexed field label. The Facet instance will record the actual field value including how often this particular field value occurs within the original query results. orderedBy, includeZeroCounts and maxFacetCount are optional parameters which can be applied to any faceting request. orderedBy allows you to specify in which order the created facets will be returned. The default is FacetSortOrder.COUNT_DESC, but you can also sort on the field value or the order in which ranges were specified. includeZeroCounts determines whether facets with a count of 0 will be included in the result (per default they are) and maxFacetCount allows you to limit the maximum number of facets returned.

Tip

At the moment there are several preconditions an indexed field has to meet in order to apply faceting on it. The indexed property must be of type String, Date or a subtype of Number and null values should be avoided. Furthermore the property has to be indexed with Index.UN_TOKENIZED and in case of a numeric property @NumericField needs to be specified.

The creation of a range faceting request is quite similar except that we have to specify ranges for the field values we are faceting on. A range faceting request can be seen in Example 5.27, “Creating a range faceting request” where three different price ranges are specified. below and above can only be specified once, but you can specify as many from - to ranges as you want. For each range boundary you can also specify via excludeLimit whether it is included in the range or not.
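
A sketch of such a request (the price boundaries are illustrative):

Example 5.27. Creating a range faceting request

QueryBuilder builder = fullTextSession.getSearchFactory()
    .buildQueryBuilder()
        .forEntity( Cd.class )
            .get();
FacetingRequest priceFacetingRequest = builder.facet()
    .name( "priceFaceting" )
    .onField( "price" )
    .range()
    .below( 1000 )
    .from( 1001 ).to( 1500 )
    .above( 1500 ).excludeLimit()
    .createFacetingRequest();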


Last but not least, you can apply any of the returned Facets as additional criteria on your original query in order to implement a "drill-down" functionality. For this purpose FacetSelection can be utilized. FacetSelections are available via the FacetManager and allow you to select a facet as query criteria (selectFacets), remove a facet restriction (deselectFacets), remove all facet restrictions (clearSelectedFacets) and retrieve all currently selected facets (getSelectedFacets). Example 5.29, “Restricting query results via the application of a FacetSelection” shows an example.
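
A sketch of the drill-down flow, reusing the priceFacetingRequest from above:

Example 5.29. Restricting query results via the application of a FacetSelection

// apply the faceting request to a full-text query
FullTextQuery fullTextQuery = fullTextSession.createFullTextQuery( luceneQuery, Cd.class );
FacetManager facetManager = fullTextQuery.getFacetManager();
facetManager.enableFaceting( priceFacetingRequest );

// retrieve the computed facets
List<Facet> facets = facetManager.getFacets( "priceFaceting" );

// restrict the original query to the first price range ("drill-down")
FacetSelection facetSelection = facetManager.getFacetGroup( "priceFaceting" );
facetSelection.selectFacets( facets.get( 0 ) );
List<Cd> cds = fullTextQuery.list();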


Query performance depends on several criteria:

  • the Lucene query itself: read the literature on this subject

  • the number of objects loaded: use pagination or index projection where needed

  • the way Hibernate Search interacts with the Lucene readers: define the appropriate reader strategy

  • caching frequently extracted values from the index: see Section 5.5.1, “Caching index values: FieldCache”

The primary function of a Lucene index is to identify matches to your queries. After the query is performed, the results must be analyzed to extract useful information: typically Hibernate Search might need to extract the class type and the primary key.

Extracting the needed values from the index has a performance cost, which in some cases might be very low and not noticeable, but in some other cases might be a good candidate for caching.

What exactly is needed depends on the kind of Projections being used (see Section 5.1.3.5, “Projection”), and in some cases the class type is not needed as it can be inferred from the query context or other means.

Using the @CacheFromIndex annotation you can experiment with different kinds of caching of the main metadata fields required by Hibernate Search:



import static org.hibernate.search.annotations.FieldCacheType.CLASS;
import static org.hibernate.search.annotations.FieldCacheType.ID;

@Indexed
@CacheFromIndex( { CLASS, ID } )
public class Essay {
    ...
}

It is currently possible to cache Class types and IDs using this annotation:

  • CLASS: Hibernate Search will use a Lucene FieldCache to improve performance of the class type extraction from the index.

    This value is enabled by default, and is what Hibernate Search will apply if you don't specify the @CacheFromIndex annotation.

  • ID: extracting the primary identifier will use a cache. This will likely provide the best performing queries, but will consume much more memory, which in turn might reduce performance.

Note

Measure the performance and memory consumption impact after warmup (executing some queries): enabling Field Caches is likely to improve performance but this is not always the case.

Using a FieldCache has two downsides to consider:

  • Memory usage: these caches can be quite memory hungry. Typically the CLASS cache has lower requirements than the ID cache.

  • Index warmup: when using field caches, the first query on a new index or segment will be slower than when you don't have caching enabled.

With some queries the class type won't be needed at all; in that case, even if you enabled the CLASS field cache, it might not be used. For example, if you are targeting a single class, all returned values will obviously be of that type (this is evaluated at each query execution).

For the ID FieldCache to be used, the ids of targeted entities must use a TwoWayFieldBridge (as all built-in bridges do), and all types being loaded in a specific query must use the same field name for the id and have ids of the same type (this is evaluated at each query execution).