Preface
Full-text search engines like Apache Lucene are powerful technologies for adding efficient free-text search capabilities to applications. However, Lucene suffers from several mismatches when dealing with object domain models: among other things, indexes have to be kept up to date, and mismatches between the index structure and the domain model, as well as query mismatches, have to be avoided.
Hibernate Search addresses these shortcomings: it indexes your domain model with the help of a few annotations, takes care of database/index synchronization and brings back regular managed objects from free text queries.
To achieve this, Hibernate Search combines the power of Hibernate ORM and Apache Lucene/Elasticsearch/OpenSearch.
1. Compatibility
1.1. Dependencies
| | Version | Note |
|---|---|---|
| Java Runtime | 11, 17 or 21 | |
| Hibernate ORM (for the Hibernate ORM mapper) | 6.6.3.Final | |
| Jakarta Persistence (for the Hibernate ORM mapper) | 3.1 | |
| Apache Lucene (for the Lucene backend) | 9.11.1 | |
| Elasticsearch server (for the Elasticsearch backend) | 7.10+ or 8.x | Most older minor versions (e.g. 7.11 or 8.0) are not given priority for bugfixes and new features. |
| OpenSearch server (for the Elasticsearch backend) | 1.3 or 2.x | Other minor versions may work but are not given priority for bugfixes and new features. |
Find more information for all versions of Hibernate Search on our compatibility matrix. The compatibility policy may also be of interest.
If you get Hibernate Search from Maven, it is recommended to import the Hibernate Search BOM as part of your dependency management to keep all its artifact versions aligned.
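For example (a sketch assuming the BOM artifact org.hibernate.search:hibernate-search-bom; use the version matching your Hibernate Search release):
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.hibernate.search</groupId>
            <artifactId>hibernate-search-bom</artifactId>
            <version>7.2.2.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>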
Elasticsearch 7.11+ licensing
While Elasticsearch up to 7.10 was distributed under the Apache License 2.0, be aware that Elasticsearch 7.11 and later are distributed under the Elastic License and the SSPL, which are not considered open-source by the Open Source Initiative. Only the low-level Java REST client, which Hibernate Search depends on, remains open-source.
OpenSearch
While it historically targeted Elastic’s Elasticsearch distribution, Hibernate Search is also compatible with OpenSearch and regularly tested against it; see Compatibility for more information. Every section of this documentation referring to Elasticsearch is also relevant for the OpenSearch distribution.
1.2. Framework support
1.2.1. Quarkus
Quarkus has an official extension for Hibernate Search with Hibernate ORM using the Elasticsearch backend, which is a tight integration with additional features, different dependencies, and different configuration properties.
As your first step to using Hibernate Search within Quarkus, we recommend you follow Quarkus’s Hibernate Search Guide: it is a great hands-on introduction to Hibernate Search, and it covers the specifics of Quarkus.
1.2.2. WildFly
WildFly includes modules for Hibernate Search with Hibernate ORM using either the Lucene backend or the Elasticsearch backend.
To start using Hibernate Search within WildFly, see the Hibernate Search section in the WildFly Developer Guide: it covers all the specifics of WildFly.
1.2.3. Spring Boot
Hibernate Search can easily be integrated into a Spring Boot application. Just read about Spring Boot’s specifics below, then follow the getting started guide.
Configuration properties
application.properties/application.yaml are Spring Boot configuration files, not JPA or Hibernate Search configuration files. Adding Hibernate Search properties starting with hibernate.search. directly in those files will not work.
- When integrating Hibernate Search with Hibernate ORM: prefix your Hibernate Search properties with spring.jpa.properties., so that Spring Boot passes the properties along to Hibernate ORM, which will pass them along to Hibernate Search. For example:
spring.jpa.properties.hibernate.search.backend.hosts = elasticsearch.mycompany.com
- When using the Standalone POJO Mapper: pass properties programmatically to SearchMappingBuilder#property.
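For example, mirroring the startup example shown later in Startup:
CloseableSearchMapping searchMapping = SearchMapping.builder( AnnotatedTypeSource.fromClasses( Book.class ) )
        .property( "hibernate.search.backend.hosts", "elasticsearch.mycompany.com" )
        .build();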
Dependency versions
Spring Boot automatically sets dependency versions without your knowledge. While this is ordinarily a good thing, from time to time Spring Boot's dependency versions will be a little out of date. Thus, it is recommended to override Spring Boot's defaults, at least for some key dependencies.
With Maven, there are a few ways to override these versions depending on how Spring is added to the application.
If your application’s POM file uses spring-boot-starter-parent as its parent POM, then simply adding version properties to your POM’s <properties> should be enough:
<properties>
<hibernate.version>6.6.3.Final</hibernate.version>
<elasticsearch-client.version>8.15.4</elasticsearch-client.version>
<!-- ... plus any other properties of yours ... -->
</properties>
If, after setting the properties above, you are still getting the same versions of the libraries, check whether the property names in Spring Boot’s BOM have changed, and if so use the new property names.
Alternatively, if either spring-boot-dependencies or spring-boot-starter-parent is imported into the dependency management (<dependencyManagement>), then the versions can be overridden either by importing a BOM listing the dependencies to override, or by explicitly listing a dependency with the version to be used:
<dependencyManagement>
<dependencies>
<!--
Overriding Hibernate ORM version by importing the BOM.
Alternatively, can be done by adding specific dependencies
as shown below for Elasticsearch dependencies.
-->
<dependency>
<groupId>org.hibernate.orm</groupId>
<artifactId>hibernate-platform</artifactId>
<version>${version.org.hibernate.orm}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>3.3.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!--
Since there is no BOM for the Elasticsearch REST client,
these dependencies have to be listed explicitly:
-->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-client</artifactId>
<version>8.15.4</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-client-sniffer</artifactId>
<version>8.15.4</version>
</dependency>
<!-- Other dependency management entries -->
</dependencies>
</dependencyManagement>
For other build tools refer to their documentation for details.
If, after setting the properties above, you still have problems, inspecting the resolved dependencies (e.g. with mvn dependency:tree) can help you find out which versions are actually used.
Application hanging on startup
Spring Boot 2.3.x and above are affected by a bug that causes the application to hang on startup when using Hibernate Search, particularly when using custom components (custom bridges, analysis configurers, …).
The problem, which is not limited to Hibernate Search, has been reported, but hadn’t been fixed yet as of Spring Boot 2.5.1.
As a workaround, you can set the property spring.data.jpa.repositories.bootstrap-mode to deferred or, if that doesn’t work, default. Interestingly, using @EnableJpaRepositories(bootstrapMode = BootstrapMode.DEFERRED) has been reported to work even in situations where setting spring.data.jpa.repositories.bootstrap-mode to deferred didn’t work.
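For example, a minimal sketch with Spring's standard annotations (BootstrapMode comes from Spring Data's org.springframework.data.repository.config package):
@SpringBootApplication
@EnableJpaRepositories(bootstrapMode = BootstrapMode.DEFERRED)
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run( MyApplication.class, args );
    }
}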
Alternatively, if you do not need dependency injection in your custom components,
you can refer to those components with the prefix constructor:
so that Hibernate Search doesn’t even try to use Spring to retrieve the components,
and thus avoids the deadlock in Spring.
See this section for more information.
Spring Boot’s Elasticsearch client and auto-configuration
As you may know, Spring Boot includes "auto-configuration" that triggers as soon as a dependency is detected in the classpath.
This may lead to problems in some cases when dependencies are used by the application, but not through Spring Boot.
In particular, Hibernate Search transitively brings in a dependency to Elasticsearch’s low-level REST Client.
Spring Boot, through ElasticsearchRestClientAutoConfiguration, will automatically set up an Elasticsearch REST client targeting (by default) http://localhost:9200 as soon as it detects the Elasticsearch REST client JAR on the classpath. If your Elasticsearch cluster is not reachable at http://localhost:9200, this might lead to errors on startup.
To get rid of these errors, either configure Spring’s Elasticsearch client manually, or disable this specific auto-configuration, as shown below.
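For example, one way to disable it is to exclude the auto-configuration class mentioned above (a sketch; the spring.autoconfigure.exclude configuration property achieves the same thing):
@SpringBootApplication(exclude = ElasticsearchRestClientAutoConfiguration.class)
public class MyApplication {
    // ...
}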
Spring Boot’s Elasticsearch client is completely separate from Hibernate Search: the configuration of one won’t affect the other.
1.2.4. Other
If your framework of choice is not mentioned in the previous sections, don’t worry: Hibernate Search works just fine with plenty of other frameworks.
Just follow the getting started guide to try it out.
2. Getting started with Hibernate Search
To get started with Hibernate Search, check out the following guides:
-
If your entities are defined in Hibernate ORM, see Getting started with Hibernate Search in Hibernate ORM.
-
If your entities are not defined in Hibernate ORM, see Getting started with Hibernate Search’s Standalone POJO Mapper instead.
3. Migrating
If you are upgrading an existing application from an earlier version of Hibernate Search to the latest release, make sure to check out the migration guide.
To Hibernate Search 5 users
If you pull our artifacts from a Maven repository, and you come from Hibernate Search 5, be aware that just bumping the version number will not be enough: in particular, the group IDs changed (from org.hibernate to org.hibernate.search). Additionally, be aware that a lot of APIs have changed, some only because of a package change, others because of more fundamental changes (like moving away from using Lucene types in Hibernate Search APIs). For that reason, you are encouraged to migrate first to Hibernate Search 6.0 using the 6.0 migration guide, and only then to later versions (which will be significantly easier).
4. Concepts
4.1. Full-text search
Full-text search is a set of techniques for searching, in a corpus of text documents, the documents that best match a given query.
The main difference with traditional search — for example in an SQL database — is that the stored text is not considered as a single block of text, but as a collection of tokens (words).
Hibernate Search relies on either Apache Lucene or Elasticsearch to implement full-text search. Since Elasticsearch uses Lucene internally, they share a lot of characteristics and their general approach to full-text search.
To simplify, these search engines are based on the concept of inverted indexes: a dictionary where the key is a token (word) found in a document, and the value is the list of identifiers of every document containing this token.
Still simplifying, once all documents are indexed, searching for documents involves three steps:
-
extracting tokens (words) from the query;
-
looking up these tokens in the index to find matching documents;
-
aggregating the results of the lookups to produce a list of matching documents.
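To make this concrete, here is a toy Java sketch of an inverted index and of the three steps above; real engines use far more sophisticated data structures and algorithms:
// Toy inverted index: token -> identifiers of the documents containing that token.
// (java.util.* assumed imported)
Map<String, Set<Integer>> invertedIndex = new HashMap<>();

void index(int docId, String text) {
    for ( String token : text.toLowerCase().split( "\\s+" ) ) {
        invertedIndex.computeIfAbsent( token, t -> new HashSet<>() ).add( docId );
    }
}

Set<Integer> search(String query) {
    Set<Integer> hits = new HashSet<>();
    for ( String token : query.toLowerCase().split( "\\s+" ) ) { // 1. extract tokens from the query
        hits.addAll( invertedIndex.getOrDefault( token, Set.of() ) ); // 2. look up matching documents
    }
    return hits; // 3. aggregate the lookups into the set of matching documents
}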
Lucene and Elasticsearch are not limited to just text search: numeric data is also supported, including integers, doubles, longs, dates, etc. These types are indexed and queried using a slightly different approach, which obviously does not involve text processing.
4.2. Entity types
When it comes to the domain model of applications, Hibernate Search distinguishes between types (Java classes) that are considered entities, and those that are not.
The defining characteristic of entity types in Hibernate Search is that their instances have a distinct lifecycle: an entity instance may be saved into a datastore, or retrieved from it, without requiring the saving or retrieval of an instance of another type. For that purpose, each entity instance is assumed to carry an immutable, unique identifier.
These characteristics allow Hibernate Search to map entity types to indexes, but only entity types. "Embeddable" types that are referenced from or contained within an entity, but whose lifecycle is completely tied to that entity, cannot be mapped to an index.
Multiple aspects of Hibernate Search involve the concept of entity types:
-
Each entity type has an entity name, distinct from the type name. E.g. for a class named com.acme.Book, the entity name could be Book (the default), or any arbitrarily chosen string.
-
Properties pointing to an entity type (called associations) have specific mechanics; in particular, in order to handle reindexing, Hibernate Search needs to know about the inverse side of associations.
-
For the purposes of change tracking when reindexing, (e.g. in indexing plans), entity types represent the smallest scope Hibernate Search considers.
This means the paths representing "changed properties" in Hibernate Search always have an entity as their starting point, and the components within these paths never reach into another entity (but may point to one, when an association changes).
-
Hibernate Search may need additional configuration to enable loading of entity types from an external datastore, be it to load entities matching a query from an external source or to load all entity instances from an external source for full reindexing.
4.3. Mapping
Applications targeted by Hibernate Search use an entity-based model to represent data. In this model, each entity is a single object with a few properties of atomic types (String, Integer, LocalDate, …). Each entity can contain non-root aggregates ("embeddable" types), and each can also have multiple associations to one or even many other entities.
By contrast, Lucene and Elasticsearch work with documents. Each document is a collection of "fields", each field being assigned a name — a unique string — and a value — which can be text, but also numeric data such as an integer or a date. Fields also have a type, which not only determines the type of values (text/numeric), but more importantly the way this value will be stored: indexed, stored, with doc values, etc. Each document can contain nested aggregates ("objects"/"nested documents"), but there cannot really be associations between top-level documents.
Thus:
-
Entities are organized as a graph, where each node is an entity and each association is an edge.
-
Documents are organized, at best, as a collection of trees, where each tree is a document, optionally with nested documents.
There are multiple mismatches between the entity model and the document model: simple property types vs. more complex field types, associations vs. no associations, graph vs. collection of trees.
The goal of mapping, in Hibernate Search, is to resolve these mismatches by defining how to transform one or more entities into a document, and how to resolve a search hit back into the original entity. This is the main added value of Hibernate Search, the basis for everything else from indexing to the various search DSLs.
Mapping is usually configured using annotations in the entity model, but this can also be achieved using a programmatic API. To learn more about how to configure mapping, see Mapping entities to indexes.
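For example, a minimal annotation-based mapping could look like this (a sketch; the field choices and analyzer name are illustrative):
@Entity
@Indexed // map this entity type to an index
public class Book {

    @Id
    private Long id; // used as the document identifier

    @FullTextField(analyzer = "english") // mapped to a full-text (tokenized) field
    private String title;

    @GenericField // mapped to a non-tokenized field holding numeric data
    private Integer pageCount;

    // getters and setters ...
}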
To learn how to index the resulting documents, see Indexing entities (hint: for the Hibernate ORM integration, it’s automatic).
To learn how to search with an API that takes advantage of the mapping to be closer to the entity model, in particular by returning hits as entities instead of just document identifiers, see Searching.
4.4. Binding
While the mapping definition is declarative, these declarations need to be interpreted and actually applied to the domain model.
That’s what Hibernate Search calls "binding":
during startup, a given mapping instruction (e.g. @GenericField
) will result in a "binder"
being instantiated and called, giving it an opportunity to inspect the part of the domain model it’s applied to
and to "bind" (assign) a component to that part of the model — for example a "bridge",
responsible for extracting data from an entity during indexing.
Hibernate Search comes with binders and bridges for many common use cases, and also provides the ability to plug in custom binders and bridges.
For more information, in particular on how to plug in custom binders and bridges, see Binding and bridges.
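For instance, a ValueBridge (one of the simplest kinds of bridge) converts a property value to an indexed value. A sketch, wired through @KeywordField's valueBridge attribute:
public class BooleanAsStringBridge implements ValueBridge<Boolean, String> {
    @Override
    public String toIndexedValue(Boolean value, ValueBridgeToIndexedValueContext context) {
        // Convert the entity-side value into the index-side value.
        return value == null ? null : ( value ? "yes" : "no" );
    }
}

// Usage on an entity property:
// @KeywordField(valueBridge = @ValueBridgeRef(type = BooleanAsStringBridge.class))
// private Boolean published;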
4.5. Analysis
As mentioned in Full-text search, the full-text engine works on tokens, which means text has to be processed both when indexing (document processing, to build the token → document index) and when searching (query processing, to generate a list of tokens to look up).
However, the processing is not just about "tokenizing". Index lookups are exact lookups, which means that looking up Great (capitalized) will not return documents containing only great (all lowercase). An extra step is performed when processing text to address this caveat: token filtering, which normalizes tokens. Thanks to that "normalization", Great will be indexed as great, so that an index lookup for the query great will match as expected.
In the Lucene world (Lucene, Elasticsearch, Solr, …), text processing during both the indexing and searching phases is called "analysis" and is performed by an "analyzer".
The analyzer is made up of three types of components, which will each process the text successively in the following order:
-
Character filter: transforms the input characters. Replaces, adds or removes characters.
-
Tokenizer: splits the text into several words, called "tokens".
-
Token filter: transforms the tokens. Replaces, adds or removes characters in a token, derives new tokens from the existing ones, removes tokens based on some condition, …
The tokenizer usually splits on whitespace (though there are other options). Token filters are usually where customization takes place. They can remove accented characters, remove meaningless suffixes (-ing, -s, …) or tokens (a, the, …), replace tokens with a chosen spelling (wi-fi ⇒ wifi), etc.
Character filters, though useful, are rarely used, because they have no knowledge of token boundaries. Unless you know what you are doing, you should generally favor token filters.
In some cases, it is necessary to index text in one block, without any tokenization:
-
For some types of text, such as SKUs or other business codes, tokenization simply does not make sense: the text is a single "keyword".
-
For sorts by field value, tokenization is not necessary. It is also forbidden in Hibernate Search due to performance issues; only non-tokenized fields can be sorted on.
To address these use cases, a special type of analyzer, called "normalizer", is available. Normalizers are simply analyzers that are guaranteed not to use a tokenizer: they can only use character filters and token filters.
In Hibernate Search, analyzers and normalizers are referenced by their name, for example when defining a full-text field. Analyzers and normalizers have two separate namespaces.
Some names are already assigned to built-in analyzers (in Elasticsearch in particular), but it is possible (and recommended) to assign names to custom analyzers and normalizers, assembled using built-in components (tokenizers, filters) to address your specific needs.
Each backend exposes its own APIs to define analyzers and normalizers, and generally to configure analysis. See the documentation of each backend for more information.
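For instance, with the Lucene backend, a custom analyzer and normalizer could be defined through an analysis configurer (a sketch using the Lucene backend's LuceneAnalysisConfigurer; the component names are standard Lucene factory names):
public class MyLuceneAnalysisConfigurer implements LuceneAnalysisConfigurer {
    @Override
    public void configure(LuceneAnalysisConfigurationContext context) {
        context.analyzer( "english" ).custom() // a custom analyzer named "english"
                .tokenizer( "standard" )
                .tokenFilter( "lowercase" )
                .tokenFilter( "asciiFolding" );

        context.normalizer( "lowercase" ).custom() // a normalizer: no tokenizer allowed
                .tokenFilter( "lowercase" )
                .tokenFilter( "asciiFolding" );
    }
}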
4.6. Commit and refresh
In order to get the best throughput when indexing and when searching, both Elasticsearch and Lucene rely on "buffers" when writing to and reading from the index:
-
When writing, changes are not directly written to the index, but to an "index writer" that buffers changes in-memory or in temporary files.
The changes are "pushed" to the actual index when the writer is committed. Until the commit happens, uncommitted changes are in an "unsafe" state: if the application crashes or if the server suffers from a power loss, uncommitted changes will be lost.
-
When reading, e.g. when executing a search query, data is not read directly from the index, but from an "index reader" that exposes a view of the index as it was at some point in the past.
The view is updated when the reader is refreshed. Until the refresh happens, results of search queries might be slightly out of date: documents added since the last refresh will be missing, documents deleted since the last refresh will still be there, etc.
Unsafe changes and out-of-sync indexes are obviously undesirable, but they are a trade-off that improves performance.
Different factors influence when refreshes and commits happen:
- Listener-triggered indexing and explicit indexing will, by default, require that a commit of the index writer is performed after each set of changes, meaning the changes are safe after the Hibernate ORM transaction commit returns (for the Hibernate ORM integration) or the SearchSession’s close() method returns (for the Standalone POJO Mapper). However, no refresh is requested by default, meaning the changes may only be visible at a later time, when the backend decides to refresh the index reader. This behavior can be customized by setting a different synchronization strategy.
- The mass indexer will not require any commit or refresh until the very end of mass indexing, to maximize indexing throughput.
- Whenever there are no particular commit or refresh requirements, backend defaults will apply: see here for Lucene.
- A commit may be forced explicitly through the flush() API.
- A refresh may be forced explicitly through the refresh() API.
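For example, with the Hibernate ORM integration (a sketch; the workspace is obtained from the SearchSession):
SearchSession searchSession = /* ... */;
SearchWorkspace workspace = searchSession.workspace( Book.class );
workspace.flush();   // force a commit: pending changes become safe
workspace.refresh(); // force a refresh: changes become visible to search queries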
Even though we use the word "commit", this is not the same concept as a commit in relational database transactions: there is no transaction and no "rollback" is possible. There is no concept of isolation, either. After a refresh, all changes to the index are taken into account: those committed to the index, but also those that are still buffered in the index writer. For this reason, commits and refreshes can be treated as completely orthogonal concepts: certain setups will occasionally lead to committed changes not being visible in search queries, while others will allow even uncommitted changes to be visible in search queries.
4.7. Sharding and routing
Sharding consists in splitting index data into multiple "smaller indexes", called shards, in order to improve performance when dealing with large amounts of data.
In Hibernate Search, similarly to Elasticsearch, another concept is closely related to sharding: routing. Routing consists in resolving a document identifier, or generally any string called a "routing key", into the corresponding shard.
When indexing:
-
A document identifier and optionally a routing key are generated from the indexed entity.
-
The document, along with its identifier and optionally its routing key, is passed to the backend.
-
The backend "routes" the document to the correct shard, and adds the routing key (if any) to a special field in the document (so that it’s indexed).
-
The document is indexed in that shard.
When searching:
-
The search query can optionally be passed one or more routing keys.
-
If no routing key is passed, the query will be executed on all shards.
-
If one or more routing keys are passed:
-
The backend resolves these routing keys into a set of shards, and the query will be executed on those shards only, ignoring the other shards.
-
A filter is added to the query so that only documents indexed with one of the given routing keys are matched.
Sharding, then, can be leveraged to boost performance in two ways:
-
When indexing: a sharded index can spread the "stress" onto multiple shards, which can be located on different disks (Lucene) or different servers (Elasticsearch).
-
When searching: if one property, let’s call it category, is often used to select a subset of documents, this property can be defined as a routing key in the mapping, so that it’s used to route documents instead of the document ID. As a result, documents with the same value for category will be indexed in the same shard. Then when searching, if a query already filters documents so that it is known that the hits will all have the same value for category, the query can be manually routed to the shards containing documents with this value, and the other shards can be ignored.
To enable sharding, some configuration is required:
-
The backends require explicit configuration: see here for Lucene and here for Elasticsearch.
-
In most cases, document IDs are used to route documents to shards by default. This does not allow taking advantage of routing when searching, which requires multiple documents to share the same routing key. Applying routing to a search query in that case will return at most one result. To explicitly define the routing key to assign to each document, assign routing bridges to your entities.
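For illustration, a routing bridge deriving the routing key from a category property could look like this (a sketch of the RoutingBridge contract; it must be applied to the entity through a routing binder, and it assumes the category of a book never changes):
public class BookCategoryRoutingBridge implements RoutingBridge<Book> {

    @Override
    public void route(DocumentRoutes routes, Object entityIdentifier, Book entity,
            RoutingBridgeRouteContext context) {
        // Route the document to the shard corresponding to the category.
        routes.addRoute().routingKey( entity.getCategory() );
    }

    @Override
    public void previousRoutes(DocumentRoutes routes, Object entityIdentifier, Book entity,
            RoutingBridgeRouteContext context) {
        // Assumption: the category never changes, so previous routes equal current routes.
        routes.addRoute().routingKey( entity.getCategory() );
    }
}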
Sharding is static by nature: each index is expected to have the same shards, with the same identifiers, from one boot to the other. Changing the number of shards or their identifiers will require full reindexing.
5. Architecture
5.1. Components of Hibernate Search
From the user’s perspective, Hibernate Search consists of two components:
- Mapper
-
The mapper "maps" the user model to an index model, and provide APIs consistent with the user model to perform indexing and searching.
Most applications rely on the Hibernate ORM mapper, which offers the ability to index properties of Hibernate ORM entities, but there is also a Standalone POJO mapper that can be used without Hibernate ORM.
The mapper is configured partly through annotations on the domain model, and partly through configuration properties.
- Backend
-
The backend is the abstraction over the full-text engines, where "things get done". It implements generic indexing and searching interfaces for use by the mapper through "index managers", each providing access to one index.
For instance the Lucene backend delegates to the Lucene library, and the Elasticsearch backend delegates to a remote Elasticsearch cluster.
The backend is configured partly by the mapper, which tells the backend which indexes must exist and what fields they must have, and partly through configuration properties.
The mapper and the backend work together to provide three main features:
- Mass indexing
-
This is how Hibernate Search rebuilds indexes from zero based on the content of a database.
The mapper queries the database to retrieve the identifier of every entity, then processes these identifiers in batches, loading the entities then processing them to generate documents that are sent to the backend for indexing. The backend puts the document in an internal queue, and will index documents in batches, in background processes, notifying the mapper when it’s done.
See Indexing a large amount of data with the
MassIndexer
for details. - Explicit and listener-triggered indexing
-
Explicit and listener-triggered indexing rely on indexing plans (SearchIndexingPlan) to index specific entities as a result of limited changes. With explicit indexing, the caller explicitly passes information about changes on entities to an indexing plan; with listener-triggered indexing, entity changes are detected transparently by the Hibernate ORM integration (with a few exceptions) and added to the indexing plan automatically.
Listener-triggered indexing only makes sense in the context of the Hibernate ORM integration; there is no such feature available for the Standalone POJO Mapper. In both cases, the indexing plan will deduce from those changes whether entities need to be reindexed, be it the changed entity itself or other entities that embed the changed entity in their index.
Upon transaction commit, changes in the indexing plan are processed (either in the same thread or in a background process, depending on the coordination strategy), and documents are generated, then sent to the backend for indexing. The backend puts the documents in an internal queue, and will index documents in batches, in background processes, notifying the mapper when it’s done.
See Implicit, listener-triggered indexing for details.
- Searching
-
This is how Hibernate Search provides ways to query an index.
The mapper exposes entry points to the search DSL, allowing selection of entity types to query. When one or more entity types are selected, the mapper delegates to the corresponding index managers to provide a Search DSL and ultimately create the search query. Upon query execution, the backend submits a list of entity references to the mapper, which loads the corresponding entities. The entities are then returned by the query.
See Searching for details.
5.2. Examples of architectures
5.2.1. Overview
Architecture | Single-node with Lucene | No coordination with Elasticsearch | Outbox polling with Elasticsearch
---|---|---|---
Compatible mappers | Hibernate ORM mapper, Standalone POJO Mapper | Hibernate ORM mapper, Standalone POJO Mapper | Hibernate ORM mapper only
Application topology | Single-node | Single-node or multi-node | Single-node or multi-node
Extra bits to maintain | Indexes on filesystem | Elasticsearch cluster | Elasticsearch cluster, plus outbox tables in the database
Guarantee of index updates | Non-transactional, after the database transaction / SearchSession’s close() returns | Non-transactional, after the database transaction / SearchSession’s close() returns | Transactional: change events are persisted in the same transaction as the entity changes
Visibility of index updates | Immediate (~milliseconds) | Near-real-time (~1 second) | Near-real-time (~1 second or more)
Native features | Mostly for experts | For anyone | For anyone
Overhead for application threads | Reindexing is done in application threads | Reindexing is done in application threads | Minimal: application threads only append events to the outbox
Overhead for the database | Entity loading during reindexing | Entity loading during reindexing | Entity loading during reindexing, plus outbox polling
Impact on database schema | None | None | Additional outbox tables
Limitations | Listener-triggered indexing ignores JPQL/SQL queries and asymmetric association updates; out-of-sync indexes possible in rare situations (concurrent updates or backend errors) | Same as single-node with Lucene | Listener-triggered indexing ignores JPQL/SQL queries and asymmetric association updates; no other known limitation
5.2.2. Single-node application with the Lucene backend
Description
With the Lucene backend, indexes are local to a given application node (JVM). They are accessed through direct calls to the Lucene library, without going through the network.
This mode is only relevant to single-node applications.
Pros and cons
Pros:
-
Simplicity: no external services are required, everything lives on the same server.
-
Immediate visibility (~milliseconds) of index updates. While other architectures can perform comparably well for most use cases, a single-node, Lucene backend is the best way to implement indexing if you need changes to be visible immediately after the database changes.
Cons:
-
Without coordination, backend errors during indexing may lead to out-of-sync indexes.
-
Not so easy to extend: experienced developers can access a lot of Lucene features, even those that are not exposed by Hibernate Search, by providing native Lucene objects; however, Lucene APIs are not very easy to figure out for developers unfamiliar with Lucene. If you’re interested, see for example Query-based predicates.
-
Overhead for application threads: reindexing is done directly in application threads, and it may require additional time to load data that must be indexed from the database. Depending on the amount of data to load, this may increase the application’s latency and/or decrease its throughput.
-
No horizontal scalability: there can only be one application node, and all indexes need to live on the same server.
Getting started
To implement this architecture, use the following Maven dependencies:
- With the Hibernate ORM integration
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-mapper-orm</artifactId>
<version>7.2.2.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-backend-lucene</artifactId>
<version>7.2.2.Final</version>
</dependency>
- With the Standalone POJO Mapper (no Hibernate ORM)
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-mapper-pojo-standalone</artifactId>
<version>7.2.2.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-backend-lucene</artifactId>
<version>7.2.2.Final</version>
</dependency>
5.2.3. Single-node or multi-node application, without coordination and with the Elasticsearch backend
Description
With the Elasticsearch backend, indexes are not tied to the application node. They are managed by a separate cluster of Elasticsearch nodes, and accessed through calls to REST APIs.
Thus, it is possible to set up multiple application nodes in such a way that they all perform index updates and search queries independently, without coordinating with each other.
The Elasticsearch cluster may be a single node living on the same server as the application.
Pros and cons
Pros:
-
Easy to extend: you can easily access most Elasticsearch features, even those that are not exposed by Hibernate Search, by providing your own JSON. See for example JSON-defined predicates, or JSON-defined aggregations, or leveraging advanced features with JSON manipulation.
-
Horizontal scalability of the indexes: you can size the Elasticsearch cluster according to your needs. See "Scalability and resilience" in the Elasticsearch documentation.
-
Horizontal scalability of the application: you can have as many instances of the application as you need (though high concurrency increases the likelihood of some problems with this architecture; see "Cons" below).
Cons:
-
Without coordination, backend errors during indexing may lead to out-of-sync indexes.
-
Need to manage an additional service: the Elasticsearch cluster.
-
Overhead for application threads: reindexing is done directly in application threads, and it may require additional time to load data that must be indexed from the database. Depending on the amount of data to load, this may increase the application’s latency and/or decrease its throughput.
-
Delayed visibility (~1 second) of index updates (near-real-time). While changes can be made visible as soon as possible after the database changes, Elasticsearch is near-real-time by nature, and won’t perform very well if you need changes to be visible immediately after the database changes.
Getting started
To implement this architecture, use the following Maven dependencies:
- With the Hibernate ORM integration
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-mapper-orm</artifactId>
<version>7.2.2.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-backend-elasticsearch</artifactId>
<version>7.2.2.Final</version>
</dependency>
- With the Standalone POJO Mapper (no Hibernate ORM)
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-mapper-pojo-standalone</artifactId>
<version>7.2.2.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-backend-elasticsearch</artifactId>
<version>7.2.2.Final</version>
</dependency>
5.2.4. Multi-node application with outbox polling and Elasticsearch backend
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
Description
With Hibernate Search’s outbox-polling
coordination strategy,
entity change events are not processed immediately in the ORM session where they arise,
but are pushed to an outbox table in the database.
A background process polls that outbox table for new events, and processes them asynchronously, updating the indexes as necessary. Since that queue can be sharded, multiple application nodes can share the workload of indexing.
This requires the Elasticsearch backend so that indexes are not tied to a single application node and can be updated or queried from multiple application nodes.
Pros and cons
Pros:
-
Safest:
-
the possibility of out-of-sync indexes caused by indexing errors in the backend, which affects other architectures, is eliminated here, because entity change events are persisted in the same transaction as the entity changes, allowing retries for as long as necessary.
-
the possibility of out-of-sync indexes caused by concurrent updates, which affects other architectures, is eliminated here, because each entity instance is reloaded from the database within a new transaction before being re-indexed.
-
-
Easy to extend: you can easily access most Elasticsearch features, even those that are not exposed by Hibernate Search, by providing your own JSON. See for example JSON-defined predicates, or JSON-defined aggregations, or leveraging advanced features with JSON manipulation.
-
Minimal overhead for application threads: application threads only need to append events to the queue, they don’t perform reindexing themselves.
-
Horizontal scalability of the indexes: you can size the Elasticsearch cluster according to your needs. See "Scalability and resilience" in the Elasticsearch documentation.
-
Horizontal scalability of the application: you can have as many instances of the application as you need.
Cons:
-
Need to manage an additional service: the Elasticsearch cluster.
-
Delayed visibility (~1 second or more, depending on load and hardware) of index updates. First because Elasticsearch is near-real-time by nature, but also because the event queue introduces additional delays.
-
Impact on the database schema: additional tables must be created in the database to hold the data necessary for coordination.
-
Overhead for the database: the background process that reads entity changes and performs reindexing needs to read changed entities from the database.
Getting started
The outbox-polling coordination strategy requires an extra dependency. To implement this architecture, use the following Maven dependencies:
- With the Hibernate ORM integration
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-mapper-orm</artifactId>
<version>7.2.2.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-mapper-orm-outbox-polling</artifactId>
<version>7.2.2.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate.search</groupId>
<artifactId>hibernate-search-backend-elasticsearch</artifactId>
<version>7.2.2.Final</version>
</dependency>
- With the Standalone POJO Mapper (no Hibernate ORM): this architecture cannot be implemented with the Standalone POJO Mapper at the moment, because this mapper does not support coordination.
In addition to the dependencies above, configure coordination as explained in outbox-polling: this involves additional event tables in the database and background processors polling them.
6. Hibernate ORM integration
6.1. Basics
The Hibernate ORM "mapper" is an integration of Hibernate Search into Hibernate ORM.
Its key features include:
-
Listener-triggered indexing of Hibernate ORM entities as they are modified in the Hibernate ORM
EntityManager
/Session
. -
Loading of managed entities as hits in the result of a search query.
6.2. Startup
The Hibernate Search integration into Hibernate ORM will start automatically, at the same time as Hibernate ORM, as soon as it is present in the classpath.
If for some reason you need to prevent Hibernate Search from starting, set the boolean property hibernate.search.enabled to false.
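For example, in hibernate.properties or any other source of Hibernate ORM configuration properties:
hibernate.search.enabled = false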
6.3. Shutdown
The Hibernate Search integration into Hibernate ORM will stop automatically, at the same time as Hibernate ORM.
On shutdown, Hibernate Search will stop accepting new indexing requests: new indexing attempts will throw exceptions. The Hibernate ORM shutdown will block until all ongoing indexing operations complete.
6.4. Mapping Map-based models
"Dynamic-map" entity models, i.e. models based on java.util.Map instead of custom classes, cannot be mapped using annotations. However, they can be mapped using the programmatic mapping API. You just need to refer to the types by their name using context.programmaticMapping().type("thename"):
-
Pass the entity name for dynamic entity types.
-
Pass the "role" for dynamic embedded/component types, i.e. the name of the owning entity, followed by a dot ("."), followed by the dot-separated path to the component in that entity. For example
MyEntity.myEmbedded
orMyEntity.myEmbedded.myNestedEmbedded
.
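For illustration, such a programmatic mapping could be contributed through a mapping configurer (a sketch using the HibernateOrmSearchMappingConfigurer API; the entity and property names are hypothetical):
public class MySearchMappingConfigurer implements HibernateOrmSearchMappingConfigurer {
    @Override
    public void configure(HibernateOrmMappingConfigurationContext context) {
        TypeMappingStep myEntity = context.programmaticMapping().type( "MyEntity" );
        myEntity.indexed(); // map the dynamic entity type to an index
        myEntity.property( "name" ).fullTextField(); // map the "name" property
    }
}
The configurer is then registered through the hibernate.search.mapping.configurer configuration property.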
6.5. Multi-tenancy with non-string tenant identifiers
While string tenant identifiers are supported out of the box when working with Hibernate ORM, using non-string tenant identifiers requires configuring a custom tenant identifier converter. This can be done by passing a bean reference of type TenantIdentifierConverter to the hibernate.search.multi_tenancy.tenant_identifier_converter configuration property.
6.6. Other configuration
Other configuration properties are mentioned in the relevant parts of this documentation. You can find a full reference of available properties in the ORM integration configuration properties appendix.
7. Standalone POJO Mapper
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
7.1. Basics
The Standalone POJO Mapper enables mapping arbitrary POJOs to indexes.
Its key feature compared to the Hibernate ORM integration is its ability to run without Hibernate ORM or a relational database.
It can be used to index entities coming from an arbitrary datastore or even (though that’s not recommended in general) to use Lucene or Elasticsearch as a primary datastore.
Because the Standalone POJO Mapper does not assume anything about the entities being mapped, beyond the fact they are represented as POJOs, it can be more complex to use than the Hibernate ORM integration. In particular:
-
This mapper cannot detect entity changes on its own: all indexing must be explicit.
-
Loading of entities as hits in the result of a search query must be implemented in the application.
-
Loading of identifiers and entities for mass indexing must be implemented in the application.
-
This mapper does not provide coordination between nodes at the moment.
7.2. Startup
Starting up Hibernate Search with the Standalone POJO Mapper is explicit and involves a builder:
CloseableSearchMapping searchMapping = SearchMapping.builder( AnnotatedTypeSource.fromClasses( (1)
        Book.class, Associate.class, Manager.class ) )
        .property( "hibernate.search.backend.hosts", (2)
                "elasticsearch.mycompany.com" )
        .build(); (3)
1 | Create a builder, passing an AnnotatedTypeSource to let Hibernate Search know where to look for annotations. |
2 | Set additional configuration properties (see also Configuration). |
3 | Build the SearchMapping . |
Thanks to classpath scanning, your AnnotatedTypeSource only needs to include one class from each JAR containing annotated types; other annotated types in the same JAR will be detected automatically.
See also this section to troubleshoot or improve performance of classpath scanning.
7.3. Shutdown
You can shut down Hibernate Search with the Standalone POJO Mapper by calling the close()
method on the mapping:
CloseableSearchMapping searchMapping = /* ... */ (1)
searchMapping.close(); (2)
1 | Retrieve the SearchMapping that was returned when Hibernate Search started. |
2 | Call close() to shut down Hibernate Search. |
On shutdown, Hibernate Search will stop accepting new indexing requests: new indexing attempts will throw exceptions. The close() method will only return once all ongoing indexing operations complete.
7.4. Bean provider
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
The Standalone POJO Mapper can retrieve beans from CDI/Spring, but that support needs to be implemented explicitly through a bean provider.
You can plug in your own bean provider in two steps:
-
Define a class that implements the
org.hibernate.search.engine.environment.bean.spi.BeanProvider
interface. -
Configure Hibernate Search to use that implementation by setting the configuration property hibernate.search.bean_provider to a bean reference pointing to the implementation, for example class:com.mycompany.MyBeanProvider. Obviously, the reference to the bean provider cannot itself be resolved using the bean provider.
7.5. Multi-tenancy
Multi-tenancy needs to be enabled explicitly when starting the Standalone POJO Mapper:
CloseableSearchMapping searchMapping = SearchMapping.builder( AnnotatedTypeSource.fromClasses( (1)
Book.class
) )
// ...
.property( "hibernate.search.mapping.multi_tenancy.enabled", true ) (2)
.build(); (3)
1 | Create a builder. |
2 | Enable multi-tenancy. |
3 | Build the SearchMapping . |
Once multi-tenancy is enabled, a tenant ID will have to be provided when creating a SearchSession and in some other cases (creating a mass indexer, a workspace, …).
Creating a SearchSession with a tenant identifier:
SearchMapping searchMapping = /* ... */ (1)
Object tenantId = "myTenantId";
try ( SearchSession searchSession = searchMapping.createSessionWithOptions() (2)
.tenantId( tenantId ) (3)
.build() ) { (4)
// ...
}
1 | Retrieve the SearchMapping . |
2 | Start creating a new session. |
3 | Set the tenant identifier for the new session. |
4 | Build the new session. |
When using non-string tenant identifiers, a custom TenantIdentifierConverter must be provided; see Multi-tenancy with non-string tenant identifiers.
7.6. Mapping
While the Hibernate ORM integration can infer parts of the mapping from the Hibernate ORM mapping, the Standalone POJO Mapper cannot. As a result, the Standalone POJO Mapper needs more explicit configuration for its mapping:
-
Entity types must be defined explicitly.
-
Document identifiers must be mapped explicitly.
-
The inverse side of associations must be mapped explicitly.
7.7. Indexing
7.7.1. Listener-triggered indexing
The Standalone POJO Mapper does not provide "implicit" indexing similar to the listener-triggered indexing in the Hibernate ORM integration.
Instead, you must index explicitly with an indexing plan.
7.7.2. Explicitly indexing on entity change events
The Standalone POJO Mapper can process entity change events (add, update, delete) and perform indexing accordingly, though events must necessarily be passed to Hibernate Search explicitly. See Indexing plans for more information about the API.
One major difference with the Hibernate ORM integration is that transactions (JTA or otherwise) are not supported, so indexing is executed on session closing rather than on transaction commit.
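For example (a sketch; the indexing plan API is the one described in Indexing plans):
try ( SearchSession searchSession = searchMapping.createSessionWithOptions().build() ) {
    searchSession.indexingPlan().add( book );         // the entity was created
    searchSession.indexingPlan().addOrUpdate( book ); // the entity was updated
    searchSession.indexingPlan().delete( book );      // the entity was deleted
} // indexing is executed when the session closes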
7.7.3. Mass indexing
Because by default, the Standalone POJO Mapper does not know anything about where the entity data comes from, mass indexing requires plugging in a way to load entities en masse from the other datastore: a mass loading strategy.
Mass loading strategies are assigned to entity types as part of the entity definition: see Mass loading strategy for more information.
7.7.4. Entity loading in search queries
Because by default, the Standalone POJO Mapper does not know anything about where the entity data comes from, entity loading in search queries requires plugging in a way to load a selection of entities from the other datastore: a selection loading strategy.
Selection loading strategies are assigned to entity types as part of the entity definition: see Selection loading strategy for more information.
With the Standalone POJO Mapper, if you want entities to be loaded from the index, instead of an external datasource, add a projection constructor to your entity type. This will automatically result in your entity being loaded from the index when the configuration described in this section is missing and loading is required.
7.8. Coordination
The Standalone POJO Mapper does not provide any way to coordinate between nodes at the moment, so its behavior is roughly similar to that described in No coordination, except entity data extracting happens on session closing instead of happening on Hibernate ORM session flushes, and indexing happens immediately after that instead of happening on transaction commit.
7.9. Reading configuration properties from a file
The Standalone POJO Mapper’s SearchMappingBuilder can also take properties from a Reader compatible with java.util.Properties#load(Reader):
try (
Reader propertyFileReader = /* ... */ (1)
) {
CloseableSearchMapping searchMapping = SearchMapping.builder( AnnotatedTypeSource.empty() ) (2)
.properties( propertyFileReader ) (3)
.build();
}
1 | Get a reader representing a property file with configuration properties. |
2 | Start configuring the Standalone POJO Mapper. |
3 | Pass the property file reader to the builder. |
7.10. Other configuration
Other configuration properties are mentioned in the relevant parts of this documentation. You can find a full reference of available properties in the Standalone POJO Mapper configuration properties appendix.
8. Configuration
8.1. Configuration sources
8.1.1. Configuration sources when integrating into Hibernate ORM
When using Hibernate Search within Hibernate ORM, configuration properties are retrieved from Hibernate ORM.
This means that wherever you set Hibernate ORM properties, you can set Hibernate Search properties:
-
In a
hibernate.properties
file at the root of your classpath. -
In
persistence.xml
, if you bootstrap Hibernate ORM with the JPA APIs -
In JVM system properties (
-DmyProperty=myValue
passed to thejava
command) -
In the configuration file of your framework, for example
application.yaml
/application.properties
.
When setting properties through the configuration file of your framework, the keys of configuration properties will likely be different from the keys mentioned in this documentation. For example, Spring Boot requires the spring.jpa.properties. prefix. See Framework support for more information.
8.1.2. Configuration sources with the Standalone POJO mapper
When using Hibernate Search in the Standalone POJO Mapper (without Hibernate ORM), configuration properties must be set programmatically as you build the mapping.
See this section for more information.
8.2. Configuration properties
8.2.1. Structure of configuration properties
Configuration properties are all grouped under a common root. In the ORM integration, this root is hibernate.search, but other integrations (Infinispan, …) may use a different one. This documentation will use hibernate.search in all examples.
Under that root, we can distinguish between three categories of properties.
- Global properties
-
These properties potentially affect all of Hibernate Search. They are generally located just under the hibernate.search root. Global properties are explained in the relevant parts of this documentation:
-
And many more.
- Backend properties
-
These properties affect a single backend. They are grouped under a common root:
-
hibernate.search.backend
for the default backend (most common usage). -
hibernate.search.backends.<backend-name>
for a named backend (advanced usage).
Backend properties are explained in the relevant parts of this documentation:
-
- Index properties
-
These properties affect either one or multiple indexes, depending on the root.
With the root hibernate.search.backend, they set defaults for all indexes of the backend. With the root hibernate.search.backend.indexes.<index-name>, they set the value for a specific index, overriding the defaults (if any). The backend and index names must match the names defined in the mapping. For Hibernate ORM entities, the default index name is the name of the indexed class, without the package: org.mycompany.Book will have Book as its default index name. Index names can be customized in the mapping. Alternatively, the backend can also be referenced by name, i.e. the roots above can also be hibernate.search.backends.<backend-name> or hibernate.search.backends.<backend-name>.indexes.<index-name>.
Examples:
- hibernate.search.backend.io.commit_interval = 500 sets the io.commit_interval property for all indexes of the default backend.
- hibernate.search.backend.indexes.Product.io.commit_interval = 2000 sets the io.commit_interval property for the Product index of the default backend.
- hibernate.search.backends.myBackend.io.commit_interval = 500 sets the io.commit_interval property for all indexes of backend myBackend.
- hibernate.search.backends.myBackend.indexes.Product.io.commit_interval = 2000 sets the io.commit_interval property for the Product index of backend myBackend.
Other index properties are explained in the relevant parts of this documentation:
-
8.2.2. Building property keys programmatically
Both BackendSettings
and IndexSettings
provide tools to help build the configuration property keys.
- BackendSettings
-
BackendSettings.backendKey(ElasticsearchBackendSettings.HOSTS) is equivalent to hibernate.search.backend.hosts.
BackendSettings.backendKey("myBackend", ElasticsearchBackendSettings.HOSTS) is equivalent to hibernate.search.backends.myBackend.hosts.
For a list of available property keys, see the Elasticsearch backend configuration properties appendix or the Lucene backend configuration properties appendix.
- IndexSettings
-
IndexSettings.indexKey("myIndex", ElasticsearchIndexSettings.INDEXING_QUEUE_SIZE)
is equivalent tohibernate.search.backend.indexes.myIndex.indexing.queue_size
.IndexSettings.indexKey("myBackend", "myIndex", ElasticsearchIndexSettings.INDEXING_QUEUE_SIZE)
is equivalent tohibernate.search.backends.myBackend.indexes.myIndex.indexing.queue_size
.For a list of available property keys, see the Elasticsearch backend configuration properties appendix or the Lucene backend configuration properties appendix. Look for properties having a variant starting with a
hibernate.search.backend.indexes
.
private Properties buildHibernateConfiguration() {
Properties config = new Properties();
// backend configuration
config.put( BackendSettings.backendKey( ElasticsearchBackendSettings.HOSTS ), "127.0.0.1:9200" );
config.put( BackendSettings.backendKey( ElasticsearchBackendSettings.PROTOCOL ), "http" );
// index configuration
config.put(
IndexSettings.indexKey( "myIndex", ElasticsearchIndexSettings.INDEXING_MAX_BULK_SIZE ),
20
);
// orm configuration
config.put(
HibernateOrmMapperSettings.INDEXING_PLAN_SYNCHRONIZATION_STRATEGY,
IndexingPlanSynchronizationStrategyNames.ASYNC
);
// engine configuration
config.put( EngineSettings.BACKGROUND_FAILURE_HANDLER, "myFailureHandler" );
return config;
}
8.2.3. Type of configuration properties
Property values can be set programmatically as Java objects, or through a configuration file as a string that will have to be parsed.
Each configuration property in Hibernate Search has an assigned type, and this type defines the accepted values in both cases.
Here are the definitions of all property types.
Designation | Accepted Java objects | Accepted string format
---|---|---
String | java.lang.String | Any string
Boolean | java.lang.Boolean | true or false (case-insensitive)
Integer | java.lang.Number (the integer value is taken via intValue()) | Any string that can be parsed by Integer.parseInt
Long | java.lang.Number (the long value is taken via longValue()) | Any string that can be parsed by Long.parseLong
Bean reference of type T | An instance of T, or a BeanReference, or a java.lang.Class | A string following the format described in Parsing of bean references
When a configuration property of any type above is documented as multivalued, that property accepts either:
-
A
java.util.Collection
containing any Java object that would be accepted for a single-valued property of the same type (see above); -
or a comma-separated string containing strings that would be accepted for a single-valued property of the same type (see above);
-
or a single Java object that would be accepted for a single-valued property of the same type (see above).
8.3. Configuration property checking
Hibernate Search will track the parts of the provided configuration that are actually used and will log a warning if any configuration property starting with "hibernate.search." is never used, because that might indicate a configuration issue.
To disable this warning, set the hibernate.search.configuration_property_checking.strategy
property to ignore
.
8.4. Beans
Hibernate Search allows plugging in references to custom beans in various places: configuration properties, mapping annotations, arguments to APIs, …
This section describes the supported frameworks, how to reference beans, how the beans are resolved and how the beans can get injected with other beans.
8.4.1. Supported frameworks
Supported frameworks when integrating into Hibernate ORM
When using the Hibernate Search integration into Hibernate ORM, all dependency injection frameworks integrated into Hibernate ORM are automatically integrated into Hibernate Search.
This includes, but may not be limited to:
When not using a dependency injection framework, or when it is not integrated into Hibernate ORM, beans can only be retrieved using reflection by calling the public, no-arg constructor of the referenced type; see Bean resolution.
Supported frameworks when using the Standalone POJO Mapper
When using the Standalone POJO Mapper, dependency injection support must be plugged in manually. Failing that, beans can only be retrieved using reflection by calling the public, no-arg constructor of the referenced type; see Bean resolution.
8.4.2. Bean references
Bean references are composed of two parts:
- The type, i.e. a java.lang.Class.
- Optionally, the name, as a String.
When referencing beans using a string value in configuration properties, the type is implicitly set to whatever interface Hibernate Search expects for that configuration property.
For experienced users, Hibernate Search also provides the BeanReference type, which is accepted in configuration properties and APIs and allows more precise control over bean retrieval.
8.4.3. Parsing of bean references
When referencing beans using a string value in configuration properties, that string is parsed.
Here are the most common formats:
- bean: followed by the name of a Spring or CDI bean. For example bean:myBean.
- class: followed by the fully-qualified name of a class, to be instantiated through Spring/CDI if available, or through its public, no-argument constructor otherwise. For example class:com.mycompany.MyClass.
- An arbitrary string that doesn't contain a colon: it will be interpreted as explained in Bean resolution. In short:
  - first, look for a built-in bean with the given name;
  - then try to retrieve a bean with the given name from Spring/CDI (if available);
  - then try to interpret the string as a fully-qualified class name and to retrieve the corresponding bean from Spring/CDI (if available);
  - then try to interpret the string as a fully-qualified class name and to instantiate it through its public, no-argument constructor.

The following formats are also accepted, but are only useful for advanced use cases:
- any: followed by an arbitrary string. Equivalent to leaving out the prefix in most cases. Only useful if the arbitrary string contains a colon.
- builtin: followed by the name of a built-in bean, e.g. simple for the Elasticsearch index layout strategies. This will not fall back to Spring/CDI or a direct constructor call.
- constructor: followed by the fully-qualified name of a class, to be instantiated through its public, no-argument constructor. This will ignore built-in beans and will not try to instantiate the class through Spring/CDI.
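For example, all the following strings are valid values for the hibernate.search.background_failure_handler property described further below (the bean and class names are hypothetical):
hibernate.search.background_failure_handler = myFailureHandler
hibernate.search.background_failure_handler = bean:myFailureHandler
hibernate.search.background_failure_handler = class:com.mycompany.MyFailureHandler
hibernate.search.background_failure_handler = constructor:com.mycompany.MyFailureHandler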
8.4.4. Bean resolution
Bean resolution (i.e. the process of turning this reference into an object instance) happens as follows by default:
- If the given reference matches a built-in bean, that bean is used.
Example: the name simple, when used as the value of the property hibernate.search.backend.layout.strategy to configure the Elasticsearch index layout strategy, resolves to the built-in simple strategy.
- Otherwise, if a supported Dependency Injection (DI) framework is available, the reference is resolved using the DI framework:
  - If a managed bean with the given type (and, if provided, name) exists, that bean is used.
Example: the name myLayoutStrategy, when used as the value of the property hibernate.search.backend.layout.strategy to configure the Elasticsearch index layout strategy, resolves to any bean known from CDI/Spring of type IndexLayoutStrategy and annotated with @Named("myLayoutStrategy").
  - Otherwise, if a name is given, and that name is a fully-qualified class name, and a managed bean of that type exists, that bean is used.
Example: the name com.mycompany.MyLayoutStrategy, when used as the value of the property hibernate.search.backend.layout.strategy to configure the Elasticsearch index layout strategy, resolves to any bean known from CDI/Spring and extending com.mycompany.MyLayoutStrategy.
- Otherwise, reflection is used to resolve the bean:
  - If a name is given, and that name is a fully-qualified class name, and that class extends the referenced type, an instance is created by invoking the public, no-argument constructor of that class.
Example: the name com.mycompany.MyLayoutStrategy, when used as the value of the property hibernate.search.backend.layout.strategy to configure the Elasticsearch index layout strategy, resolves to an instance of com.mycompany.MyLayoutStrategy.
  - If no name is given, an instance is created by invoking the public, no-argument constructor of the referenced type.
Example: the class com.mycompany.MyLayoutStrategy.class (a java.lang.Class, not a String), when used as the value of the property hibernate.search.backend.layout.strategy to configure the Elasticsearch index layout strategy, resolves to an instance of com.mycompany.MyLayoutStrategy.
It is possible to control bean retrieval more finely by selecting a BeanRetrieval; see the javadoc of BeanRetrieval for more information.
8.4.5. Bean injection
All beans resolved by Hibernate Search using a supported framework can take advantage of the injection features of this framework. For example, a bean can be injected with another bean by annotating one of the fields of the bridge with @Inject. Lifecycle annotations such as @PostConstruct should also work as expected.
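For example, with CDI, an injected value bridge could look like the following. This is a minimal sketch: MyData and MyDataFormatter are hypothetical application classes, not Hibernate Search APIs.
import jakarta.enterprise.context.Dependent;
import jakarta.inject.Inject;
import org.hibernate.search.mapper.pojo.bridge.ValueBridge;
import org.hibernate.search.mapper.pojo.bridge.runtime.ValueBridgeToIndexedValueContext;
@Dependent
public class MyFormattingValueBridge implements ValueBridge<MyData, String> {
    @Inject
    private MyDataFormatter formatter; // injected by CDI
    @Override
    public String toIndexedValue(MyData value, ValueBridgeToIndexedValueContext context) {
        // Delegate the conversion of the property value to the injected service
        return value == null ? null : formatter.format( value );
    }
}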
Even when not using any framework, it is still possible to take advantage of the BeanResolver. This component, passed to several methods during bootstrap, exposes several methods to resolve a reference into a bean, exposing programmatically what would usually be achieved with an @Inject annotation. See the javadoc of BeanResolver for more information.
8.4.6. Bean lifecycle
As soon as beans are no longer needed, Hibernate Search will release them and let the dependency injection framework call the appropriate methods (@PreDestroy, …).

Some beans are only necessary during bootstrap, such as ElasticsearchAnalysisConfigurers, so they will be released just after bootstrap. Other beans are necessary at runtime, such as ValueBridges, so they will be released on shutdown.
Be careful to define the scope of your beans as appropriate. Immutable beans, or beans used only once, such as ElasticsearchAnalysisConfigurer, may safely be assigned any scope. However, some beans are expected to be mutable and instantiated multiple times, such as for example PropertyBinder; for those, a "dependent" scope (CDI terminology) or "prototype" scope (Spring terminology) is usually more appropriate. |
8.5. Global configuration
8.5.1. Background failure handling
Hibernate Search generally propagates exceptions occurring in background threads to the user thread, but in some cases, such as Lucene segment merging failures or some indexing plan synchronization failures, the exception in background threads cannot be propagated. By default, when that happens, the failure is logged at the ERROR level.
To customize background failure handling, you will need to:
- Define a class that implements the org.hibernate.search.engine.reporting.FailureHandler interface.
- Configure Hibernate Search to use that implementation by setting the configuration property hibernate.search.background_failure_handler to a bean reference pointing to the implementation, for example class:com.mycompany.MyFailureHandler.
Hibernate Search will call the handle
methods whenever a failure occurs.
FailureHandler
package org.hibernate.search.documentation.reporting.failurehandler;
import org.hibernate.search.engine.common.EntityReference;
import org.hibernate.search.engine.reporting.EntityIndexingFailureContext;
import org.hibernate.search.engine.reporting.FailureContext;
import org.hibernate.search.engine.reporting.FailureHandler;
public class MyFailureHandler implements FailureHandler {
@Override
public void handle(FailureContext context) { (1)
String failingOperationDescription = context.failingOperation().toString(); (2)
Throwable throwable = context.throwable(); (3)
// ... report the failure ... (4)
}
@Override
public void handle(EntityIndexingFailureContext context) { (5)
String failingOperationDescription = context.failingOperation().toString();
Throwable throwable = context.throwable();
for ( EntityReference entityReference : context.failingEntityReferences() ) { (6)
Class<?> entityType = entityReference.type(); (7)
String entityName = entityReference.name(); (7)
Object entityId = entityReference.id(); (7)
String entityReferenceAsString = entityReference.toString(); (8)
// ... process the entity reference ... (9)
}
}
}
1 | handle(FailureContext) is called for generic failures that do not fit any other specialized handle method. |
2 | Get a description of the failing operation from the context. |
3 | Get the throwable thrown when the operation failed from the context. |
4 | Use the context-provided information to report the failure in any relevant way. |
5 | handle(EntityIndexingFailureContext) is called for failures occurring when indexing entities. |
6 | On top of the failing operation and throwable, the context also lists references to entities that could not be indexed correctly because of the failure. |
7 | Entity references expose the entity type, name and identifier. |
8 | Entity references may be converted to a human-readable string using toString() . |
9 | Use the context-provided information to report the failure in any relevant way. |
hibernate.search.background_failure_handler = class:org.hibernate.search.documentation.reporting.failurehandler.MyFailureHandler
Assign the background failure handler using a Hibernate Search configuration property.
When a failure handler's handle method itself throws an exception, Hibernate Search will catch that exception and log it at the ERROR level; it will not be propagated. |
8.5.2. Multi-tenancy
If your application uses Hibernate ORM's multi-tenancy support, or if you configured multi-tenancy explicitly in the Standalone POJO Mapper, Hibernate Search should detect that and configure your backends transparently; see the multi-tenancy documentation of your mapper and backend for details.
In some cases, in particular when using the outbox-polling coordination strategy or when expecting the mass indexer to implicitly target all tenants, you will need to list explicitly all tenant identifiers that your application might use. This information is used by Hibernate Search when spawning background processes that should apply an operation to every tenant. The list of identifiers is defined through the following configuration property:
hibernate.search.multi_tenancy.tenant_ids = mytenant1,mytenant2,mytenant3
This property may be set to a String containing multiple tenant identifiers separated by commas, or a Collection<String> containing tenant identifiers.
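For example, reusing the Properties-based configuration style shown earlier in this chapter, the property can be set programmatically as a collection:
config.put( "hibernate.search.multi_tenancy.tenant_ids",
        List.of( "mytenant1", "mytenant2", "mytenant3" ) );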
9. Main API Entry Points
This section details the main entry points to Hibernate Search APIs at runtime, i.e. APIs to index, search, look up metadata, …
9.1. SearchMapping
9.1.1. Basics
The SearchMapping is the top-most entrypoint to Hibernate Search APIs: it represents the whole mapping from entities to indexes.

The SearchMapping is thread-safe: it can safely be used concurrently from multiple threads. However, that does not mean the objects it returns (SearchWorkspace, …) are themselves thread-safe.
The SearchMapping is the Hibernate Search counterpart of Hibernate ORM's SessionFactory or JPA's EntityManagerFactory. |
Some frameworks, such as Quarkus, allow you to simply @Inject the SearchMapping into your own components. |
9.1.2. Retrieving the SearchMapping with the Hibernate ORM integration
With the Hibernate ORM integration, the SearchMapping is created automatically when Hibernate ORM starts. To retrieve the SearchMapping, call Search.mapping(…) and pass the EntityManagerFactory/SessionFactory:
SearchMapping from a Hibernate ORM SessionFactory
SessionFactory sessionFactory = /* ... */ (1)
SearchMapping searchMapping = Search.mapping( sessionFactory ); (2)
1 | Retrieve the SessionFactory .
Details depend on your framework, but this is generally achieved by injecting it into your own class,
e.g. by annotating a field of that type with @Inject or @PersistenceUnit . |
2 | Call Search.mapping(…) , passing the SessionFactory as an argument.
This will return the SearchMapping . |
Still with the Hibernate ORM integration, the same can be done from a JPA EntityManagerFactory:
SearchMapping from a JPA EntityManagerFactory
EntityManagerFactory entityManagerFactory = /* ... */ (1)
SearchMapping searchMapping = Search.mapping( entityManagerFactory ); (2)
1 | Retrieve the EntityManagerFactory .
Details depend on your framework, but this is generally achieved by injecting it into your own class,
e.g. by annotating a field of that type with @Inject or @PersistenceUnit . |
2 | Call Search.mapping(…) , passing the EntityManagerFactory as an argument.
This will return the SearchMapping . |
9.1.3. Retrieving the SearchMapping with the Standalone POJO Mapper
With the Standalone POJO Mapper, the SearchMapping is the result of starting Hibernate Search. See this section for more information about starting Hibernate Search with the Standalone POJO Mapper.
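As a quick reminder, starting the Standalone POJO Mapper boils down to something like the following. This is a minimal sketch, assuming an Elasticsearch backend running locally and a single Book entity:
CloseableSearchMapping searchMapping = SearchMapping.builder( AnnotatedTypeSource.fromClasses( Book.class ) )
        .property( "hibernate.search.backend.hosts", "127.0.0.1:9200" )
        .build(); // start Hibernate Search and obtain the SearchMapping
// ... use the mapping ...
searchMapping.close(); // release resources on shutdown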
9.2. SearchSession
9.2.1. Basics
The SearchSession represents the context in which a sequence of related operations are executed. It should generally be used for a very short time, for example to process a single web request.

The SearchSession is not thread-safe: it must not be used concurrently from multiple threads.
The SearchSession is the Hibernate Search counterpart of Hibernate ORM's Session or JPA's EntityManager. |
Some frameworks, such as Quarkus, allow you to simply @Inject the SearchSession into your own components. |
9.2.2. Retrieving the SearchSession with the Hibernate ORM integration
To retrieve the SearchSession with the Hibernate ORM integration, call Search.session(…) and pass the EntityManager/Session:
SearchSession from a Hibernate ORM Session
Session session = /* ... */ (1)
SearchSession searchSession = Search.session( session ); (2)
1 | Retrieve the Session .
Details depend on your framework, but this is generally achieved by injecting it into your own class,
e.g. by annotating a field of that type with @Inject or @PersistenceContext . |
2 | Call Search.session(…) , passing the Session as an argument.
This will return the SearchSession . |
Still with the Hibernate ORM integration, the same can be done from a JPA EntityManager:
SearchSession from a JPA EntityManager
EntityManager entityManager = /* ... */ (1)
SearchSession searchSession = Search.session( entityManager ); (2)
1 | Retrieve the EntityManager .
Details depend on your framework, but this is generally achieved by injecting it into your own class,
e.g. by annotating a field of that type with @Inject or @PersistenceContext . |
2 | Call Search.session(…) , passing the EntityManager as an argument.
This will return the SearchSession . |
9.2.3. Retrieving the SearchSession with the Standalone POJO Mapper
With the Standalone POJO Mapper, the SearchSession should be created and closed explicitly:
SearchSession
SearchMapping searchMapping = /* ... */ (1)
try ( SearchSession searchSession = searchMapping.createSession() ) { (2)
// ...
}
1 | Retrieve the SearchMapping . |
2 | Create a new session. Note we’re using a try-with-resources block, so that the session will automatically be closed when we’re done with it, which will in particular trigger the execution of the indexing plan. |
Forgetting to close the SearchSession will result in the indexing plan never being executed, and may even cause memory leaks. |
The SearchSession can also be configured with a few options:
SearchSession with options
SearchMapping searchMapping = /* ... */ (1)
Object tenantId = "myTenant";
try ( SearchSession searchSession = searchMapping.createSessionWithOptions() (2)
.indexingPlanSynchronizationStrategy( IndexingPlanSynchronizationStrategy.sync() )(3)
.tenantId( tenantId )
.build() ) { (4)
// ...
}
1 | Retrieve the SearchMapping . |
2 | Start creating a new session. Note we’re using a try-with-resources block, so that the session will automatically be closed when we’re done with it, which will in particular trigger the execution of the indexing plan. |
3 | Pass options to the new session. |
4 | Build the new session. |
9.3. SearchScope
The SearchScope represents a set of indexed entities and their indexes.

The SearchScope is thread-safe: it can safely be used concurrently from multiple threads. However, that does not mean the objects it returns (SearchWorkspace, …) are themselves thread-safe.

A SearchScope can be retrieved from a SearchMapping as well as from a SearchSession.
SearchScope from a SearchMapping
SearchMapping searchMapping = /* ... */ (1)
SearchScope<Book> bookScope = searchMapping.scope( Book.class ); (2)
SearchScope<Person> associateAndManagerScope = searchMapping.scope( Arrays.asList( Associate.class, Manager.class ) ); (3)
SearchScope<Person> personScope = searchMapping.scope( Person.class ); (4)
SearchScope<Person> personSubTypesScope = searchMapping.scope( Person.class,
Arrays.asList( "Manager", "Associate" ) ); (5)
SearchScope<Object> allScope = searchMapping.scope( Object.class ); (6)
1 | Retrieve the SearchMapping . |
2 | Create a SearchScope targeting the Book entity type only. |
3 | Create a SearchScope targeting both the Associate entity type and the Manager entity type.
The scope’s generic type parameter can be any common supertype of those entity types. |
4 | A scope will always target all subtypes of the given classes, and the given classes do not need to be indexed entity types themselves.
This creates a SearchScope targeting all (indexed entity) subtypes of the Person interface;
in our case this will target both the Associate entity type and the Manager entity type. |
5 | For advanced use cases, it is possible to target entity types by their name. For Hibernate ORM this would be the JPA entity name, and for the Standalone POJO Mapper this would be the name assigned to the entity type upon entity definition. In both cases, the entity name is the simple name of the Java class by default. |
6 | Passing Object.class will create a scope targeting every single indexed entity type. |
SearchScope from a SearchSession
SearchSession searchSession = /* ... */ (1)
SearchScope<Book> bookScope = searchSession.scope( Book.class ); (2)
SearchScope<Person> associateAndManagerScope =
searchSession.scope( Arrays.asList( Associate.class, Manager.class ) ); (3)
SearchScope<Person> personScope = searchSession.scope( Person.class ); (4)
SearchScope<Person> personSubTypesScope = searchSession.scope( Person.class,
Arrays.asList( "Manager", "Associate" ) ); (5)
SearchScope<Object> allScope = searchSession.scope( Object.class ); (6)
1 | Retrieve the SearchSession . |
2 | Create a SearchScope targeting the Book entity type only. |
3 | Create a SearchScope targeting both the Associate entity type and the Manager entity type.
The scope’s generic type parameter can be any common supertype of those entity types. |
4 | A scope will always target all subtypes of the given classes, and the given classes do not need to be indexed entity types themselves.
This creates a SearchScope targeting all (indexed entity) subtypes of the Person interface;
in our case this will target both the Associate entity type and the Manager entity type. |
5 | For advanced use cases, it is possible to target entity types by their name. For Hibernate ORM this would be the JPA entity name, and for the Standalone POJO Mapper this would be the name assigned to the entity type upon entity definition. In both cases, the entity name is the simple name of the Java class by default. |
6 | Passing Object.class will create a scope targeting every single indexed entity type. |
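A scope is typically passed to other APIs. For example, search(…) accepts a scope directly; a minimal sketch reusing the bookScope defined above:
List<Book> hits = searchSession.search( bookScope ) // search within the scope
        .where( f -> f.matchAll() )
        .fetchHits( 20 );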
10. Mapping entities to indexes
10.1. Configuring the mapping
10.1.1. Annotation-based mapping
The main way to map entities to indexes is through annotations, as explained in Entity definition, Entity/index mapping and the following sections.
By default, Hibernate Search will automatically process mapping annotations for entity types, as well as nested types in those entity types, for instance embedded types.
Annotation-based mapping can be disabled by setting hibernate.search.mapping.process_annotations to false for the Hibernate ORM integration, or through AnnotationMappingConfigurationContext for any mapper: see Mapping configurer to access that context, and see the javadoc of AnnotationMappingConfigurationContext for available options.
If you disable annotation-based mapping, you will probably need to configure the mapping programmatically: see Programmatic mapping. |
Hibernate Search will also try to find some annotated types through classpath scanning.
See Entity definition, Entity/index mapping and Mapping a property to an index field with @GenericField, @FullTextField, … for more information. |
10.1.2. Classpath scanning
Basics
Hibernate Search will automatically scan the JARs of entity types on startup, looking for types annotated with "root mapping annotations" so that it can automatically add those types to the list of types whose annotations should be processed.
Root mapping annotations are mapping annotations that serve as the entrypoint to a mapping,
for example @ProjectionConstructor
or custom root mapping annotations.
Without this scanning, Hibernate Search would learn about e.g. projection constructors too late
(when the projection is actually executed) and would fail due to a lack of metadata.
The scanning is backed by Jandex, a library that indexes the content of JARs.
Scanning dependencies of the application
By default, Hibernate Search will only scan the JARs containing your Hibernate ORM entities.
If you want Hibernate Search to detect types annotated with root mapping annotations in other JARs, you will first need to access an AnnotationMappingConfigurationContext. From that context, either:
- call annotationMappingContext.add( MyType.class ) to explicitly tell Hibernate Search to process annotations on MyType, and to discover other types annotated with root mapping annotations in the JAR containing MyType (see the configurer sketch below);
- or (advanced usage, incubating) call annotationMappingContext.addJandexIndex( <an IndexView instance> ) to explicitly tell Hibernate Search to look for types annotated with root mapping annotations in the given Jandex index.
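With the Hibernate ORM integration, that context is accessed through a mapping configurer (see Mapping configurer). A minimal sketch, assuming a hypothetical MyType class annotated with a root mapping annotation:
public class MyScanningConfigurer implements HibernateOrmSearchMappingConfigurer {
    @Override
    public void configure(HibernateOrmMappingConfigurationContext context) {
        // Process annotations on MyType, and scan the JAR containing MyType
        // for other types annotated with root mapping annotations.
        context.annotationMapping().add( MyType.class );
    }
}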
Configuring scanning
Hibernate Search’s scanning may trigger the indexing of JARs through Jandex on application startup. In some of the more complicated environments, this indexing may not be able to get access to classes to index, or may unnecessarily slow down startup.
Running Hibernate Search within Quarkus or WildFly has its benefits:
- with Quarkus, the scanning part of Hibernate Search's startup is executed at build time and the indexes are provided to it automatically;
- with WildFly, this part of Hibernate Search's startup is executed in an optimized way and the indexes are provided to it automatically as well.
In other cases, depending on the application needs, the Jandex Maven Plugin can be used during the building stage of the application, so that indexes are already built and ready when the application starts.
Alternatively, if your application does not use @ProjectionConstructor or custom root mapping annotations, you may want to disable this feature entirely or partially. This is not recommended in general, as it may lead to bootstrap failures or ignored mapping annotations, because Hibernate Search will no longer be able to automatically discover types annotated with root annotations in JARs that do not have an embedded Jandex index.
Two options are available for this:
- Setting hibernate.search.mapping.discover_annotated_types_from_root_mapping_annotations to false will disable any attempt at automatic discovery, even if there is a Jandex index available, partial or full. This may help if there are no types annotated with root mapping annotations at all, or if they are listed explicitly through a mapping configurer or through an AnnotatedTypeSource.
- Setting hibernate.search.mapping.build_missing_discovered_jandex_indexes to false will disable Jandex index building on startup, but will still use any pre-built Jandex indexes available. This may help if partial automatic discovery is required, i.e. available indexes will be used for discovery, but sources that do not have an index available will be ignored, unless their @ProjectionConstructor-annotated types are listed explicitly through a mapping configurer or through an AnnotatedTypeSource.
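In configuration-property form, these two options look like this:
# Disable automatic discovery of annotated types entirely:
hibernate.search.mapping.discover_annotated_types_from_root_mapping_annotations = false
# Or only disable building missing Jandex indexes on startup:
hibernate.search.mapping.build_missing_discovered_jandex_indexes = false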
10.1.3. Programmatic mapping
Most examples in this documentation use annotation-based mapping, which is generally enough for most applications. However, some applications have needs that go beyond what annotations can offer:
- a single entity type must be mapped differently for different deployments, e.g. for different customers;
- many entity types must be mapped similarly, without code duplication.
To address those needs, you can use programmatic mapping: define the mapping through code that will get executed on startup.
Programmatic mapping is configured through ProgrammaticMappingConfigurationContext
:
see Mapping configurer to access that context.
By default, programmatic mapping will be merged with annotation mapping (if any). To disable annotation mapping, see Annotation-based mapping. |
Programmatic mapping is declarative and exposes the exact same features as annotation-based mapping. In order to implement more complex, "imperative" mapping, for example to combine two entity properties into a single index field, use custom bridges. |
Alternatively, if you only need to repeat the same mapping for several types or properties, you can apply a custom annotation on those types or properties, and have Hibernate Search execute some programmatic mapping code when it encounters that annotation. This solution doesn’t require mapper-specific configuration. See Custom mapping annotations for more information. |
10.1.4. Mapping configurer
Hibernate ORM integration
With the Hibernate ORM integration, a custom HibernateOrmSearchMappingConfigurer
can be plugged into Hibernate Search in order to configure
annotation mapping (AnnotationMappingConfigurationContext
),
programmatic mapping (ProgrammaticMappingConfigurationContext
), and more.
Plugging in a custom configurer requires two steps:
-
Define a class that implements the
org.hibernate.search.mapper.orm.mapping.HibernateOrmSearchMappingConfigurer
interface. -
Configure Hibernate Search to use that implementation by setting the configuration property
hibernate.search.mapping.configurer
to a bean reference pointing to the implementation, for exampleclass:com.mycompany.MyMappingConfigurer
.
You can pass multiple bean references separated by commas. See Type of configuration properties. |
Hibernate Search will call the configure
method of this implementation on startup,
and the configurer will be able to take advantage of a DSL to
configure annotation mapping or
define the programmatic mapping, for example:
public class MySearchMappingConfigurer implements HibernateOrmSearchMappingConfigurer {
@Override
public void configure(HibernateOrmMappingConfigurationContext context) {
ProgrammaticMappingConfigurationContext mapping = context.programmaticMapping(); (1)
TypeMappingStep bookMapping = mapping.type( Book.class ); (2)
bookMapping.indexed(); (3)
bookMapping.property( "title" ) (4)
.fullTextField().analyzer( "english" ); (5)
}
}
1 | Access the programmatic mapping. |
2 | Access the programmatic mapping of type Book . |
3 | Define Book as indexed. |
4 | Access the programmatic mapping of property title of type Book . |
5 | Define an index field based on property title of type Book . |
Standalone POJO Mapper
With the Standalone POJO Mapper, a custom StandalonePojoMappingConfigurer can be plugged into Hibernate Search in order to configure annotation mapping (AnnotationMappingConfigurationContext), programmatic mapping (ProgrammaticMappingConfigurationContext), and more.
Plugging in a custom configurer requires two steps:
-
Define a class that implements the
org.hibernate.search.mapper.pojo.standalone.mapping.StandalonePojoMappingConfigurer
interface. -
Configure Hibernate Search to use that implementation by setting the configuration property
hibernate.search.mapping.configurer
to a bean reference pointing to the implementation, for exampleclass:com.mycompany.MyMappingConfigurer
.
You can pass multiple bean references separated by commas. See Type of configuration properties. |
Hibernate Search will call the configure
method of this implementation on startup,
and the configurer will be able to take advantage of a DSL to
configure annotation mapping or
define the programmatic mapping, for example:
public class MySearchMappingConfigurer implements StandalonePojoMappingConfigurer {
@Override
public void configure(StandalonePojoMappingConfigurationContext context) {
context.annotationMapping() (1)
.discoverAnnotationsFromReferencedTypes( false )
.discoverAnnotatedTypesFromRootMappingAnnotations( false );
ProgrammaticMappingConfigurationContext mappingContext = context.programmaticMapping(); (2)
TypeMappingStep bookMapping = mappingContext.type( Book.class ); (3)
bookMapping.searchEntity(); (4)
bookMapping.indexed(); (5)
bookMapping.property( "id" ) (6)
.documentId(); (7)
bookMapping.property( "title" ) (8)
.fullTextField().analyzer( "english" ); (9)
}
}
1 | Access the annotation mapping context to configure annotation mapping. |
2 | Access the programmatic mapping context to configure programmatic mapping. |
3 | Access the programmatic mapping of type Book . |
4 | Define Book as an entity type. |
5 | Define Book as indexed. |
6 | Access the programmatic mapping of property id of type Book . |
7 | Define the identifier of type Book as its property id . |
8 | Access the programmatic mapping of property title of type Book . |
9 | Define an index field based on property title of type Book . |
10.2. Entity definition
10.2.1. Basics
Before a type can be mapped to indexes, Hibernate Search needs to be aware of which types in the application domain model are entity types.
When indexing Hibernate ORM entities,
the entity types are fully defined by Hibernate ORM (generally through Jakarta’s @Entity
annotation),
and no explicit definition is necessary: you can safely skip this entire section.
When using the Standalone POJO Mapper, entity types need to be defined explicitly.
10.2.2. Explicit entity definition
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
See HSEARCH-5076 to track progress on allowing the use of @SearchEntity in the Hibernate ORM integration. |
With the Standalone POJO Mapper,
entity types must be marked explicitly with the @SearchEntity
annotation.
@SearchEntity
@SearchEntity (1)
@Indexed (2)
public class Book {
1 | Annotate the type with @SearchEntity |
2 | @Indexed is optional: it is only necessary if you intend to map this type to an index. |
Not all types are entity types, even if they have a composite structure. Incorrectly marking types as entity types may force you to add unnecessary complexity to your domain model, such as defining identifiers or an inverse side for "associations" to such types that won’t get used. Make sure to read this section for more information on what entity types are and why they are necessary. |
Subclasses do not inherit the @SearchEntity annotation. Each subclass must be annotated with @SearchEntity as well, or it will not be considered an entity type by Hibernate Search. However, for subclasses that are also annotated with @SearchEntity, some configuration, such as the loading binder, is inherited from the parent type. |
By default, with the Standalone POJO Mapper:
- The entity name will be equal to the class' simple name (java.lang.Class#getSimpleName).
- The entity will not be configured for loading, be it to return entities as hits in search queries, or for mass indexing.
See the following sections to override these defaults.
10.2.3. Entity name
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The entity name, distinct from the name of the corresponding class, is involved in various places, including but not limited to:
- the default index name for @Indexed.

The entity name defaults to the class' simple name (java.lang.Class#getSimpleName).
Changing the entity name of an indexed entity may require full reindexing, in particular when using the Elasticsearch/OpenSearch backend. See this section for more information. |
With the Hibernate ORM integration, this name may be overridden through various means, but usually through Jakarta Persistence's @Entity annotation, i.e. with @Entity(name = …).

With the Standalone POJO Mapper, entity types are defined with @SearchEntity, and the entity name may be overridden with @SearchEntity(name = …).
See HSEARCH-5076 to track progress on allowing the use of @SearchEntity in the Hibernate ORM integration. |
@SearchEntity(name = …)
@SearchEntity(name = "MyAuthorName")
@Indexed
public class Author {
10.2.4. Mass loading strategy
A "mass loading strategy" gives Hibernate Search the ability to load entities of a given type for mass indexing.
With the Hibernate ORM integration, a mass loading strategy gets configured automatically for every single Hibernate ORM entity, and no further configuration is required.
With the Standalone POJO Mapper,
entity types are defined with @SearchEntity
,
and, in order to take advantage of mass indexing,
a mass loading strategy must be applied explicitly
with @SearchEntity(loadingBinder = …)
.
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
See HSEARCH-5076 to track progress on allowing the use of @SearchEntity in the Hibernate ORM integration. |
@SearchEntity(loadingBinder = @EntityLoadingBinderRef(type = MyLoadingBinder.class)) (1)
@Indexed
public class Book {
1 | Assign a loading binder to the entity. |
Subclasses inherit the loading binder of their parent class, unless they override it with a loading binder of their own. |
@Singleton
public class MyLoadingBinder implements EntityLoadingBinder { (1)
private final MyDatastore datastore;
@Inject (2)
public MyLoadingBinder(MyDatastore datastore) {
this.datastore = datastore;
}
@Override
public void bind(EntityLoadingBindingContext context) { (3)
context.massLoadingStrategy( (4)
Book.class, (5)
new MyMassLoadingStrategy<>( datastore, Book.class ) (6)
);
}
}
1 | The binder must implement the EntityLoadingBinder interface. |
2 | Inject the implementation-specific datastore into the loading binder,
for example here using CDI (or @Autowired on Spring, or …). |
3 | Implement the bind method. |
4 | Call context.massLoadingStrategy(…) to define the loading strategy to use. |
5 | Pass the expected supertype of loaded entities. |
6 | Pass the loading strategy. |
Using injection in the loading binder with the Standalone POJO Mapper requires providing a BeanProvider through additional configuration. |
Below is an example of MassLoadingStrategy
implementation for an imaginary datastore.
MassLoadingStrategy
public class MyMassLoadingStrategy<E>
implements MassLoadingStrategy<E, String> {
private final MyDatastore datastore; (1)
private final Class<E> rootEntityType;
public MyMassLoadingStrategy(MyDatastore datastore, Class<E> rootEntityType) {
this.datastore = datastore;
this.rootEntityType = rootEntityType;
}
@Override
public MassIdentifierLoader createIdentifierLoader(
LoadingTypeGroup<E> includedTypes, (2)
MassIdentifierSink<String> sink, MassLoadingOptions options) {
int batchSize = options.batchSize(); (3)
Collection<Class<? extends E>> typeFilter =
includedTypes.includedTypesMap().values(); (4)
return new MassIdentifierLoader() {
private final MyDatastoreConnection connection =
datastore.connect(); (5)
private final MyDatastoreCursor<String> identifierCursor =
connection.scrollIdentifiers( typeFilter );
@Override
public void close() {
connection.close(); (5)
}
@Override
public long totalCount() { (6)
return connection.countEntities( typeFilter );
}
@Override
public void loadNext() throws InterruptedException {
List<String> batch = identifierCursor.next( batchSize );
if ( batch != null ) {
sink.accept( batch ); (7)
}
else {
sink.complete(); (8)
}
}
};
}
@Override
public MassEntityLoader<String> createEntityLoader(
LoadingTypeGroup<E> includedTypes, (9)
MassEntitySink<E> sink, MassLoadingOptions options) {
return new MassEntityLoader<String>() {
private final MyDatastoreConnection connection =
datastore.connect(); (10)
@Override
public void close() { (8)
connection.close();
}
@Override
public void load(List<String> identifiers)
throws InterruptedException {
sink.accept( (11)
connection.loadEntitiesById( rootEntityType, identifiers )
);
}
};
}
}
1 | The strategy must have access to the datastore to be able to open connections, but it should not generally have any open connection. |
2 | Implement an identifier loader to retrieve the identifiers of all entities that will have to be indexed. Hibernate Search will only call this method once per mass indexing. |
3 | Retrieve the batch size configured on the MassIndexer .
This defines how many IDs (at most) must be returned in each List passed to the sink. |
4 | Retrieve the list of entity types to be loaded: Hibernate Search may request loading of multiple types from a single loader if those types share similar mass loading strategies (see tips/warnings below). |
5 | The identifier loader owns a connection exclusively: it should create one when it’s created, and close it when it’s closed. Related: the identifier loader always executes in the same thread. |
6 | Count the number of entities to index. This is just an estimate: it can be off to some extent, but that will only lead to incorrect reporting in the monitor (by default, the logs). |
7 | Retrieve identifiers in successive batches, one per call to loadNext() , and pass them to the sink. |
8 | When there are no more identifiers to load, let the sink know by calling complete() . |
9 | Implement an entity loader to actually load entities from the identifiers retrieved above. Hibernate Search will call this method multiple times for a single mass indexing, to create multiple loaders that execute in parallel. |
10 | Each entity loader owns a connection exclusively: it should create one when it’s created, and close it when it’s closed. Related: each entity loader always executes in the same thread. |
11 | Load the entities corresponding to the identifiers passed in argument and pass them to the sink. Entities passed to the sink do not need to be in the same order as the identifiers passed in argument. |
Hibernate Search will optimize loading by grouping together types that have the same mass loading strategy, or strategies that are equal according to equals()/hashCode().
When grouping types together, only one of the strategies will be called, and it will get passed a "type group" that includes all types that should be loaded. This happens in particular when the loading binder configured on a "parent" entity type is inherited by subtypes and sets the same strategy on those subtypes. |
Be careful of non-abstract (instantiable) parent classes in inheritance trees: when the "type group" passed to the loaders contains such a parent type, the loaders are expected to handle instances of the parent type itself as well as instances of its subtypes. |
Once all types to reindex have their mass loading strategy implemented and assigned, they can be reindexed using the mass indexer:
SearchMapping searchMapping = /* ... */ (1)
searchMapping.scope( Object.class ).massIndexer() (2)
.startAndWait(); (3)
1 | Retrieve the SearchMapping . |
2 | Create a MassIndexer targeting every indexed entity type. |
3 | Start the mass indexing process and return when it is over. |
10.2.5. Selection loading strategy
A "selection loading strategy" gives Hibernate Search the ability to load entities of a given type to return entities loaded from an external source as hits in search queries.
With the Hibernate ORM integration, a selection loading strategy gets configured automatically for every single Hibernate ORM entity, and no further configuration is required.
With the Standalone POJO Mapper,
entity types are defined with @SearchEntity
,
and, in order to return entities loaded from an external source in search queries,
a selection loading strategy must be applied explicitly
with @SearchEntity(loadingBinder = …)
.
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
See HSEARCH-5076 to track progress on allowing the use of @SearchEntity in the Hibernate ORM integration. |
@SearchEntity(loadingBinder = @EntityLoadingBinderRef(type = MyLoadingBinder.class)) (1)
@Indexed
public class Book {
1 | Assign a loading binder to the entity. |
Subclasses inherit the loading binder of their parent class, unless they override it with a loading binder of their own. |
@Singleton
public class MyLoadingBinder implements EntityLoadingBinder { (1)
@Override
public void bind(EntityLoadingBindingContext context) { (2)
context.selectionLoadingStrategy( (3)
Book.class, (4)
new MySelectionLoadingStrategy<>( Book.class ) (5)
);
}
}
1 | The binder must implement the EntityLoadingBinder interface. |
2 | Implement the bind method. |
3 | Call context.selectionLoadingStrategy(…) to define the loading strategy to use. |
4 | Pass the expected supertype of loaded entities. |
5 | Pass the loading strategy. |
Below is an example of SelectionLoadingStrategy
implementation for an imaginary datastore.
SelectionLoadingStrategy
public class MySelectionLoadingStrategy<E>
implements SelectionLoadingStrategy<E> {
private final Class<E> rootEntityType;
public MySelectionLoadingStrategy(Class<E> rootEntityType) {
this.rootEntityType = rootEntityType;
}
@Override
public SelectionEntityLoader<E> createEntityLoader(
LoadingTypeGroup<E> includedTypes, (1)
SelectionLoadingOptions options) {
MyDatastoreConnection connection =
options.context( MyDatastoreConnection.class ); (2)
return new SelectionEntityLoader<E>() {
@Override
public List<E> load(List<?> identifiers, Deadline deadline) {
return connection.loadEntitiesByIdInSameOrder( (3)
rootEntityType, identifiers );
}
};
}
}
1 | Implement an entity loader to actually load entities from the identifiers returned by Lucene/Elasticsearch. |
2 | The entity loader does not own a connection, but retrieves it from the context passed to the SearchSession (see next example). |
3 | Load the entities corresponding to the identifiers passed in argument and return them. Returned entities must be in the same order as the identifiers passed in argument. |
Hibernate Search will optimize loading by grouping together types that have the same selection loading strategy, or strategies that are equal according to equals()/hashCode().
When grouping types together, only one of the strategies will be called, and it will get passed a "type group" that includes all types that should be loaded. This happens in particular when the loading binder configured on a "parent" entity type is inherited by subtypes and sets the same strategy on those subtypes. |
Once all types to search for have their selection loading strategy implemented and assigned, they can be loaded as hits when querying:
MyDatastore datastore = /* ... */ (1)
SearchMapping searchMapping = /* ... */ (2)
try ( MyDatastoreConnection connection = datastore.connect(); (3)
SearchSession searchSession = searchMapping.createSessionWithOptions() (4)
.loading( o -> o.context( MyDatastoreConnection.class, connection ) ) (5)
.build() ) { (6)
List<Book> hits = searchSession.search( Book.class ) (7)
.where( f -> f.matchAll() )
.fetchHits( 20 ); (8)
}
1 | Retrieve a reference to an implementation-specific datastore. |
2 | Retrieve the SearchMapping . |
3 | Open a connection to the datastore (this is just an imaginary API, for the purpose of this example). Note we’re using a try-with-resources block, so that the connection will automatically be closed when we’re done with it. |
4 | Start creating a new session. Note we’re using a try-with-resources block, so that the session will automatically be closed when we’re done with it. |
5 | Pass the connection to the new session. |
6 | Build the new session. |
7 | Create a search query: since we don’t use select() ,
hits will have their default representations: entities loaded from the datastore. |
8 | Retrieve the search hits as entities loaded from the datastore. |
10.2.6. Programmatic mapping
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
See HSEARCH-5076 to track progress on allowing the use of @SearchEntity in the Hibernate ORM integration. |
You can mark a type as an entity type through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
.searchEntity()
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.searchEntity();
TypeMappingStep authorMapping = mapping.type( Author.class );
authorMapping.searchEntity().name( "MyAuthorName" );
10.3. Entity/index mapping
10.3.1. Basics
In order to index an entity, it must be annotated with @Indexed
.
@Indexed
@Entity
@Indexed
public class Book {
Subclasses inherit the @Indexed annotation and will also be indexed by default. If the fact that @Indexed is inherited is a problem for your application, you can annotate subclasses with @Indexed(enabled = false). |
By default:
- The index name will be equal to the entity name, which in Hibernate ORM is set using the @Entity annotation and defaults to the simple class name.
- With the Hibernate ORM integration, the identifier of indexed documents will be generated from the entity identifier. Most types commonly used for entity identifiers are supported out of the box, but for more exotic types you may need specific configuration. With the Standalone POJO Mapper, the identifier of indexed documents needs to be mapped explicitly. See Mapping the document identifier for details.
- The index won't have any field. Fields must be mapped to properties explicitly. See Mapping a property to an index field with @GenericField, @FullTextField, … for details.
10.3.2. Explicit index/backend
You can change the name of the index by setting @Indexed(index = …)
.
Note that index names must be unique in a given application.
@Indexed.index
@Entity
@Indexed(index = "AuthorIndex")
public class Author {
If you defined named backends, you can map entities to another backend than the default one.
By setting @Indexed(backend = "backend2")
you inform Hibernate Search that the index
for your entity must be created in the backend named "backend2".
This may be useful if your model has clearly defined sub-parts with very different indexing requirements.
@Indexed.backend
@Entity
@Table(name = "\"user\"")
@Indexed(backend = "backend2")
public class User {
Entities indexed in different backends cannot be targeted by the same query. For example, with the mappings defined above, a query targeting both User and Author would throw an exception, because User is indexed in backend "backend2" while Author is indexed in the default backend. |
10.3.3. Conditional indexing and routing
The mapping of an entity to an index is not always as straightforward as "this entity type goes to this index". For many reasons, but mainly for performance reasons, you may want to customize when and where a given entity is indexed:
- You may not want to index all entities of a given type: for example, prevent indexing of entities when their status property is set to DRAFT or ARCHIVED, because users are not supposed to search for those entities.
- You may want to route entities to a specific shard of the index: for example, route entities based on their language property, because each user has a specific language and only searches for entities in their language.
These behaviors can be implemented in Hibernate Search by assigning a routing bridge
to the indexed entity type through @Indexed(routingBinder = …)
.
For more information about routing bridges, see Routing bridge.
10.3.4. Programmatic mapping
You can mark an entity as indexed through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
.indexed()
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
TypeMappingStep authorMapping = mapping.type( Author.class );
authorMapping.indexed().index( "AuthorIndex" );
TypeMappingStep userMapping = mapping.type( User.class );
userMapping.indexed().backend( "backend2" );
10.4. Mapping the document identifier
10.4.1. Basics
Index documents, much like entities, need to be assigned an identifier so that Hibernate Search can handle updates and deletion.
When indexing Hibernate ORM entities, the entity identifier is used as a document identifier by default. Provided the entity identifier has a supported type, identifier mapping will work out of the box and no explicit mapping is necessary.
When using the Standalone POJO Mapper, document identifiers need to be mapped explicitly.
10.4.2. Explicit identifier mapping
Explicit identifier mapping is required in the following cases:
- Hibernate Search doesn't know about the entity identifier (e.g. when using the Standalone POJO Mapper);
- OR the document identifier is not the entity identifier;
- OR the entity identifier has a type that is not supported by default. This is the case of composite identifiers (Hibernate ORM's @EmbeddedId, @IdClass), in particular.
To select a property to map to the document identifier,
just apply the @DocumentId
annotation to that property:
@DocumentId
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@NaturalId
@DocumentId
private String isbn;
public Book() {
}
// Getters and setters
// ...
}
When the property type is not supported,
it is also necessary to implement a custom identifier bridge,
then refer to it in the @DocumentId
annotation:
@DocumentId
@Entity
@Indexed
public class Book {
@Id
@Convert(converter = ISBNAttributeConverter.class)
@DocumentId(identifierBridge = @IdentifierBridgeRef(type = ISBNIdentifierBridge.class))
private ISBN isbn;
public Book() {
}
// Getters and setters
// ...
}
10.4.3. Supported identifier property types
Below is a table listing all types with built-in identifier bridges, i.e. property types that are supported out of the box when mapping a property to a document identifier.
The table also explains the value assigned to the document identifier, i.e. the value passed to the underlying backend.
Property type | Value of document identifiers | Limitations
---|---|---
java.lang.String | Unchanged | -
char, java.lang.Character | A single-character String | -
boolean/Boolean, numeric primitives and their wrappers, java.math.BigDecimal, java.math.BigInteger, java.util.UUID, enums | The string representation of the value | -
java.time date/time types (Instant, LocalDate, LocalTime, LocalDateTime, OffsetDateTime, OffsetTime, ZonedDateTime, …) | Formatted according to the corresponding ISO 8601 format | -
java.time.Period | Formatted according to the ISO 8601 format for a duration | -
java.time.Duration | Formatted according to the ISO 8601 format for a duration, using seconds and nanoseconds only | -
java.time.Year | Formatted according to the ISO 8601 format for a Year | -
java.time.YearMonth | Formatted according to the ISO 8601 format for a Year-Month | -
java.time.MonthDay | Formatted according to the ISO 8601 format for a Month-Day | -
GeoPoint and subtypes | Latitude as double and longitude as double, separated by a comma | -
10.4.4. Programmatic mapping
You can map the document identifier through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
.documentId()
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "isbn" ).documentId();
10.5. Mapping a property to an index field with @GenericField
, @FullTextField
, …
10.5.1. Basics
Properties of an entity can be mapped to an index field directly: you just need to add an annotation, configure the field through the annotation attributes, and Hibernate Search will take care of extracting the property value and populating the index field when necessary.
Mapping a property to an index field looks like this:
@FullTextField(analyzer = "english", projectable = Projectable.YES) (1)
@KeywordField(name = "title_sort", normalizer = "english", sortable = Sortable.YES) (2)
private String title;
@GenericField(projectable = Projectable.YES, sortable = Sortable.YES) (3)
private Integer pageCount;
1 | Map the title property to a full-text field with the same name.
Some options can be set to customize the fields' behavior, in this case the analyzer (for full-text indexing)
and the fact that this field is projectable (its value can be retrieved from the index). |
2 | Map the title property to another field, configured differently:
it is not analyzed, but simply normalized (i.e. it’s not split into multiple tokens),
and it is stored in such a way that it can be used in sorts.
Mapping a single property to multiple fields is particularly useful when doing full-text search: at query time, you can use a different field depending on what you need. You can map a property to as many fields as you want, but each must have a unique name. |
3 | Map another property to its own field. |
Before you map a property, you must consider two things:
- The @*Field annotation
In its simplest form, property/field mapping is achieved by applying the @GenericField annotation to a property. This annotation will work for every supported property type, but is rather limited: it does not allow full-text search in particular. To go further, you will need to rely on different, more specific annotations, which offer specific attributes. The available annotations are described in detail in Available field annotations.
- The type of the property
In order for the @*Field annotation to work correctly, the type of the mapped property must be supported by Hibernate Search. See Supported property types for a list of all types that are supported out of the box, and Mapping custom property types for indications on how to handle more complex types, be it simply containers (List<String>, Map<String, Integer>, …) or custom types.
10.5.2. Available field annotations
Various field annotations exist, each offering its own set of attributes.
This section lists the different annotations and their use. For more details about available attributes, see Field annotation attributes.
@GenericField
-
A good default choice that will work for every property type with built-in support.
Fields mapped using this annotation do not provide any advanced features such as full-text search: matches on a generic field are exact matches.
-
@FullTextField
-
A text field whose value is considered as multiple words. Only works for String fields.
Matches on a full-text field can be more subtle than exact matches: match fields which contain a given word, match fields regardless of case, match fields ignoring diacritics, …
Full-text fields also allow highlighting.
Full-text fields should be assigned an analyzer, referenced by its name. By default, the analyzer named
default
will be used. See Analysis for more details about analyzers and full-text analysis. For instructions on how to change the default analyzer, see the dedicated section in the documentation of your backend: Lucene or ElasticsearchNote you can also define a search analyzer to analyze searched terms differently.
Full-text fields cannot be sorted on nor aggregated. If you need to sort on, or aggregate on, the value of a property, it is recommended to use @KeywordField, with a normalizer if necessary (see below). Note that multiple fields can be added to the same property, so you can use both @FullTextField and @KeywordField if you need both full-text search and sorting: you will just need to use a distinct name for each of those two fields.
`@KeywordField`
- A text field whose value is considered as a single keyword. Only works for `String` fields. Keyword fields allow more subtle matches, similarly to full-text fields, with the limitation that keyword fields only contain one token. On the other hand, this limitation allows keyword fields to be sorted on and aggregated.
  Keyword fields may be assigned a normalizer, referenced by its name. See Analysis for more details about normalizers and full-text analysis.

`@ScaledNumberField`
- A numeric field for integer or floating-point values that require a higher precision than doubles but always have roughly the same scale. Only works for `java.math.BigDecimal` or `java.math.BigInteger` fields. Scaled numbers are indexed as integers, typically a long (64 bits), with a fixed scale that is consistent for all values of the field across all documents. Because scaled numbers are indexed with a fixed precision, they cannot represent all `BigDecimal` or `BigInteger` values: values that are too large to be indexed will trigger a runtime exception, and values that have trailing decimal digits will be rounded to the nearest integer. This annotation allows setting the `decimalScale` attribute.
`@NonStandardField`
- An annotation for advanced use cases where a value binder is used and that binder is expected to define an index field type that does not support any of the standard options: `searchable`, `sortable`, …
  This annotation is very useful for cases when a field type native to the backend is necessary: defining the mapping directly as JSON for Elasticsearch, or manipulating `IndexableField` directly for Lucene.
  Fields mapped using this annotation have very limited configuration options from the annotation (no `searchable`/`sortable`/etc.), but the value binder will be able to pick a non-standard field type, which generally gives much more flexibility.
@VectorField
-
Features detailed below are incubating: they are still under active development.
The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases.
You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
Specific field type for vector fields to be used in a vector search.
Vector fields accept values of type `float[]` or `byte[]`, and require that the dimension of stored vectors is specified upfront and that the size of indexed vectors matches this dimension.
Besides that, vector fields optionally allow configuring the similarity function used during search, as well as the `efConstruction` and `m` parameters used during indexing.
Contrary to the other field types, vector fields disable container extraction by default; manually setting the extraction to `DEFAULT` will result in an exception. Only explicitly configured extractors are allowed for vector fields.
It is not allowed to index multiple vectors within the same field, i.e. vector fields cannot be multivalued.
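To illustrate, here is a minimal sketch combining several of the annotations above on a single entity. The `Product` entity and its properties are hypothetical, and the `english` analyzer and `lowercase` normalizer are assumed to be defined in your backend's analysis configuration (see Analysis):

@Entity
@Indexed
public class Product {
    @Id
    private Integer id;

    // Full-text search on the description, plus a separately named
    // keyword field so the same property can also be sorted on.
    @FullTextField(analyzer = "english")
    @KeywordField(name = "description_sort", normalizer = "lowercase",
            sortable = Sortable.YES)
    private String description;

    // Exact matches only: a generic field is enough for a simple code.
    @GenericField
    private String sku;

    // Monetary amount: preserve two digits after the decimal point.
    @ScaledNumberField(decimalScale = 2)
    private BigDecimal price;

    // Getters and setters
    // ...
}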
10.5.3. Field annotation attributes
Various field mapping annotations exist, each offering its own set of attributes.
This section lists the different annotation attributes and their use. For more details about available annotations, see Available field annotations.
-
name
-
The name of the index field. By default, it is the same as the property name. You may want to change it in particular when mapping a single property to multiple fields.
Value:
String
. The name must not contain the dot character (.
). Defaults to the name of the property. -
sortable
-
Whether the field can be sorted on, i.e. whether a specific data structure is added to the index to allow efficient sorts when querying.
Value:
Sortable.YES
,Sortable.NO
,Sortable.DEFAULT
.This option is not available for
@FullTextField
. See here for an explanation and some solutions. -
projectable
-
Whether the field can be projected on, i.e. whether the field value is stored in the index to allow retrieval later when querying.
Value:
Projectable.YES
,Projectable.NO
,Projectable.DEFAULT
The defaults are different for the Lucene and Elasticsearch backends: with Lucene, the default is `Projectable.NO`, while with Elasticsearch it is `Projectable.YES`.
For Elasticsearch, if either the `projectable` or the `sortable` attribute resolves to `YES` on a `GeoPoint` field, then the field automatically becomes both projectable and sortable, even if one of them was explicitly set to `NO`.
aggregable
-
Whether the field can be aggregated, i.e. whether the field value is stored in a specific data structure in the index to allow aggregations later when querying.
Value:
Aggregable.YES
,Aggregable.NO
,Aggregable.DEFAULT
This option is not available for `@FullTextField`. See here for an explanation and some solutions.

`searchable`
- Whether the field can be searched on, i.e. whether the field is indexed in order to allow applying predicates later when querying.
Value:
Searchable.YES
,Searchable.NO
,Searchable.DEFAULT
. -
indexNullAs
-
The value to use as a replacement anytime the property value is null.
Disabled by default.
The replacement is defined as a String. Thus, its value has to be parsed. Look up the column Parsing method for 'indexNullAs' in Supported property types to find out the format used when parsing.
-
extraction
-
How elements to index should be extracted from the property in the case of container types (`List`, `Optional`, `Map`, …).
By default, for properties that have a container type, the innermost elements will be indexed. For example, for a property of type `List<String>`, elements of type `String` will be indexed. Vector fields disable the extraction by default.
This default behavior and ways to override it are described in the section Mapping container types with container extractors.
-
analyzer
-
The analyzer to apply to field values when indexing and querying. Only available on
@FullTextField
.By default, the analyzer named
default
will be used.See Analysis for more details about analyzers and full-text analysis.
-
searchAnalyzer
-
An optional different analyzer, overriding the one defined with the
analyzer
attribute, to use only when analyzing searched terms.If not defined, the analyzer assigned to
analyzer
will be used.See Analysis for more details about analyzers and full-text analysis.
-
normalizer
-
The normalizer to apply to field values when indexing and querying. Only available on
@KeywordField
.See Analysis for more details about normalizers and full-text analysis.
`norms`
- Whether index-time scoring information for the field should be stored or not. Only available on `@KeywordField` and `@FullTextField`.
  Enabling norms will improve the quality of scoring. Disabling norms will reduce the disk space used by the index.
  Value: `Norms.YES`, `Norms.NO`, `Norms.DEFAULT`.

`termVector`
- The term vector storing strategy. Only available on `@FullTextField`.
  The different values of this attribute are:

Value | Definition
---|---
`TermVector.YES` | Store the term vectors of each document. This produces two synchronized arrays, one containing the document terms and the other containing each term's frequency.
`TermVector.NO` | Do not store term vectors.
`TermVector.WITH_POSITIONS` | Same as `TermVector.YES`, plus the ordinal position of each occurrence of a term in a document.
`TermVector.WITH_OFFSETS` | Same as `TermVector.YES`, plus the starting and ending offset position information for the terms.
`TermVector.WITH_POSITION_OFFSETS` | A combination of `YES`, `WITH_OFFSETS` and `WITH_POSITIONS`.
`TermVector.WITH_POSITIONS_PAYLOADS` | Same as `TermVector.WITH_POSITIONS`, plus the payload of each occurrence of a term in a document.
`TermVector.WITH_POSITIONS_OFFSETS_PAYLOADS` | Same as `TermVector.WITH_POSITION_OFFSETS`, plus the payload of each occurrence of a term in a document.

Note that the highlighter types requested by the full-text field might affect the finally resolved term vector storing strategy. Since the fast vector highlighter type has specific requirements regarding the term vector storing strategy, if it is requested explicitly or implicitly through the usage of `Highlightable.ANY`, it will set the strategy to `TermVector.WITH_POSITIONS_OFFSETS` unless a strategy was already specified. An exception will be thrown if a non-default strategy that is not compatible with the fast vector highlighter is used.
decimalScale
-
How the scale of a large number (`BigInteger` or `BigDecimal`) should be adjusted before it is indexed as a fixed-precision integer. Only available on `@ScaledNumberField`.
To index numbers that have significant digits after the decimal point, set the `decimalScale` to the number of digits you need indexed. The decimal point will be shifted that many times to the right before indexing, preserving that many digits from the decimal part. To index very large numbers that cannot fit in a long, set the `decimalScale` to a negative value. The decimal point will be shifted that many times to the left before indexing, dropping all digits from the decimal part.
`decimalScale` with strictly positive values is allowed only for `BigDecimal`, since `BigInteger` values have no decimal digits.
Note that the shifting of the decimal point is completely transparent and will not affect how you use the search DSL: you will be expected to provide "normal" `BigDecimal` or `BigInteger` values, and Hibernate Search will apply the `decimalScale` and rounding transparently.
As a result of the rounding, search predicates and sorts will only be as precise as what the `decimalScale` allows.
Note that rounding does not affect projections, which will return the original value without any loss of precision.
A typical use case is monetary amounts, with a decimal scale of 2 because only two digits are generally needed beyond the decimal point.
With the Hibernate ORM integration, a default `decimalScale` is taken automatically from the underlying `scale` value of the corresponding SQL `@Column`, using the Hibernate ORM metadata. The value can be overridden explicitly using the `decimalScale` attribute.
highlightable
-
Whether the field can be highlighted, and if so, which highlighter types can be applied to it, i.e. whether the field value is indexed/stored in a specific format to allow highlighting later when querying. Only available on `@FullTextField`.
While for most cases picking one highlighter type should be enough, this attribute can accept multiple, non-contradicting values. Please refer to the highlighter types section to see which highlighter to select. Available values are:

Value | Definition
---|---
`Highlightable.NO` | Do not allow highlighting on the field.
`Highlightable.ANY` | Allow any highlighter type to be applied for highlighting the field.
`Highlightable.PLAIN` | Allow the plain highlighter type to be applied for highlighting the field.
`Highlightable.UNIFIED` | Allow the unified highlighter type to be applied for highlighting the field.
`Highlightable.FAST_VECTOR` | Allow the fast vector highlighter type to be applied for highlighting the field. This highlighter type requires the term vector storing strategy to be set to `WITH_POSITIONS_OFFSETS` or `WITH_POSITIONS_OFFSETS_PAYLOADS`.
`Highlightable.DEFAULT` | Use the backend-specific default, which depends on the overall field configuration. Elasticsearch's default value is `[Highlightable.PLAIN, Highlightable.UNIFIED]`. Lucene's default depends on the `projectable` value configured for the field: if the field is projectable, the `[PLAIN, UNIFIED]` highlighters are supported; otherwise, highlighting is not supported (`Highlightable.NO`). Additionally, if the term vector storing strategy is set to `WITH_POSITIONS_OFFSETS` or `WITH_POSITIONS_OFFSETS_PAYLOADS`, both backends will also support the `FAST_VECTOR` highlighter, provided they already support the other two (`[PLAIN, UNIFIED]`).
dimension
-
Features detailed below are incubating: they are still under active development.
The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases.
You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
The size of the stored vectors. This is a required attribute. The size should match that of the vectors produced by the model used to convert the data into a vector representation. It is expected to be a positive integer; the maximum accepted value is backend-specific. For the Lucene backend, the dimension must be in the `[1, 4096]` range; for the Elasticsearch backend, the range depends on the distribution. See the Elasticsearch/OpenSearch specific documentation to learn about the vector types of these distributions.
Only available on `@VectorField`.
vectorSimilarity
-
Features detailed below are incubating: they are still under active development.
The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases.
You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
Defines how vector similarity is calculated during a vector search.
Only available on `@VectorField`.

Value | Definition
---|---
`VectorSimilarity.L2` | An L2 (Euclidean) norm; a sensible default for most scenarios. The distance between vectors `x` and `y` is calculated as \(d(x,y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2 }\) and the score function is \(s = \frac{1}{1+d^2}\).
`VectorSimilarity.DOT_PRODUCT` | Inner product (dot product in particular). The distance between vectors `x` and `y` is calculated as \(d(x,y) = \sum_{i=1}^{n} x_i \cdot y_i\) and the score function is \(s = \frac{1}{1+d}\). To use this similarity efficiently, both index and search vectors must be normalized; otherwise search may produce poor results. Floating-point vectors must be normalized to unit length, while byte vectors should simply all have the same norm.
`VectorSimilarity.COSINE` | Cosine similarity. The distance between vectors `x` and `y` is calculated as \(d(x,y) = 1 - \frac{\sum_{i=1}^{n} x_i \cdot y_i }{ \sqrt{ \sum_{i=1}^{n} x_i^2 } \sqrt{ \sum_{i=1}^{n} y_i^2 }}\) and the score function is \(s = \frac{1}{1+d}\).
`VectorSimilarity.MAX_INNER_PRODUCT` | Similar to dot-product similarity, but does not require vector normalization. The distance between vectors `x` and `y` is calculated as \(d(x,y) = \sum_{i=1}^{n} x_i \cdot y_i\) and the score function is \(s = \begin{cases} \frac{1}{1-d} & \text{if } d < 0\\ d+1 & \text{otherwise} \end{cases}\).
`VectorSimilarity.DEFAULT` | Use the backend-specific default. For the Lucene backend, an L2 similarity is used.

Table 4. How the vector similarity maps to a backend-specific value

Hibernate Search Value | Lucene Backend | Elasticsearch Backend | Elasticsearch Backend (OpenSearch distribution)
---|---|---|---
`DEFAULT` | `EUCLIDEAN` | Elasticsearch default | OpenSearch default
`L2` | `EUCLIDEAN` | `l2_norm` | `l2`
`DOT_PRODUCT` | `DOT_PRODUCT` | `dot_product` | Currently not supported by OpenSearch; will result in an exception.
`COSINE` | `COSINE` | `cosine` | `cosinesimil`
`MAX_INNER_PRODUCT` | `MAXIMUM_INNER_PRODUCT` | `max_inner_product` | Currently not supported by OpenSearch; will result in an exception.
-
efConstruction
-
Features detailed below are incubating: they are still under active development.
The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases.
You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
efConstruction
is the size of the dynamic list used during k-NN graph creation. It affects how vectors are stored. Higher values lead to a more accurate graph but slower indexing speed.Default value is backend-specific.
Only available on
@VectorField
. -
m
-
Features detailed below are incubating: they are still under active development.
The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases.
You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed.
The number of neighbors each node will be connected to in the HNSW (Hierarchical Navigable Small World) graph. Modifying this value will have an impact on memory consumption. It is recommended to keep this value between 2 and 100.
Default value is backend-specific.
Only available on
@VectorField
.
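As an illustration, here is a minimal sketch of a vector field mapping. The `Article` entity, the field name, and the dimension of 384 are hypothetical, and assume an embedding model producing 384-dimensional float vectors:

@Entity
@Indexed
public class Article {
    @Id
    private Integer id;

    // The dimension must match the size of the vectors produced by the
    // embedding model used to encode the content; the similarity function
    // is optional and defaults to a backend-specific value.
    @VectorField(dimension = 384, vectorSimilarity = VectorSimilarity.COSINE)
    private float[] contentEmbedding;

    // Getters and setters
    // ...
}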
10.5.4. Supported property types
Below is a list of all types with built-in value bridges, i.e. property types that are supported out of the box when mapping a property to an index field. Unless noted otherwise, the value assigned to the index field, i.e. the value passed to the underlying backend for indexing, is the property value itself. When a replacement value for null is configured through `indexNullAs`, it is defined as a `String` and parsed using a type-specific format.
For information about the underlying indexing and storage used by the backend, see Lucene field types or Elasticsearch field types depending on your backend.

Supported property types include:

- `String`
- `Character`, `char`: indexed as a single-character `String`; `indexNullAs` accepts any single-character `String`.
- `Boolean`, `boolean`: `indexNullAs` accepts the strings `true` or `false`, ignoring case.
- `Byte`/`byte`, `Short`/`short`, `Integer`/`int`, `Long`/`long`, `Float`/`float`, `Double`/`double`
- `java.math.BigDecimal`, `java.math.BigInteger`
- `Enum` types: indexed as the name of the enum constant.
- `java.time` types: `Instant`, `LocalDate`, `LocalTime`, `LocalDateTime`, `ZonedDateTime`, `OffsetDateTime`, `OffsetTime`, `Year`, `YearMonth`, `MonthDay`
- Legacy date/time types: `java.util.Date`, `java.util.Calendar`, `java.sql.Date`, `java.sql.Time`, `java.sql.Timestamp` (see Support for legacy java.util date/time APIs below)
- `java.net.URI`, `java.net.URL`
- `java.util.UUID`
- `org.hibernate.search.engine.spatial.GeoPoint`: `indexNullAs` accepts a latitude as double and a longitude as double, separated by a comma.
Range and resolution of date/time fields
With a few exceptions, most date and time values are passed as-is to the backend. Internally, however, the Lucene and Elasticsearch backends use a different representation of date/time types. As a result, date and time fields stored in the index may have a smaller range and resolution than the corresponding Java type. The documentation of each backend provides more information: see here for Lucene and here for Elasticsearch.
10.5.5. Support for legacy java.util date/time APIs
Using legacy date/time types such as `java.util.Calendar`, `java.util.Date`, `java.sql.Timestamp`, `java.sql.Date` or `java.sql.Time` is not recommended, due to their numerous quirks and shortcomings. The `java.time` package introduced in Java 8 should generally be preferred.
That being said, integration constraints may force you to rely on the legacy date/time APIs, which is why Hibernate Search still attempts to support them on a best effort basis.
Since Hibernate Search uses the java.time
APIs to represent date/time internally,
the legacy date/time types need to be converted before they can be indexed.
Hibernate Search keeps things simple:
java.util.Date
, java.util.Calendar
, etc. will be converted using their time-value (number of milliseconds since the epoch),
which will be assumed to represent the same date/time in Java 8 APIs.
In the case of java.util.Calendar
, timezone information will be preserved for projections.
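To illustrate, the conversion described above is roughly equivalent to the following sketch. This is a simplification, not Hibernate Search's actual implementation, which also has to account for the calendar differences discussed below:

// A java.util.Date is interpreted through its time-value:
// the number of milliseconds since the epoch.
Instant instant = Instant.ofEpochMilli( date.getTime() );

// For a java.util.Calendar, the timezone is additionally preserved,
// so that projections can restore an equivalent Calendar instance.
ZonedDateTime zonedDateTime = Instant.ofEpochMilli( calendar.getTimeInMillis() )
        .atZone( calendar.getTimeZone().toZoneId() );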
For all dates after 1900, this will work exactly as expected.
Before 1900, indexing and searching through Hibernate Search APIs will also work as expected,
but if you need to access the index natively, for example through direct HTTP calls to an Elasticsearch server,
you will notice that the indexed values are slightly "off".
This is caused by differences in the implementation of java.time
and legacy date/time APIs
which lead to slight differences in the interpretation of time-values (number of milliseconds since the epoch).
The "drifts" are consistent: they will also happen when building a predicate, and they will happen in the opposite direction when projecting. As a result, the differences will not be visible from an application relying on the Hibernate Search APIs exclusively. They will, however, be visible when accessing indexes natively.
For the large majority of use cases, this will not be a problem.
If this behavior is not acceptable for your application,
you should look into implementing custom value bridges
and instructing Hibernate Search to use them by default for java.util.Date
, java.util.Calendar
, etc.:
see Assigning default bridges with the bridge resolver.
Technically, conversions are difficult because the legacy date/time APIs and the `java.time` APIs do not rely on the same calendar system. In particular, `java.util.Date` and `java.util.Calendar` use a hybrid Julian/Gregorian calendar that switches at the historical cutover date (October 15, 1582), while `java.time` uses the proleptic Gregorian calendar for all dates.
This is the main problem, but there may be others.
10.5.6. Mapping custom property types
Even types that are not supported out of the box can be mapped. There are various solutions, some simple and some more powerful, but they all come down to extracting data from the unsupported type and converting it to types that are supported by the backend.
There are two cases to distinguish between:
- If the unsupported type is simply a container (`List<String>`) or multiple nested containers (`Map<Integer, List<String>>`) whose elements have a supported type, then what you need is a container extractor. See Mapping container types with container extractors for more information.
- Otherwise, you will have to rely on a custom component, called a bridge, to extract data from your type. See Binding and bridges for more information on custom bridges.
10.5.7. Programmatic mapping
You can map properties of an entity to an index field directly through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
Mapping properties to index fields with .genericField(), .fullTextField(), …

TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "title" )
.fullTextField()
.analyzer( "english" ).projectable( Projectable.YES )
.keywordField( "title_sort" )
.normalizer( "english" ).sortable( Sortable.YES );
bookMapping.property( "pageCount" )
.genericField().projectable( Projectable.YES ).sortable( Sortable.YES );
10.6. Mapping associated elements with @IndexedEmbedded
10.6.1. Basics
Using only @Indexed
combined with @*Field
annotations allows indexing an entity and its direct properties,
which is nice but simplistic.
A real-world model will include multiple object types holding references to one another,
like the authors
association in the example below.
This mapping will declare the following fields in the Book
index:
-
title
-
… and nothing else.
@Entity
@Indexed (1)
public class Book {
@Id
private Integer id;
@FullTextField(analyzer = "english") (2)
private String title;
@ManyToMany
private List<Author> authors = new ArrayList<>(); (3)
public Book() {
}
// Getters and setters
// ...
}
@Entity
public class Author {
@Id
private Integer id;
private String name;
@ManyToMany(mappedBy = "authors")
private List<Book> books = new ArrayList<>();
public Author() {
}
// Getters and setters
// ...
}
1 | The Book entity is indexed. |
2 | The title of the book is mapped to an index field. |
3 | But how to index the Author name into the Book index? |
When searching for a book, users will likely need to search by author name.
In the world of high-performance indexes, cross-index joins are costly and usually not an option.
The best way to address such use cases is generally to copy data:
when indexing a Book
, just copy the name of all its authors into the Book
document.
That’s what @IndexedEmbedded
does:
it instructs Hibernate Search to embed the fields of an associated object into the main object.
In the example below, it will instruct Hibernate Search to embed the name
field
defined in Author
into Book
, creating the field authors.name
.
Using @IndexedEmbedded to index associated elements

This mapping will declare the following fields in the Book index:
-
title
-
authors.name
@Entity
@Indexed
public class Book {
@Id
private Integer id;
@FullTextField(analyzer = "english")
private String title;
@ManyToMany
@IndexedEmbedded (1)
private List<Author> authors = new ArrayList<>();
public Book() {
}
// Getters and setters
// ...
}
@Entity
public class Author {
@Id
private Integer id;
@FullTextField(analyzer = "name") (2)
private String name;
@ManyToMany(mappedBy = "authors")
private List<Book> books = new ArrayList<>();
public Author() {
}
// Getters and setters
// ...
}
1 | Add an @IndexedEmbedded to the authors property. |
2 | Map Author.name to an index field, even though Author is not directly mapped to an index (no @Indexed ). |
Document identifiers are not index fields. Consequently, they will be ignored by @IndexedEmbedded. To embed another entity's identifier with @IndexedEmbedded, see Embedding the entity identifier.
When @IndexedEmbedded is applied to an association, that association must be bidirectional; otherwise, Hibernate Search will throw an exception on bootstrap. See Reindexing when embedded elements change for the reasons behind this restriction and ways to circumvent it.
Index-embedding can be nested on multiple levels; for example you can decide to index-embed the place of birth of authors, to be able to search for books written by Russian authors exclusively:
Nesting multiple @IndexedEmbedded
This mapping will declare the following fields in the Book
index:
-
title
-
authors.name
-
authors.placeOfBirth.country
@Entity
@Indexed
public class Book {
@Id
private Integer id;
@FullTextField(analyzer = "english")
private String title;
@ManyToMany
@IndexedEmbedded (1)
private List<Author> authors = new ArrayList<>();
public Book() {
}
// Getters and setters
// ...
}
@Entity
public class Author {
@Id
private Integer id;
@FullTextField(analyzer = "name") (2)
private String name;
@Embedded
@IndexedEmbedded (3)
private Address placeOfBirth;
@ManyToMany(mappedBy = "authors")
private List<Book> books = new ArrayList<>();
public Author() {
}
// Getters and setters
// ...
}
@Embeddable
public class Address {
@FullTextField(analyzer = "name") (4)
private String country;
private String city;
private String street;
public Address() {
}
// Getters and setters
// ...
}
1 | Add an @IndexedEmbedded to the authors property. |
2 | Map Author.name to an index field, even though Author is not directly mapped to an index (no @Indexed ). |
3 | Add an @IndexedEmbedded to the placeOfBirth property. |
4 | Map Address.country to an index field, even though Address is not directly mapped to an index (no @Indexed ). |
By default, @IndexedEmbedded will also embed every @IndexedEmbedded encountered in the associated element, recursively; with cyclic associations, this can quickly lead to an unmanageable, or even infinite, number of fields. To address this, see Filtering embedded fields and breaking @IndexedEmbedded cycles.
10.6.2. @IndexedEmbedded and null values
When properties targeted by an @IndexedEmbedded
contain null
elements,
these elements are simply not indexed.
Contrary to Mapping a property to an index field with @GenericField
, @FullTextField
, …,
there is no indexNullAs
feature to index a specific value for null
objects,
but you can take advantage of the exists
predicate
in search queries to look for documents where a given @IndexedEmbedded
has or doesn’t have a value:
simply pass the name of the object field to the exists
predicate,
for example authors
in the example above.
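For example, the following sketch matches books depending on whether they have any indexed author; it assumes the Book/authors mapping above and an open SearchSession named searchSession:

List<Book> booksWithAuthors = searchSession.search( Book.class )
        .where( f -> f.exists().field( "authors" ) ) // at least one indexed author
        .fetchHits( 20 );

List<Book> booksWithoutAuthors = searchSession.search( Book.class )
        .where( f -> f.not( f.exists().field( "authors" ) ) ) // no indexed author at all
        .fetchHits( 20 );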
10.6.3. @IndexedEmbedded on container types
When properties targeted by an @IndexedEmbedded
have a container type
(List
, Optional
, Map
, …),
the innermost elements will be embedded.
For example for a property of type List<MyEntity>
, elements of type MyEntity
will be embedded.
This default behavior and ways to override it are described in the section Mapping container types with container extractors.
10.6.4. Setting the object field name with name
By default, @IndexedEmbedded
will create an object field with the same name as the annotated property,
and will add embedded fields to that object field.
So if @IndexedEmbedded
is applied to a property named authors
in a Book
entity,
the index field name
of the authors will be copied to the index field authors.name
when Book
is indexed.
It is possible to change the name of the object field by setting the name
attribute;
for example using @IndexedEmbedded(name = "allAuthors")
in the example above will result
in the name of authors being copied to the index field allAuthors.name
instead of authors.name
.
The name must not contain the dot character (.).
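For example, a minimal sketch reusing the Book/authors mapping above:

@ManyToMany
@IndexedEmbedded(name = "allAuthors") // embedded fields are now created under allAuthors.*
private List<Author> authors = new ArrayList<>();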
10.6.5. Setting the field name prefix with prefix
The prefix attribute in @IndexedEmbedded is deprecated and will be removed in a future version. Use name instead.
By default, @IndexedEmbedded
will prepend the name of embedded fields
with the name of the property it is applied to followed by a dot.
So if @IndexedEmbedded
is applied to a property named authors
in a Book
entity,
the name
field of the authors will be copied to the authors.name
field when Book
is indexed.
It is possible to change this prefix by setting the prefix
attribute,
for example @IndexedEmbedded(prefix = "author.")
(do not forget the trailing dot!).
The prefix should generally be a sequence of non-dots ending with a single dot, for example `my_Property.`. Changing the prefix to a string that does not include any dot at the end (`my_Property`), or that includes a dot anywhere but at the very end (`my.Property.`), is not recommended. In particular, a prefix that does not end with a dot will lead to incorrect behavior in some APIs exposed to custom bridges.
10.6.6. Casting the target of @IndexedEmbedded with targetType
By default, the type of indexed-embedded values is detected automatically using reflection,
taking into account container extraction if relevant;
for example @IndexedEmbedded List<MyEntity>
will be detected as having values of type MyEntity
.
Fields to be embedded will be inferred from the mapping of the value type and its supertypes;
in the example, @GenericField
annotations present on MyEntity
and its superclasses will be taken into account,
but annotations defined in its subclasses will be ignored.
If for some reason a schema does not expose the correct type for a property
(e.g. a raw List
, or List<MyEntityInterface>
instead of List<MyEntityImpl>
)
it is possible to define the expected type of values
by setting the targetType
attribute in @IndexedEmbedded
.
On bootstrap, Hibernate Search will then resolve fields to be embedded based on the given target type,
and at runtime it will cast values to the given target type.
Failures to cast indexed-embedded values to the designated type will be propagated and lead to indexing failure.
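A minimal sketch, using hypothetical MyEntityInterface/MyEntityImpl types where the schema only exposes the interface:

// The schema exposes List<MyEntityInterface>, but at runtime the values
// are known to be MyEntityImpl instances, whose mapping should be used.
@OneToMany
@IndexedEmbedded(targetType = MyEntityImpl.class)
private List<MyEntityInterface> entities = new ArrayList<>();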
10.6.7. Reindexing when embedded elements change
When the "embedded" entity changes, Hibernate Search will handle reindexing of the "embedding" entity.
This will work transparently most of the time,
as long as the association @IndexedEmbedded
is applied to is bidirectional
(uses Hibernate ORM’s mappedBy
).
When Hibernate Search is unable to handle an association, it will throw an exception on bootstrap. If this happens, refer to Basics to know more.
10.6.8. Embedding the entity identifier
Mapping a property as an identifier in the indexed-embedded type
will not automatically result into it being embedded when using @IndexedEmbedded
on that type,
because document identifiers are not fields.
To embed the data of such a property, you can use @IndexedEmbedded(includeEmbeddedObjectId = true)
,
which will have Hibernate Search automatically insert a field in the resulting embedded object
for the indexed-embedded type’s identifier property.
The index field will be defined as if the following field annotation
was put on the identifier property of the embedded type:
@GenericField(searchable = Searchable.YES, projectable = Projectable.YES)
.
The name of the index field will be the name of the identifier property.
Its bridge will be the identifier bridge referenced by the embedded type’s
@DocumentId
annotation, if any,
or the default value bridge for the identifier property type’s, by default.
If you need more advanced mapping (custom name, custom bridge, sortable, …), do not use includeEmbeddedObjectId.
Instead, define the field explicitly in the indexed-embedded type by annotating the identifier property with @GenericField or a similar field annotation.
Below is an example of using includeEmbeddedObjectId.

Using includeEmbeddedObjectId

This mapping will declare the following fields in the Employee
index:
-
name
-
department.name
: implicitly included by@IndexedEmbedded
. -
department.id
: explicitly inserted by includeEmbeddedObjectId = true.
@Entity
public class Department {
@Id
private Integer id; (1)
@FullTextField
private String name;
@OneToMany(mappedBy = "department")
private List<Employee> employees = new ArrayList<>();
// Getters and setters
// ...
}
1 | The Department identifier is not mapped to an index field (no @*Field annotation). |
@Entity
@Indexed
public class Employee {
@Id
private Integer id;
@FullTextField
private String name;
@ManyToOne
@IndexedEmbedded(includeEmbeddedObjectId = true) (1)
private Department department;
// Getters and setters
// ...
}
1 | @IndexedEmbedded will insert a department.id field into the Employee index for the Department identifier,
even though in Department the identifier property is not mapped to an index field. |
10.6.9. Filtering embedded fields and breaking @IndexedEmbedded
cycles
By default, @IndexedEmbedded
will "embed" everything:
every field encountered in the indexed-embedded element,
and every @IndexedEmbedded
encountered in the indexed-embedded element,
recursively.
This will work just fine for simpler use cases, but may lead to problems for more complex models:
-
If the indexed-embedded element declares many index fields (Hibernate Search fields), only some of which are actually useful to the "index-embedding" type, the extra fields will decrease indexing performance needlessly.
-
If there is a cycle of
@IndexedEmbedded
(e.g.A
index-embedsb
of typeB
, which index-embedsa
of typeA
) the index-embedding type will end up with an infinite amount of fields (a.b.someField
,a.b.a.b.someField
,a.b.a.b.a.b.someField
, …), which Hibernate Search will detect and reject with an exception.
To address these problems, it is possible to filter the fields to embed,
to only include those that are actually useful.
Available filtering attributes on @IndexedEmbedded
are:
includePaths
-
The paths of index fields from the indexed-embedded element that should be embedded.
Provided paths must be relative to the indexed-embedded element, i.e. they must not include its name or prefix.
This takes precedence over
includeDepth
(see below).Cannot be used in combination with
excludePaths
in the same@IndexedEmbedded
. excludePaths
-
The paths of index fields from the indexed-embedded element that must not be embedded.
Provided paths must be relative to the indexed-embedded element, i.e. they must not include its name or prefix.
This takes precedence over
includeDepth
(see below).Cannot be used in combination with
includePaths
in the same@IndexedEmbedded
. includeDepth
-
The number of levels of indexed-embedded that will have all their fields included by default.
includeDepth
is the number of@IndexedEmbedded
that will be traversed and for which all fields of the indexed-embedded element will be included, even if these fields are not included explicitly throughincludePaths
, unless these fields are excluded explicitly throughexcludePaths
:-
includeDepth=0
means that fields of this indexed-embedded element are not included, nor is any field of nested indexed-embedded elements, unless these fields are included explicitly throughincludePaths
. -
includeDepth=1
means that fields of this indexed-embedded element are included, unless these fields are excluded explicitly throughexcludePaths
, but not fields of nested indexed-embedded elements (@IndexedEmbedded
within this@IndexedEmbedded
), unless these fields are included explicitly throughincludePaths
. -
includeDepth=2
means that fields of this indexed-embedded element and fields of the immediately nested indexed-embedded (@IndexedEmbedded
within this@IndexedEmbedded
) elements are included, unless these fields are explicitly excluded throughexcludePaths
, but not fields of nested indexed-embedded elements beyond that (@IndexedEmbedded
within an@IndexedEmbedded
within this@IndexedEmbedded
), unless these fields are included explicitly throughincludePaths
. -
And so on.
The default value depends on the value of the
includePaths
attribute:-
if
includePaths
is empty, the default isInteger.MAX_VALUE
(include all fields at every level) -
if
includePaths
is not empty, the default is0
(only include fields included explicitly).
-
Dynamic fields and filtering
Dynamic fields are not directly affected by filtering rules: a dynamic field will be included if and only if its parent is included.
Mixing includePaths and excludePaths at different nesting levels
In general, it is possible to use includePaths and excludePaths at different levels of nested @IndexedEmbedded. When doing so, keep in mind that the filter at each level can only reference paths that were not already excluded at a higher level.
Below are three examples: one leveraging includePaths
only,
one leveraging excludePaths
, and one leveraging includePaths
and includeDepth
.
Filtering indexed-embedded fields with includePaths
This mapping will declare the following fields in the Human
index:
-
name
-
nickname
-
parents.name
: explicitly included becauseincludePaths
onparents
includesname
. -
parents.nickname
: explicitly included becauseincludePaths
onparents
includesnickname
. -
parents.parents.name
: explicitly included becauseincludePaths
onparents
includesparents.name
.
The following fields in particular are excluded:
-
parents.parents.nickname
: not implicitly included becauseincludeDepth
is not set and defaults to0
, and not explicitly included either becauseincludePaths
onparents
does not includeparents.nickname
. -
parents.parents.parents.name
: not implicitly included becauseincludeDepth
is not set and defaults to0
, and not explicitly included either becauseincludePaths
onparents
does not includeparents.parents.name
.
@Entity
@Indexed
public class Human {
@Id
private Integer id;
@FullTextField(analyzer = "name")
private String name;
@FullTextField(analyzer = "name")
private String nickname;
@ManyToMany
@IndexedEmbedded(includePaths = { "name", "nickname", "parents.name" })
private List<Human> parents = new ArrayList<>();
@ManyToMany(mappedBy = "parents")
private List<Human> children = new ArrayList<>();
public Human() {
}
// Getters and setters
// ...
}
Filtering indexed-embedded fields with excludePaths
This mapping will result in the same schema as in the Filtering indexed-embedded fields with includePaths
example, but through using the excludePaths
instead.
Following fields in the Human
index will be declared:
-
name
-
nickname
-
parents.name
: implicitly included becauseincludeDepth
onparents
defaults toInteger.MAX_VALUE
. -
parents.nickname
: implicitly included becauseincludeDepth
onparents
defaults toInteger.MAX_VALUE
. -
parents.parents.name
: implicitly included becauseincludeDepth
onparents
defaults toInteger.MAX_VALUE
.
The following fields in particular are excluded:
-
parents.parents.nickname
: not included becauseexcludePaths
explicitly excludesparents.nickname
. -
parents.parents.parents
/parents.parents.parents.<any-field>
: not included becauseexcludePaths
explicitly excludesparents.parents
stopping any further traversing.
@Entity
@Indexed
public class Human {
@Id
private Integer id;
@FullTextField(analyzer = "name")
private String name;
@FullTextField(analyzer = "name")
private String nickname;
@ManyToMany
@IndexedEmbedded(excludePaths = { "parents.nickname", "parents.parents" })
private List<Human> parents = new ArrayList<>();
@ManyToMany(mappedBy = "parents")
private List<Human> children = new ArrayList<>();
public Human() {
}
// Getters and setters
// ...
}
Filtering indexed-embedded fields with includePaths and includeDepth
This mapping will declare the following fields in the Human
index:
-
name
-
surname
-
parents.name
: implicitly at depth0
becauseincludeDepth > 0
(soparents.*
is included implicitly). -
parents.nickname
: implicitly included at depth0
becauseincludeDepth > 0
(soparents.*
is included implicitly). -
parents.parents.name
: implicitly included at depth1
becauseincludeDepth > 1
(soparents.parents.*
is included implicitly). -
parents.parents.nickname
: implicitly included at depth1
becauseincludeDepth > 1
(soparents.parents.*
is included implicitly). -
parents.parents.parents.name
: not implicitly included at depth2
becauseincludeDepth = 2
(soparents.parents.parents
is included implicitly, but subfields can only be included explicitly) but explicitly included becauseincludePaths
onparents
includesparents.parents.name
.
The following fields in particular are excluded:
-
parents.parents.parents.nickname
: not implicitly included at depth2
becauseincludeDepth = 2
(soparents.parents.parents
is included implicitly, but subfields must be included explicitly) and not explicitly included either becauseincludePaths
onparents
does not includeparents.parents.nickname
. -
parents.parents.parents.parents.name
: not implicitly included at depth3
becauseincludeDepth = 2
(soparents.parents.parents
is included implicitly, butparents.parents.parents.parents
and subfields can only be included explicitly) and not explicitly included either becauseincludePaths
onparents
does not includeparents.parents.parents.name
.
@Entity
@Indexed
public class Human {
@Id
private Integer id;
@FullTextField(analyzer = "name")
private String name;
@FullTextField(analyzer = "name")
private String nickname;
@ManyToMany
@IndexedEmbedded(includeDepth = 2, includePaths = { "parents.parents.name" })
private List<Human> parents = new ArrayList<>();
@ManyToMany(mappedBy = "parents")
private List<Human> children = new ArrayList<>();
public Human() {
}
// Getters and setters
// ...
}
10.6.10. Structuring embedded elements as nested documents using structure
Indexed-embedded fields can be structured in one of two ways,
configured through the structure
attribute of the @IndexedEmbedded
annotation.
To illustrate structure options, let’s assume the class Book
is annotated with @Indexed
and its authors
property is annotated with @IndexedEmbedded
:
- Book instance
  - title = Leviathan Wakes
  - authors =
    - Author instance
      - firstName = Daniel
      - lastName = Abraham
    - Author instance
      - firstName = Ty
      - lastName = Frank
DEFAULT or FLATTENED structure
By default, or when using @IndexedEmbedded(structure = FLATTENED)
as shown below,
indexed-embedded fields are "flattened",
meaning that the tree structure is not preserved.
@IndexedEmbedded
with a flattened structure@Entity
@Indexed
public class Book {
@Id
private Integer id;
@FullTextField(analyzer = "english")
private String title;
@ManyToMany
@IndexedEmbedded(structure = ObjectStructure.FLATTENED) (1)
private List<Author> authors = new ArrayList<>();
public Book() {
}
// Getters and setters
// ...
}
1 | Explicitly set the structure of indexed-embedded to FLATTENED .
This is not strictly necessary, since FLATTENED is the default. |
@Entity
public class Author {
@Id
private Integer id;
@FullTextField(analyzer = "name")
private String firstName;
@FullTextField(analyzer = "name")
private String lastName;
@ManyToMany(mappedBy = "authors")
private List<Book> books = new ArrayList<>();
public Author() {
}
// Getters and setters
// ...
}
The book instance mentioned earlier would be indexed with a structure roughly similar to this:
- Book document
  - title = Leviathan Wakes
  - authors.firstName = [Daniel, Ty]
  - authors.lastName = [Abraham, Frank]
The authors.firstName
and authors.lastName
fields were "flattened"
and now each has two values;
the knowledge of which last name corresponds to which first name has been lost.
This is more efficient for indexing and querying, but can cause unexpected behavior when querying the index on both the author’s first name and the author’s last name.
For example, the book instance described above
would show up as a match to a query such as authors.firstname:Ty AND authors.lastname:Abraham
,
even though "Ty Abraham" is not one of this book’s authors:
List<Book> hits = searchSession.search( Book.class )
.where( f -> f.and(
f.match().field( "authors.firstName" ).matching( "Ty" ), (1)
f.match().field( "authors.lastName" ).matching( "Abraham" ) (1)
) )
.fetchHits( 20 );
assertThat( hits ).isNotEmpty(); (2)
1 | Require that hits have an author with the first name Ty and an author with the last name Abraham …
but not necessarily the same author! |
2 | The hits will include a book whose authors are "Daniel Abraham" and "Ty Frank", even though no single author is named "Ty Abraham". |
NESTED structure
When indexed-embedded elements are "nested", i.e. when using @IndexedEmbedded(structure = NESTED)
as shown below,
the tree structure is preserved by transparently creating one separate "nested" document
for each indexed-embedded element.
@IndexedEmbedded
with a nested structure@Entity
@Indexed
public class Book {
@Id
private Integer id;
@FullTextField(analyzer = "english")
private String title;
@ManyToMany
@IndexedEmbedded(structure = ObjectStructure.NESTED) (1)
private List<Author> authors = new ArrayList<>();
public Book() {
}
// Getters and setters
// ...
}
1 | Explicitly set the structure of indexed-embedded objects to NESTED . |
@Entity
public class Author {
@Id
private Integer id;
@FullTextField(analyzer = "name")
private String firstName;
@FullTextField(analyzer = "name")
private String lastName;
@ManyToMany(mappedBy = "authors")
private List<Book> books = new ArrayList<>();
public Author() {
}
// Getters and setters
// ...
}
The book instance mentioned earlier would be indexed with a structure roughly similar to this:
- Book document
  - title = Leviathan Wakes
  - Nested documents
    - Nested document #1 for "authors"
      - authors.firstName = Daniel
      - authors.lastName = Abraham
    - Nested document #2 for "authors"
      - authors.firstName = Ty
      - authors.lastName = Frank
The book is effectively indexed as three documents: the root document for the book, and two internal, "nested" documents for the authors, preserving the knowledge of which last name corresponds to which first name at the cost of degraded performance when indexing and querying.
The nested documents are "hidden" and won’t directly show up in search results. No need to worry about nested documents being "mixed up" with root documents. |
If special care is taken when building predicates on fields within nested documents,
using a nested
predicate,
queries containing predicates on both the author’s first name and the author’s last name
will behave as one would (intuitively) expect.
For example, the book instance described above
would not show up as a match to a query such as authors.firstname:Ty AND authors.lastname:Abraham
,
thanks to the nested
predicate (which can only be used when indexing with the NESTED
structure):
List<Book> hits = searchSession.search( Book.class )
.where( f -> f.nested( "authors" ) (1)
.add( f.match().field( "authors.firstName" ).matching( "Ty" ) ) (2)
.add( f.match().field( "authors.lastName" ).matching( "Abraham" ) ) ) (2)
.fetchHits( 20 );
assertThat( hits ).isEmpty(); (3)
1 | Require that the two constraints (first name and last name) apply to the same author. |
2 | Require that hits have an author with the first name Ty and an author with the last name Abraham . |
3 | The hits will not include a book whose authors are "Daniel Abraham" and "Ty Frank". |
With the Lucene backend, the nested structure is also necessary if you want to perform object projections.
10.6.11. Filtering association elements
Sometimes, only some elements of an association should be included in an @IndexedEmbedded
.
For example a Book
entity might index-embed BookEdition
instances,
but some editions might be retired and thus need to be filtered out before indexing.
Such filtering can be achieved by applying @IndexedEmbedded
to a transient getter representing the filtered association,
and configuring reindexing with @AssociationInverseSide
and @IndexingDependency.derivedFrom
.
@IndexedEmbedded
association with a transient getter, @AssociationInverseSide
and @IndexingDependency.derivedFrom
@Entity
@Indexed
public class Book {
@Id
private Integer id;
@FullTextField(analyzer = "english")
private String title;
@OneToMany(mappedBy = "book")
@OrderBy("id asc")
private List<BookEdition> editions = new ArrayList<>(); (1)
public Book() {
}
// Getters and setters
// ...
@Transient (2)
@IndexedEmbedded (3)
@AssociationInverseSide(inversePath = @ObjectPath({ (4)
@PropertyValue(propertyName = "book")
}))
@IndexingDependency(derivedFrom = @ObjectPath({ (5)
@PropertyValue(propertyName = "editions"),
@PropertyValue(propertyName = "status")
}))
public List<BookEdition> getEditionsNotRetired() {
return editions.stream()
.filter( e -> e.getStatus() != BookEdition.Status.RETIRED )
.collect( Collectors.toList() );
}
}
@Entity
public class BookEdition {
public enum Status {
PUBLISHING,
RETIRED
}
@Id
private Integer id;
@ManyToOne
private Book book;
@FullTextField(analyzer = "english")
private String label;
private Status status; (6)
public BookEdition() {
}
// Getters and setters
// ...
}
1 | The association between Book and BookEdition is mapped in Hibernate ORM, but not Hibernate Search. |
2 | The transient editionsNotRetired property dynamically returns the editions that are not retired. |
3 | @IndexedEmbedded is applied to editionsNotRetired instead of editions.
If we wanted to, we could use @IndexedEmbedded(name = "editions") to make this transparent when searching. |
4 | Hibernate ORM does not know about editionsNotRetired , so Hibernate Search cannot infer the inverse side of this "filtered" association.
Thus, we use @AssociationInverseSide to tell Hibernate Search that.
Should the label of a BookEdition be modified, Hibernate Search
will use this information to retrieve the corresponding Book to reindex. |
5 | We use @IndexingDependency.derivedFrom to tell Hibernate Search
that whenever the status of an edition changes,
the result of getEditionsNotRetired() may have changed as well,
requiring reindexing. |
6 | While BookEdition#status is not annotated, Hibernate Search will still
track its changes because of the @IndexingDependency annotation in Book . |
10.6.12. Programmatic mapping
You can embed the fields of an associated object into the main object through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
Using .indexedEmbedded() to index associated elements

This mapping will declare the following fields in the Book index:
-
title
-
authors.name
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "title" )
.fullTextField().analyzer( "english" );
bookMapping.property( "authors" )
.indexedEmbedded();
TypeMappingStep authorMapping = mapping.type( Author.class );
authorMapping.property( "name" )
.fullTextField().analyzer( "name" );
10.7. Mapping container types with container extractors
10.7.1. Basics
Most built-in annotations applied to properties will work transparently when applied to container types:
-
@GenericField
applied to a property of typeString
will index the property value directly. -
@GenericField
applied to a property of typeOptionalInt
will index the optional’s value (an integer). -
@GenericField
applied to a property of typeList<String>
will index the list elements (strings). -
@GenericField
applied to a property of typeMap<Integer, String>
will index the map values (strings). -
@GenericField
applied to a property of typeMap<Integer, List<String>>
will index the list elements in the map values (strings). -
Etc.
Same goes for other field annotations such as @FullTextField
,
as well as @IndexedEmbedded
in particular.
@VectorField is an exception to this behavior: it requires explicit instructions to extract values from a container.
What happens behind the scenes is that Hibernate Search will inspect the property type and attempt to apply "container extractors", picking the first that works.
10.7.2. Explicit container extraction
In some cases, you will want to pick the container extractors to use explicitly.
This is the case when a map’s keys must be indexed, instead of the values.
Relevant annotations offer an extraction
attribute to configure this,
as shown in the example below.
All built-in extractor names are available as constants
in org.hibernate.search.mapper.pojo.extractor.builtin.BuiltinContainerExtractors .
|
Map
keys to an index field using explicit container extractor definition@ElementCollection (1)
@JoinTable(name = "book_pricebyformat")
@MapKeyColumn(name = "format")
@Column(name = "price")
@OrderBy("format asc")
@GenericField( (2)
name = "availableFormats",
extraction = @ContainerExtraction(BuiltinContainerExtractors.MAP_KEY) (3)
)
private Map<BookFormat, BigDecimal> priceByFormat = new LinkedHashMap<>();
1 | This annotation — and those below — are just Hibernate ORM configuration. |
2 | Declare an index field based on the priceByFormat property. |
3 | By default, Hibernate Search would index the map values (the book prices).
This uses the extraction attribute to specify that map keys (the book formats)
must be indexed instead. |
When multiple levels of extractions are necessary,
multiple extractors can be configured:
extraction = @ContainerExtraction(BuiltinContainerExtractors.MAP_KEY, BuiltinContainerExtractors.OPTIONAL) .
However, such complex mappings are unlikely since they are generally not supported by Hibernate ORM.
|
It is possible to implement and use custom container extractors, but at the moment Hibernate Search will not detect that the changes to the data inside such container must trigger the reindexing of a containing element. Hence, the corresponding property must disable reindexing on change. See HSEARCH-3688 for more information. |
10.7.3. Disabling container extraction
In some rare cases, container extraction is not wanted,
and the @GenericField
/@IndexedEmbedded
is meant to be applied to the List
/Optional
/etc. directly.
To ignore the default container extractors,
most annotations offer an extraction
attribute.
Set it as below to disable extraction altogether:
@ManyToMany
@GenericField( (1)
name = "authorCount",
valueBridge = @ValueBridgeRef(type = MyCollectionSizeBridge.class), (2)
extraction = @ContainerExtraction(extract = ContainerExtract.NO) (3)
)
private List<Author> authors = new ArrayList<>();
1 | Declare an index field based on the authors property. |
2 | Instruct Hibernate Search to use the given bridge, which will extract the collection size (the number of authors). |
3 | Because the bridge is applied to the collection as a whole,
and not to each author,
the extraction attribute is used to disable container extraction. |
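The example above references a custom bridge class. Bridges are covered in Binding and bridges; the following is only a minimal sketch of what such a bridge could look like (the class name comes from the example above, while the implementation shown here is an assumption):

// Converts a whole collection into its size, so that the "authorCount"
// field indexes a single integer instead of one value per author.
@SuppressWarnings("rawtypes")
public class MyCollectionSizeBridge implements ValueBridge<Collection, Integer> {
    @Override
    public Integer toIndexedValue(Collection value, ValueBridgeToIndexedValueContext context) {
        return value == null ? null : value.size();
    }
}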
10.7.4. Programmatic mapping
You can pick the container extractors to use explicitly when defining fields or indexed-embeddeds through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
Mapping Map keys to an index field using .extractor(…)/.extractors(…) for explicit container extractor definition

bookMapping.property( "priceByFormat" )
.genericField( "availableFormats" )
.extractor( BuiltinContainerExtractors.MAP_KEY );
Similarly, you can disable container extraction.
.noExtractors()
bookMapping.property( "authors" )
.genericField( "authorCount" )
.valueBridge( new MyCollectionSizeBridge() )
.noExtractors();
10.8. Mapping geo-point types
10.8.1. Basics
Hibernate Search provides a variety of spatial features such as a distance predicate and a distance sort. These features require that spatial coordinates are indexed. More precisely, it requires that a geo-point, i.e. a latitude and longitude in the geographic coordinate system, are indexed.
Geo-points are a bit of an exception,
because there isn’t any type in the standard Java library to represent them.
For that reason, Hibernate Search defines its own interface,
org.hibernate.search.engine.spatial.GeoPoint
.
Since your model probably uses a different type to represent geo-points,
mapping geo-points requires some extra steps.
Two options are available:
10.8.2. Using @GenericField and the GeoPoint interface
When geo-points are represented in your entity model by a dedicated, immutable type,
you can simply make that type implement the GeoPoint
interface,
and use simple property/field mapping with @GenericField
:
Implementing GeoPoint and using @GenericField
@Embeddable
public class MyCoordinates implements GeoPoint { (1)
@Basic
private Double latitude;
@Basic
private Double longitude;
protected MyCoordinates() {
// For Hibernate ORM
}
public MyCoordinates(double latitude, double longitude) {
this.latitude = latitude;
this.longitude = longitude;
}
@Override
public double latitude() { (2)
return latitude;
}
@Override
public double longitude() {
return longitude;
}
}
@Entity
@Indexed
public class Author {
@Id
@GeneratedValue
private Integer id;
private String name;
@Embedded
@GenericField (3)
private MyCoordinates placeOfBirth;
public Author() {
}
// Getters and setters
// ...
}
1 | Model the geo-point as an embeddable implementing GeoPoint .
A custom type with a corresponding Hibernate ORM UserType would work as well. |
2 | The geo-point type must be immutable: it does not declare any setter. |
3 | Apply the @GenericField annotation to the placeOfBirth property holding the coordinates.
An index field named placeOfBirth will be added to the index.
Options generally used on @GenericField can be used here as well. |
The geo-point type must be immutable, i.e. the latitude and longitude of a given instance may never change. This is a core assumption of @GenericField and most other field annotations: changes to the coordinates would go undetected and would not trigger reindexing of the corresponding documents.
If the type holding your coordinates is mutable, do not use @GenericField; map the coordinates with @GeoPointBinding, @Latitude and @Longitude instead (see below).
If your geo-point type is immutable, but extending the GeoPoint interface is not an option, you can also use a custom value bridge converting between the custom geo-point type and GeoPoint. GeoPoint offers static methods to quickly build GeoPoint instances.
10.8.3. Using @GeoPointBinding, @Latitude and @Longitude
For cases where coordinates are stored in a mutable object,
the solution is the @GeoPointBinding
annotation.
Combined with the @Latitude
and @Longitude
annotation,
it can map the coordinates of any type that declares a latitude and longitude of type double
:
@GeoPointBinding
@Entity
@Indexed
@GeoPointBinding(fieldName = "placeOfBirth") (1)
public class Author {
@Id
@GeneratedValue
private Integer id;
private String name;
@Latitude (2)
private Double placeOfBirthLatitude;
@Longitude (3)
private Double placeOfBirthLongitude;
public Author() {
}
// Getters and setters
// ...
}
1 | Apply the @GeoPointBinding annotation to the type,
setting fieldName to the name of the index field. |
2 | Apply @Latitude to the property holding the latitude. It must be of double or Double type. |
3 | Apply @Longitude to the property holding the longitude. It must be of double or Double type. |
The @GeoPointBinding
annotation may also be applied to a property,
in which case the @Latitude
and @Longitude
must be applied to properties of the property’s type:
Using @GeoPointBinding on a property
@Embeddable
public class MyCoordinates { (1)
@Basic
@Latitude (2)
private Double latitude;
@Basic
@Longitude (3)
private Double longitude;
protected MyCoordinates() {
// For Hibernate ORM
}
public MyCoordinates(double latitude, double longitude) {
this.latitude = latitude;
this.longitude = longitude;
}
public double getLatitude() {
return latitude;
}
public void setLatitude(Double latitude) { (4)
this.latitude = latitude;
}
public double getLongitude() {
return longitude;
}
public void setLongitude(Double longitude) {
this.longitude = longitude;
}
}
@Entity
@Indexed
public class Author {
@Id
@GeneratedValue
private Integer id;
@FullTextField(analyzer = "name")
private String name;
@Embedded
@GeoPointBinding (5)
private MyCoordinates placeOfBirth;
public Author() {
}
// Getters and setters
// ...
}
1 | Model the geo-point as embeddable. An entity would work as well. |
2 | In the geo-point type, apply @Latitude to the property holding the latitude. |
3 | In the geo-point type, apply @Longitude to the property holding the longitude. |
4 | The geo-point type may safely declare setters (it can be mutable). |
5 | Apply the @GeoPointBinding annotation to the property.
Setting fieldName to the name of the index field is possible, but optional:
the property name will be used by default. |
It is possible to handle multiple sets of coordinates by applying the annotations multiple times
and setting the markerSet
attribute to a unique value:
@GeoPointBinding
@Entity
@Indexed
@GeoPointBinding(fieldName = "placeOfBirth", markerSet = "birth") (1)
@GeoPointBinding(fieldName = "placeOfDeath", markerSet = "death") (2)
public class Author {
@Id
@GeneratedValue
private Integer id;
@FullTextField(analyzer = "name")
private String name;
@Latitude(markerSet = "birth") (3)
private Double placeOfBirthLatitude;
@Longitude(markerSet = "birth") (4)
private Double placeOfBirthLongitude;
@Latitude(markerSet = "death") (5)
private Double placeOfDeathLatitude;
@Longitude(markerSet = "death") (6)
private Double placeOfDeathLongitude;
public Author() {
}
// Getters and setters
// ...
}
1 | Apply the @GeoPointBinding annotation to the type,
setting fieldName to the name of the index field, and markerSet to a unique value. |
2 | Apply the @GeoPointBinding annotation to the type a second time,
setting fieldName to the name of the index field (different from the first one),
and markerSet to a unique value (different from the first one). |
3 | Apply @Latitude to the property holding the latitude for the first geo-point field.
Set the markerSet attribute to the same value as the corresponding @GeoPointBinding annotation. |
4 | Apply @Longitude to the property holding the longitude for the first geo-point field.
Set the markerSet attribute to the same value as the corresponding @GeoPointBinding annotation. |
5 | Apply @Latitude to the property holding the latitude for the second geo-point field.
Set the markerSet attribute to the same value as the corresponding @GeoPointBinding annotation. |
6 | Apply @Longitude to the property holding the longitude for the second geo-point field.
Set the markerSet attribute to the same value as the corresponding @GeoPointBinding annotation. |
10.8.4. Programmatic mapping
You can map geo-point fields through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
Implementing GeoPoint and using .genericField()
TypeMappingStep authorMapping = mapping.type( Author.class );
authorMapping.indexed();
authorMapping.property( "placeOfBirth" )
.genericField();
GeoPointBinder
TypeMappingStep authorMapping = mapping.type( Author.class );
authorMapping.indexed();
authorMapping.binder( GeoPointBinder.create().fieldName( "placeOfBirth" ) );
authorMapping.property( "placeOfBirthLatitude" )
.marker( GeoPointBinder.latitude() );
authorMapping.property( "placeOfBirthLongitude" )
.marker( GeoPointBinder.longitude() );
10.9. Mapping multiple alternatives
10.9.1. Basics
In some situations, it is necessary for a particular property to be indexed differently depending on the value of another property.
For example, there may be an entity that has text properties whose content
is in a different language depending on the value of another property, say language
.
In that case, you probably want to analyze the text differently depending on the language.
While this could definitely be solved with a custom type bridge,
a convenient solution to that problem is to use the AlternativeBinder
.
This binder solves the problem this way:
-
at bootstrap, declare one index field per language, assigning a different analyzer to each field;
-
at runtime, put the content of the text property in a different field based on the language.
In order to use this binder, you will need to:
-
annotate a property with
@AlternativeDiscriminator
(e.g. the language
property); -
implement an
AlternativeBinderDelegate
that will declare the index fields (e.g. one field per language) and create anAlternativeValueBridge
. This bridge is responsible for passing the property value to the relevant field at runtime. -
apply the
AlternativeBinder
to the type hosting the properties (e.g. the type declaring thelanguage
property and the multi-language text properties). Generally you will want to create your own annotation for that.
Below is an example of how to use the binder.
Mapping to different index fields based on the language property, using AlternativeBinder
public enum Language { (1)
ENGLISH( "en" ),
FRENCH( "fr" ),
GERMAN( "de" );
public final String code;
Language(String code) {
this.code = code;
}
}
1 | A Language enum defines supported languages. |
@Entity
@Indexed
public class BlogEntry {
@Id
private Integer id;
@AlternativeDiscriminator (1)
@Enumerated(EnumType.STRING)
private Language language;
@MultiLanguageField (2)
private String text;
// Getters and setters
// ...
}
1 | Mark the language property as the discriminator which will be used to determine the language. |
2 | Map the text property to multiple fields using a custom annotation. |
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ ElementType.METHOD, ElementType.FIELD }) (2)
@PropertyMapping(processor = @PropertyMappingAnnotationProcessorRef( (3)
type = MultiLanguageField.Processor.class
))
@Documented (4)
public @interface MultiLanguageField {
String name() default ""; (5)
class Processor (6)
implements PropertyMappingAnnotationProcessor<MultiLanguageField> { (7)
@Override
public void process(PropertyMappingStep mapping, MultiLanguageField annotation,
PropertyMappingAnnotationProcessorContext context) {
LanguageAlternativeBinderDelegate delegate = new LanguageAlternativeBinderDelegate( (8)
annotation.name().isEmpty() ? null : annotation.name()
);
mapping.hostingType() (9)
.binder( AlternativeBinder.create( (10)
Language.class, (11)
context.annotatedElement().name(), (12)
String.class, (13)
BeanReference.ofInstance( delegate ) (14)
) );
}
}
}
1 | Define an annotation with RUNTIME retention.
Any other retention policy will cause the annotation to be ignored by Hibernate Search. |
2 | Allow the annotation to target either methods (getters) or fields. |
3 | Mark this annotation as a property mapping, and instruct Hibernate Search to apply the given processor whenever it finds this annotation. It is also possible to reference the processor by its CDI/Spring bean name. |
4 | Optionally, mark the annotation as documented, so that it is included in the javadoc of your entities. |
5 | Optionally, define parameters. Here we allow customizing the field name (which will default to the property name; see further down). |
6 | Here the processor class is nested in the annotation class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
7 | The processor must implement the PropertyMappingAnnotationProcessor interface,
setting its generic type argument to the type of the corresponding annotation. |
8 | In the annotation processor, instantiate a custom binder delegate (see below for the implementation). |
9 | Access the mapping of the type hosting the property (in this example, BlogEntry ). |
10 | Apply the AlternativeBinder to the type hosting the property (in this example, BlogEntry ). |
11 | Pass to AlternativeBinder the expected type of discriminator values. |
12 | Pass to AlternativeBinder the name of the property from which field values should be extracted
(in this example, text ). |
13 | Pass to AlternativeBinder the expected type of the property from which index field values are extracted. |
14 | Pass to AlternativeBinder the binder delegate. |
public class LanguageAlternativeBinderDelegate implements AlternativeBinderDelegate<Language, String> { (1)
private final String name;
public LanguageAlternativeBinderDelegate(String name) { (2)
this.name = name;
}
@Override
public AlternativeValueBridge<Language, String> bind(IndexSchemaElement indexSchemaElement, (3)
PojoModelProperty fieldValueSource) {
EnumMap<Language, IndexFieldReference<String>> fields = new EnumMap<>( Language.class );
String fieldNamePrefix = ( name != null ? name : fieldValueSource.name() ) + "_";
for ( Language language : Language.values() ) { (4)
String languageCode = language.code;
IndexFieldReference<String> field = indexSchemaElement.field(
fieldNamePrefix + languageCode, (5)
f -> f.asString().analyzer( "text_" + languageCode ) (6)
)
.toReference();
fields.put( language, field );
}
return new Bridge( fields ); (7)
}
private static class Bridge (8)
implements AlternativeValueBridge<Language, String> { (9)
private final EnumMap<Language, IndexFieldReference<String>> fields;
private Bridge(EnumMap<Language, IndexFieldReference<String>> fields) {
this.fields = fields;
}
@Override
public void write(DocumentElement target, Language discriminator, String bridgedElement) {
target.addValue( fields.get( discriminator ), bridgedElement ); (10)
}
}
}
1 | The binder delegate must implement AlternativeBinderDelegate .
The first type parameter is the expected type of discriminator values (in this example, Language );
the second type parameter is the expected type of the property from which field values are extracted
(in this example, String ). |
2 | Any (custom) parameter can be passed through the constructor. |
3 | Implement bind , to bind a property to index fields. |
4 | Define one field per language. |
5 | Make sure to give a different name to each field.
Here we’re using the language code as a suffix, i.e. text_en , text_fr , text_de , … |
6 | Assign a different analyzer to each field.
The analyzers text_en , text_fr , text_de must have been defined in the backend;
see Analysis. |
7 | Return a bridge. |
8 | Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
9 | The bridge must implement the AlternativeValueBridge interface. |
10 | The bridge is called when indexing; it selects the field to write to based on the discriminator value, then writes the value to index to that field. |
10.9.2. Programmatic mapping
You can apply AlternativeBinder
through the programmatic mapping too.
Behavior and options are identical to annotation-based mapping.
AlternativeBinder
with .binder(…)
TypeMappingStep blogEntryMapping = mapping.type( BlogEntry.class );
blogEntryMapping.indexed();
blogEntryMapping.property( "language" )
.marker( AlternativeBinder.alternativeDiscriminator() );
LanguageAlternativeBinderDelegate delegate = new LanguageAlternativeBinderDelegate( null );
blogEntryMapping.binder( AlternativeBinder.create( Language.class,
"text", String.class, BeanReference.ofInstance( delegate ) ) );
10.10. Tuning when to trigger reindexing
10.10.1. Basics
When an entity property is mapped to the index,
be it through @GenericField
, @IndexedEmbedded
,
or a custom bridge,
this mapping introduces a dependency:
the document will need to be updated when the property changes.
For simpler, single-entity mappings, this only means that Hibernate Search will need to detect when an entity changes and reindex the entity. This will be handled transparently.
If the mapping includes a "derived" property,
i.e. a property that is not persisted directly,
but instead is dynamically computed in a getter that uses other properties as input,
Hibernate Search will be unable to guess which part of the persistent state
these properties are based on.
In this case, some explicit configuration will be required;
see Reindexing when a derived value changes with @IndexingDependency
for more information.
When the mapping crosses the entity boundaries,
things get more complicated.
Let’s consider a mapping where a Book
entity is mapped to a document,
and that document must include the name
property of the Author
entity
(for example using @IndexedEmbedded
).
Whenever an author’s name changes,
Hibernate Search will need to retrieve all the books of that author,
to reindex them.
In practice, this means that whenever an entity mapping relies on an association to another entity,
this association must be bidirectional:
if Book.authors
is @IndexedEmbedded
,
Hibernate Search must be aware of an inverse association Author.books
.
An exception will be thrown on startup if the inverse association cannot be resolved.
Most of the time, when the Hibernate ORM integration is used, Hibernate Search is able to take advantage of Hibernate ORM metadata
(the mappedBy
attribute of @OneToOne
and @OneToMany
)
to resolve the inverse side of an association,
so this is all handled transparently.
In some rare cases, with more complex mappings,
it is possible that even Hibernate ORM is not aware that an association is bidirectional,
because mappedBy
cannot be used, or because the Standalone POJO Mapper is being used.
A few solutions exist:
-
The association can simply be ignored. This means the index will be out of date whenever associated entities change, but this can be an acceptable solution if the index is rebuilt periodically. See Limiting reindexing of containing entities with
@IndexingDependency
for more information. -
If the association is actually bidirectional, its inverse side can be specified to Hibernate Search explicitly using
@AssociationInverseSide
. See Enriching the entity model with@AssociationInverseSide
for more information.
10.10.2. Enriching the entity model with @AssociationInverseSide
Given an association from an entity type A
to entity type B
,
@AssociationInverseSide
defines the inverse side of an association,
i.e. the path from B
to A
.
This is mostly useful when using the Standalone POJO Mapper
or when using the Hibernate ORM integration and a bidirectional association
is not mapped as such in Hibernate ORM (no mappedBy
).
@AssociationInverseSide
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
private String title;
@ElementCollection (1)
@JoinTable(
name = "book_editionbyprice",
joinColumns = @JoinColumn(name = "book_id")
)
@MapKeyJoinColumn(name = "edition_id")
@Column(name = "price")
@OrderBy("edition_id asc")
@IndexedEmbedded( (2)
name = "editionsForSale",
extraction = @ContainerExtraction(BuiltinContainerExtractors.MAP_KEY)
)
@AssociationInverseSide( (3)
extraction = @ContainerExtraction(BuiltinContainerExtractors.MAP_KEY),
inversePath = @ObjectPath(@PropertyValue(propertyName = "book"))
)
private Map<BookEdition, BigDecimal> priceByEdition = new LinkedHashMap<>();
public Book() {
}
// Getters and setters
// ...
}
@Entity
public class BookEdition {
@Id
@GeneratedValue
private Integer id;
@ManyToOne (4)
private Book book;
@FullTextField(analyzer = "english")
private String label;
public BookEdition() {
}
// Getters and setters
// ...
}
1 | This annotation and the following ones are the Hibernate ORM mapping for a Map<BookEdition, BigDecimal>
where the keys are BookEdition entities and the values are the price of that edition. |
2 | Index-embed the editions that are actually for sale. |
3 | In Hibernate ORM, it is not possible to use mappedBy for an association modeled by a Map key.
Thus, we use @AssociationInverseSide to tell Hibernate Search what the inverse side
of this association is. |
4 | We could have applied the @AssociationInverseSide annotation here instead:
either side will do. |
10.10.3. Reindexing when a derived value changes with @IndexingDependency
When a property is not persisted directly, but instead is dynamically computed in a getter that uses other properties as input, Hibernate Search will be unable to guess which part of the persistent state these properties are based on, and thus will be unable to trigger reindexing when the relevant persistent state changes. By default, Hibernate Search will detect such cases on bootstrap and throw an exception.
Annotating the property with @IndexingDependency(derivedFrom = …)
will give Hibernate Search the information it needs and allow triggering reindexing.
@IndexingDependency.derivedFrom
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
private String title;
@ElementCollection
private List<String> authors = new ArrayList<>(); (1)
public Book() {
}
// Getters and setters
// ...
@Transient (2)
@FullTextField(analyzer = "name") (3)
@IndexingDependency(derivedFrom = @ObjectPath({ (4)
@PropertyValue(propertyName = "authors")
}))
public String getMainAuthor() {
return authors.isEmpty() ? null : authors.get( 0 );
}
}
1 | Authors are modeled as a list of string containing the author names. |
2 | The transient mainAuthor property dynamically returns the main author (the first one). |
3 | We use @FullTextField on the getMainAuthor() getter to index the name of the main author. |
4 | We use @IndexingDependency.derivedFrom to tell Hibernate Search
that whenever the list of authors changes, the result of getMainAuthor() may have changed. |
10.10.4. Limiting reindexing of containing entities with @IndexingDependency
In some cases, triggering reindexing of entities every time a given property changes is not realistically achievable:
-
When an association is massive, for example a single entity instance is indexed-embedded in thousands of other entities.
-
When a property mapped to the index is updated very frequently, leading to very frequent reindexing and unacceptable disk or database usage.
-
Etc.
When that happens, it is possible to tell Hibernate Search to ignore updates
to a particular property (and, in the case of @IndexedEmbedded
, anything beyond that property).
Several options are available to control exactly how updates to a given property affect reindexing. See the sections below for an explanation of each option.
ReindexOnUpdate.SHALLOW
: limiting reindexing to same-entity updates only
ReindexOnUpdate.SHALLOW
is most useful when an association is highly asymmetric and therefore unidirectional.
Think associations to "reference" data such as categories, types, cities, countries, …
It essentially tells Hibernate Search that changing an association — adding or removing associated elements, i.e. "shallow" updates — should trigger reindexing of the object on which the change happened, but changing properties of associated entities — "deep" updates — should not.
For example, let’s consider the (incorrect) mapping below:
@Entity
@Indexed
public class Book {
@Id
private Integer id;
private String title;
@ManyToOne (1)
@IndexedEmbedded (2)
private BookCategory category;
public Book() {
}
// Getters and setters
// ...
}
@Entity
public class BookCategory {
@Id
private Integer id;
@FullTextField(analyzer = "english")
private String name;
(3)
// Getters and setters
// ...
}
1 | Each book has an association to a BookCategory entity. |
2 | We want to index-embed the BookCategory into the Book … |
3 | … but we really don’t want to model the (huge) inverse association from BookCategory to Book :
There are potentially thousands of books for each category, so calling a getBooks() method
would lead to loading thousands of entities into the Hibernate ORM session at once,
and would perform badly.
Thus, there isn’t any getBooks() method to list all books in a category. |
With this mapping, Hibernate Search will not be able to reindex all books when the category name changes:
the getter that would list all books for that category simply doesn’t exist.
Since Hibernate Search tries to be safe by default,
it will reject this mapping and throw an exception at bootstrap,
saying it needs an inverse side to the Book
→ BookCategory
association.
However, in this case, we don’t expect the name of a BookCategory
to change.
That’s really "reference" data, which changes so rarely that we can conceivably plan ahead such change
and reindex all books whenever that happens.
So we would really not mind if Hibernate Search just ignored changes to BookCategory
…
That’s what @IndexingDependency(reindexOnUpdate = ReindexOnUpdate.SHALLOW)
is for:
it tells Hibernate Search to ignore the impact of updates to an associated entity.
See the modified mapping below:
ReindexOnUpdate.SHALLOW
@Entity
@Indexed
public class Book {
@Id
private Integer id;
private String title;
@ManyToOne
@IndexedEmbedded
@IndexingDependency(reindexOnUpdate = ReindexOnUpdate.SHALLOW) (1)
private BookCategory category;
public Book() {
}
// Getters and setters
// ...
}
1 | We use ReindexOnUpdate.SHALLOW to tell Hibernate Search that Book
should be re-indexed when it’s assigned a new category (book.setCategory( newCategory ) ),
but not when properties of its category change (category.setName( newName ) ). |
Hibernate Search will accept the mapping above and boot successfully,
since the inverse side of the association from Book
to BookCategory
is no longer deemed necessary.
Only shallow changes to a book’s category will trigger reindexing of that book:
-
When a book is assigned a new category (
book.setCategory( newCategory )
), Hibernate Search will consider it a "shallow" change, since it only affects theBook
entity. Thus, Hibernate Search will reindex the book. -
When a category itself changes (
category.setName( newName )
), Hibernate Search will consider it a "deep" change, since it occurs beyond the boundaries of theBook
entity. Thus, Hibernate Search will not reindex books of that category by itself. The index will become slightly out-of-sync, but this can be solved by reindexingBook
entities, for example every night.
ReindexOnUpdate.NO
: disabling reindexing caused by updates of a particular property
ReindexOnUpdate.NO
is most useful for properties that change very frequently
and don’t need to be up-to-date in the index.
It essentially tells Hibernate Search that changes to that property should not trigger reindexing.
For example, let’s consider the mapping below:
@Entity
@Indexed
public class Sensor {
@Id
private Integer id;
@FullTextField
private String name; (1)
@KeywordField
private SensorStatus status; (1)
@Column(name = "\"value\"")
private double value; (2)
@GenericField
private double rollingAverage; (3)
public Sensor() {
}
// Getters and setters
// ...
}
1 | The sensor name and status get updated very rarely. |
2 | The sensor value gets updated every few milliseconds. |
3 | When the sensor value gets updated, we also update the rolling average over the last few seconds (based on data not shown here). |
Updates to the name and status, which happen rarely, can perfectly well trigger reindexing. But considering there are thousands of sensors, updates to the sensor value cannot reasonably trigger reindexing: reindexing thousands of sensors every few milliseconds probably won't perform well.
In this scenario, however, search on sensor value is not considered critical, and indexes don't need to be as fresh: we can accept that indexes lag behind by a few minutes when it comes to the sensor value. We can consider setting up a batch process that runs every few seconds to reindex all sensors, either through a mass indexer, using the Jakarta Batch mass indexing job, or explicitly. So we would really not mind if Hibernate Search just ignored changes to sensor values…
That’s what @IndexingDependency(reindexOnUpdate = ReindexOnUpdate.NO)
is for:
it tells Hibernate Search to ignore the impact of updates to the rollingAverage
property.
See the modified mapping below:
ReindexOnUpdate.NO
@Entity
@Indexed
public class Sensor {
@Id
private Integer id;
@FullTextField
private String name;
@KeywordField
private SensorStatus status;
@Column(name = "\"value\"")
private double value;
@GenericField
@IndexingDependency(reindexOnUpdate = ReindexOnUpdate.NO) (1)
private double rollingAverage;
public Sensor() {
}
// Getters and setters
// ...
}
1 | We use ReindexOnUpdate.NO to tell Hibernate Search that updates to rollingAverage
should not trigger reindexing. |
With this mapping:
-
When a sensor is assigned a new name (
sensor.setName( newName )
) or status (sensor.setStatus( newStatus )
), Hibernate Search will trigger reindexing of the sensor. -
When a sensor is assigned a new rolling average (
sensor.setRollingAverage( newAverage )
), Hibernate Search will not trigger reindexing of the sensor.
10.10.5. Programmatic mapping
You can control reindexing through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
.associationInverseSide(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "priceByEdition" )
.indexedEmbedded( "editionsForSale" )
.extractor( BuiltinContainerExtractors.MAP_KEY )
.associationInverseSide( PojoModelPath.parse( "book" ) )
.extractor( BuiltinContainerExtractors.MAP_KEY );
TypeMappingStep bookEditionMapping = mapping.type( BookEdition.class );
bookEditionMapping.property( "label" )
.fullTextField().analyzer( "english" );
.indexingDependency().derivedFrom(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "mainAuthor" )
.fullTextField().analyzer( "name" )
.indexingDependency().derivedFrom( PojoModelPath.parse( "authors" ) );
.indexingDependency().reindexOnUpdate(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "category" )
.indexedEmbedded()
.indexingDependency().reindexOnUpdate( ReindexOnUpdate.SHALLOW );
TypeMappingStep bookCategoryMapping = mapping.type( BookCategory.class );
bookCategoryMapping.property( "name" )
.fullTextField().analyzer( "english" );
10.11. Changing the mapping of an existing application
Over the lifetime of an application, it will happen that the mapping of a particular indexed entity type has to change. When this happens, the mapping changes are likely to require changes to the structure of the index, i.e. its schema. Hibernate Search does not handle this structure change automatically, so manual intervention is required.
The simplest solution when the index structure needs to change is to:
-
Drop and re-create the index and its schema, either manually by deleting the filesystem directory for Lucene or using the REST API to delete the index for Elasticsearch, or using Hibernate Search’s schema management features.
-
Re-populate the index, for example using the mass indexer, as sketched below.
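With the Hibernate ORM mapper, both steps can be performed through the SearchSession; below is a minimal, hedged sketch, assuming an open EntityManager (exception handling elided):
SearchSession searchSession = Search.session( entityManager );
// Drop the outdated indexes and re-create them according to the new mapping
searchSession.schemaManager().dropAndCreate();
// Re-populate the indexes from the database
searchSession.massIndexer()
.startAndWait();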
Technically, dropping the index and reindexing is not strictly required for certain limited mapping changes, such as pure additions to the schema; however, even in those cases, some manual steps (updating the index schema, reindexing the affected entities) will still be needed. |
10.12. Custom mapping annotations
10.12.1. Basics
By default, Hibernate Search only recognizes built-in mapping annotations
such as @Indexed
, @GenericField
or @IndexedEmbedded
.
To use custom annotations in a Hibernate Search mapping, two steps are required:
-
Implementing a processor for that annotation:
TypeMappingAnnotationProcessor
for type annotations,PropertyMappingAnnotationProcessor
for method/field annotations,ConstructorMappingAnnotationProcessor
for constructor annotations, orMethodParameterMappingAnnotationProcessor
for constructor parameter annotations. -
Annotating the custom annotation with either
@TypeMapping
,@PropertyMapping
,@ConstructorMapping
, or@MethodParameterMapping
, passing as an argument the reference to the annotation processor.
Once this is done, Hibernate Search will be able to detect custom annotations in indexed classes
(though not necessarily in custom projection types, see Custom root mapping annotations).
Whenever a custom annotation is encountered,
Hibernate Search will instantiate the annotation processor
and call its process
method, passing the following as arguments:
-
A
mapping
parameter allowing to define the mapping for the type, property, constructor, or constructor parameter using the programmatic mapping API. -
An
annotation
parameter representing the annotation instance. -
A
context
object with various helpers.
Custom annotations are most frequently used to apply custom, parameterized binders or bridges. You can find examples in these sections in particular:
-
Passing parameters to a value binder/bridge through a custom annotation
-
Passing parameters to a property binder/bridge through a custom annotation
-
Passing parameters to a type binder/bridge through a custom annotation
-
Passing parameters to an identifier binder/bridge through a custom annotation
-
Passing parameters to a projection binder through a custom annotation
It is completely possible to use custom annotations for parameter-less binders or bridges, or even for more complex features such as indexed-embedded: every feature available in the programmatic mapping API can be triggered by a custom annotation. |
10.12.2. Custom root mapping annotations
To have Hibernate Search consider a custom annotation as a root mapping annotation,
add the @RootMapping
meta-annotation to the custom annotation.
This will ensure Hibernate Search processes annotations on types annotated with the custom annotation even if those types are not referenced in the index mapping, which is mainly useful for custom annotations related to projection mapping.
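Below is a minimal, hedged sketch, assuming a hypothetical @SearchProjectionType marker annotation whose processor simply marks the main constructor of annotated types as a projection constructor:
@RootMapping // Ensures annotated types are processed even when not referenced in the index mapping
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@TypeMapping(processor = @TypeMappingAnnotationProcessorRef(type = SearchProjectionType.Processor.class))
public @interface SearchProjectionType {
class Processor implements TypeMappingAnnotationProcessor<SearchProjectionType> {
@Override
public void process(TypeMappingStep mapping, SearchProjectionType annotation,
TypeMappingAnnotationProcessorContext context) {
mapping.mainConstructor().projectionConstructor(); // Mark the main constructor as a projection constructor
}
}
}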
10.13. Inspecting the mapping
After Hibernate Search has successfully booted, the SearchMapping
can be used
to get a list of indexed entities and get more direct access to the corresponding indexes,
as shown in the example below.
SearchMapping mapping = /* ... */ (1)
SearchIndexedEntity<Book> bookEntity = mapping.indexedEntity( Book.class ); (2)
String jpaName = bookEntity.jpaName(); (3)
IndexManager indexManager = bookEntity.indexManager(); (4)
Backend backend = indexManager.backend(); (5)
SearchIndexedEntity<?> bookEntity2 = mapping.indexedEntity( "Book" ); (6)
Class<?> javaClass = bookEntity2.javaClass();
for ( SearchIndexedEntity<?> entity : mapping.allIndexedEntities() ) { (7)
// ...
}
1 | Retrieve the SearchMapping . |
2 | Retrieve the SearchIndexedEntity by its entity class.
SearchIndexedEntity gives access to information pertaining to that entity and its index. |
3 | (With the Hibernate ORM integration only) Get the JPA name of that entity. |
4 | Get the index manager for that entity. |
5 | Get the backend for that index manager. |
6 | Retrieve the SearchIndexedEntity by its entity name. |
7 | Retrieve all indexed entities. |
From an IndexManager
, you can then access the index metamodel,
to inspect available fields and their main characteristics,
as shown below.
SearchIndexedEntity<Book> bookEntity = mapping.indexedEntity( Book.class ); (1)
IndexManager indexManager = bookEntity.indexManager(); (2)
IndexDescriptor indexDescriptor = indexManager.descriptor(); (3)
indexDescriptor.field( "releaseDate" ).ifPresent( field -> { (4)
String path = field.absolutePath(); (5)
String relativeName = field.relativeName();
// Etc.
if ( field.isValueField() ) { (6)
IndexValueFieldDescriptor valueField = field.toValueField(); (7)
IndexValueFieldTypeDescriptor type = valueField.type(); (8)
boolean projectable = type.projectable();
Class<?> dslArgumentClass = type.dslArgumentClass();
Class<?> projectedValueClass = type.projectedValueClass();
Optional<String> analyzerName = type.analyzerName();
Optional<String> searchAnalyzerName = type.searchAnalyzerName();
Optional<String> normalizerName = type.normalizerName();
// Etc.
Set<String> traits = type.traits(); (9)
if ( traits.contains( IndexFieldTraits.Aggregations.RANGE ) ) {
// ...
}
}
else if ( field.isObjectField() ) { (10)
IndexObjectFieldDescriptor objectField = field.toObjectField();
IndexObjectFieldTypeDescriptor type = objectField.type();
boolean nested = type.nested();
// Etc.
}
} );
Collection<? extends AnalyzerDescriptor> analyzerDescriptors = indexDescriptor.analyzers(); (11)
for ( AnalyzerDescriptor analyzerDescriptor : analyzerDescriptors ) {
String analyzerName = analyzerDescriptor.name();
// ...
}
Optional<? extends AnalyzerDescriptor> analyzerDescriptor = indexDescriptor.analyzer( "some-analyzer-name" ); (12)
// ...
Collection<? extends NormalizerDescriptor> normalizerDescriptors = indexDescriptor.normalizers(); (13)
for ( NormalizerDescriptor normalizerDescriptor : normalizerDescriptors ) {
String normalizerName = normalizerDescriptor.name();
// ...
}
Optional<? extends NormalizerDescriptor> normalizerDescriptor = indexDescriptor.normalizer( "some-normalizer-name" ); (14)
// ...
1 | Retrieve a SearchIndexedEntity . |
2 | Get the index manager for that entity.
IndexManager gives access to information pertaining to the index.
This includes the metamodel, but not only (see below). |
3 | Get the descriptor for that index. The descriptor exposes the index metamodel. |
4 | Retrieve a field by name. The method returns an Optional , which is empty if the field does not exist. |
5 | The field descriptor exposes information about the field structure: path, name, parent, … |
6 | Check that the field is a value field, holding a value (integer, text, …), as opposed to object fields, holding other fields. |
7 | Narrow down the field descriptor to a value field descriptor. |
8 | Get the descriptor for the field type. The type descriptor exposes information about the field’s capabilities: is it searchable, sortable, projectable, what is the expected java class for arguments to the Search DSL, what are the analyzers/normalizer set on this field, … |
9 | Inspect the "traits" of a field type: each trait represents a predicate/sort/projection/aggregation that can be used on fields of that type. |
10 | Object fields can also be inspected. |
11 | A collection of all configured analyzers available for the index represented by the descriptor can also be inspected. |
12 | Alternatively, analyzer descriptors can be retrieved by name to see if a particular analyzer is available within the index context. |
13 | A collection of all configured normalizers available for the index represented by the descriptor can also be inspected. |
14 | Alternatively, normalizer descriptors can be retrieved by name to see if a particular normalizer is available within the index context. |
The SearchMapping
also exposes methods to retrieve an IndexManager
by name,
or even a whole Backend
by name.
11. Mapping index content to custom types (projection constructors)
11.1. Basics
Projections allow retrieving data directly from matched documents as the result of a search query. As the structure of documents and projections becomes more complex, so do programmatic calls to the Projection DSL, which can lead to overwhelming projection definitions.
To address this, Hibernate Search offers the ability to define projections through the mapping of custom types
(typically records), by applying the @ProjectionConstructor
annotation to those types or their constructor.
Executing such a projection then becomes as easy as referencing the custom type.
Such projections are composite, their inner projections (components) being inferred from the name and type of the projection constructors' parameters.
There are a few constraints to keep in mind when annotating a custom projection type; they are detailed in the sections below. |
@ProjectionConstructor (1)
public record MyBookProjection(
@IdProjection Integer id, (2)
String title, (3)
List<Author> authors) { (4)
@ProjectionConstructor (5)
public record Author(String firstName, String lastName) {
}
}
1 | Annotate the record type with @ProjectionConstructor ,
either at the type level (if there’s only one constructor)
or at the constructor level (if there are multiple constructors). |
2 | To project on the entity identifier, annotate the relevant constructor parameter with @IdProjection .
Most projections have a corresponding annotation that can be used on constructor parameters. |
3 | To project on a value field, add a constructor parameter named after that field and with the same type as that field.
See Implicit inner projection inference for more information on how constructor parameters should be defined.
Alternatively, the field projection can be configured explicitly with |
4 | To project on an object field, add a constructor parameter named after that field and with its own custom projection type.
Multivalued projections must be modeled as a List<…> or supertype.
Alternatively, the object projection can be configured explicitly with |
5 | Annotate any custom projection type used for object fields with @ProjectionConstructor as well. |
List<MyBookProjection> hits = searchSession.search( Book.class )
.select( MyBookProjection.class ) (1)
.where( f -> f.matchAll() )
.fetchHits( 20 ); (2)
1 | Pass the custom projection type to .select(…) . |
2 | Each hit will be an instance of the custom projection type, populated with data retrieved from the index. |
Custom, non-record classes can also be annotated with @ProjectionConstructor; this can be useful for applications that cannot use records. |
The example above executes a projection equivalent to the following code:
List<MyBookProjection> hits = searchSession.search( Book.class )
.select( f -> f.composite()
.from(
f.id( Integer.class ),
f.field( "title", String.class ),
f.object( "authors" )
.from(
f.field( "authors.firstName", String.class ),
f.field( "authors.lastName", String.class )
)
.as( MyBookProjection.Author::new )
.multi()
)
.as( MyBookProjection::new ) )
.where( f -> f.matchAll() )
.fetchHits( 20 );
11.2. Detection of mapped projection types
Hibernate Search must know of projection types on startup,
which it generally does as soon as they are annotated with @ProjectionConstructor
,
thanks to classpath scanning.
For more information about classpath scanning and how to tune it (for example to scan dependencies instead of just the application JAR), see Classpath scanning.
11.3. Implicit inner projection inference
11.3.1. Basics
When constructor parameters are not annotated with explicit projection annotations, Hibernate Search applies some basic inference rules based on the name and type of those parameters in order to select (inner) projections.
The following sections explain how to define the name and type of constructor parameters to get the desired projection.
11.3.2. Inner projection and type
When a constructor parameter is not annotated with an explicit projection annotation, Hibernate Search infers the type of the inner projection from the type of the corresponding constructor parameter.
You should set the type of a constructor parameter according to the following rules:
-
For a single-valued projection:
-
For a projection on a value field (generally mapped using
@FullTextField
/@GenericField
/etc.), set the parameter type to the type of projected values for the target field, which in general is the type of the property annotated with@FullTextField
/@GenericField
/etc. -
For a projection on an object field (generally mapped using
@IndexedEmbedded
), set the parameter type to another custom type annotated with@ProjectionConstructor
, whose constructor will define which fields to extract from that object field.
-
-
For a multivalued projection, follow the rules above then wrap the type with
Iterable
,Collection
orList
, e.g.Iterable<SomeType>
,Collection<SomeType>
orList<SomeType>
.
Constructor parameters meant to represent a multivalued projection can only have the type Iterable, Collection or List. Other container types such as Map or Optional are not supported. |
11.3.3. Inner projection and field path
When a constructor parameter is not annotated with an explicit projection annotation or when it is but that annotation does not provide an explicit path, Hibernate Search infers the path of the field to project on from the name of the corresponding constructor parameter.
In that case, you should set the name of a constructor parameter (in the Java code) to the name of the field to project on.
Hibernate Search can only retrieve the name of a constructor parameter: for the canonical constructor of record types, regardless of compiler flags; for constructors of non-record classes or non-canonical constructors of records, only if the type was compiled with the -parameters compiler flag. |
11.4. Explicit inner projection
Constructor parameters can be annotated with explicit projection annotations such as @IdProjection
or @FieldProjection
.
For projections that would normally be inferred automatically, this allows further customization, for example in a field projection to set the target field path explicitly or to disable value conversion. Alternatively, in an object projection, this also allows breaking cycles of nested object projections.
For other projections such as identifier projection, this is actually the only way to use them in a projection constructor, because they would never be inferred automatically.
See the documentation of each projection for more information about the corresponding built-in annotation to be applied to projection constructor parameters.
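For example, below is a minimal, hedged sketch where the field path is set explicitly because the constructor parameter name does not match the index field name (MyBookSummaryProjection is hypothetical):
@ProjectionConstructor
public record MyBookSummaryProjection(
@IdProjection Integer id, // Identifier projection: never inferred, must be explicit
@FieldProjection(path = "title") String name) { // Explicit path, since the parameter name differs from the field name
}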
11.5. Mapping types with multiple constructors
If the projection type (record or class) has multiple constructors,
the @ProjectionConstructor
annotation cannot be applied at the type level
and must be applied to the constructor you wish to use for projections.
@ProjectionConstructor
public class MyAuthorProjectionClassMultiConstructor {
public final String firstName;
public final String lastName;
@ProjectionConstructor (1)
public MyAuthorProjectionClassMultiConstructor(String firstName, String lastName) {
this.firstName = firstName;
this.lastName = lastName;
}
public MyAuthorProjectionClassMultiConstructor(String fullName) { (2)
this( fullName.split( " " )[0], fullName.split( " " )[1] );
}
// ... Equals and hashcode ...
}
1 | Annotate the constructor to use for projections with @ProjectionConstructor . |
2 | Other constructors can be used for other purposes than projections,
but they must not be annotated with @ProjectionConstructor (only one such constructor is allowed). |
In the case of records, the (implicit) canonical constructor can also be annotated, but it requires representing that constructor in the code with a specific syntax:
@ProjectionConstructor
public record MyAuthorProjectionRecordMultiConstructor(String firstName, String lastName) {
@ProjectionConstructor (1)
public MyAuthorProjectionRecordMultiConstructor { (2)
}
public MyAuthorProjectionRecordMultiConstructor(String fullName) { (3)
this( fullName.split( " " )[0], fullName.split( " " )[1] );
}
}
1 | Annotate the constructor to use for projections with @ProjectionConstructor . |
2 | The (implicit) canonical constructor uses a specific syntax, without parentheses or parameters. |
3 | Other constructors can be used for other purposes than projections,
but they must not be annotated with @ProjectionConstructor (only one such constructor is allowed). |
11.6. Programmatic mapping
You can map projection constructors through the programmatic mapping too. Behavior and options are identical to annotation-based mapping.
.projectionConstructor()
and .projection(<binder>)
TypeMappingStep myBookProjectionMapping = mapping.type( MyBookProjection.class );
myBookProjectionMapping.mainConstructor()
.projectionConstructor(); (1)
myBookProjectionMapping.mainConstructor().parameter( 0 )
.projection( IdProjectionBinder.create() ); (2)
TypeMappingStep myAuthorProjectionMapping = mapping.type( MyBookProjection.Author.class );
myAuthorProjectionMapping.mainConstructor()
.projectionConstructor();
1 | Mark the constructor as a projection constructor. |
2 | The equivalent to explicit projection annotations is to pass projection binder instances: there is a built-in projection binder for every built-in projection annotation. |
If the projection type (record or class) has multiple constructors,
you will need to use .constructor(…)
instead of .mainConstructor()
,
passing the (raw) type of the constructor parameters as arguments.
.projectionConstructor()
mapping.type( MyAuthorProjectionClassMultiConstructor.class )
.constructor( String.class, String.class )
.projectionConstructor();
12. Binding and bridges
12.1. Basics
In Hibernate Search, binding is the process of assigning custom components to the domain model.
The most intuitive components that can be bound are bridges, responsible for converting pieces of data from the entity model to the document model.
For example, when @GenericField
is applied to a property of a custom enum type,
a built-in bridge will be used to convert this enum to a string when indexing,
and to convert the string back to an enum when projecting.
Similarly, when an entity identifier of type Long
is mapped to a document identifier,
a built-in bridge will be used to convert the Long
to a String
(since all document identifiers are strings)
when indexing,
and back from a String
to a Long
when loading search results.
Bridges are not limited to one-to-one mapping:
for example, the @GeoPointBinding
annotation,
which maps two properties annotated with @Latitude
and @Longitude
to a single field, is backed by another built-in bridge.
While built-in bridges are provided for a wide range of standard types, they may not be enough for complex models. This is why bridges are really useful: it is possible to implement custom bridges and to refer to them in the Hibernate Search mapping. Using custom bridges, custom types can be mapped, even complex types that require user code to execute at indexing time.
There are multiple types of bridges, detailed in the next sections. If you need to implement a custom bridge, but don’t quite know which type of bridge you need, the following table may help:
Bridge type | ValueBridge | PropertyBridge | TypeBridge | IdentifierBridge | RoutingBridge
---|---|---|---|---|---
Applied to… | Class field or getter | Class field or getter | Class | Class field or getter (usually the entity ID) | Class
Maps to… | One index field. Value field only: integer, text, geopoint, etc. No object field (composite). | One index field or more. Value fields as well as object fields (composite). | One index field or more. Value fields as well as object fields (composite). | Document identifier | Route (conditional indexing, routing key)
Built-in annotation(s) | @GenericField, @FullTextField, @KeywordField, … | @PropertyBinding | @TypeBinding | @DocumentId | @Indexed(routingBinder = …)
Supports container extractors | Yes | No | No | No | No
Supports mutable types | No | Yes | Yes | No | Yes
Not all binders are about indexing, however. The constructor parameters involved in projection constructors can be bound as well; you will find more information about that in this section.
12.2. Value bridge
12.2.1. Basics
A value bridge is a pluggable component that implements
the mapping of a property to an index field.
It is applied to a property with a @*Field
annotation
(@GenericField
, @FullTextField
, …)
or with a custom annotation.
A value bridge is relatively straightforward to implement:
in its simplest form,
it boils down to converting a value from the property type
to the index field type.
Thanks to the integration with the @*Field
annotations,
several features come for free:
-
The type of the index field can be customized directly in the
@*Field
annotation: it can be defined as sortable, projectable, it can be assigned an analyzer, … -
The bridge can be transparently applied to elements of a container. For example, you can implement a
ValueBridge<ISBN, String>
and transparently use it on a property of type List<ISBN>: the bridge will simply be applied once per list element and populate the index field with as many values, as sketched right after this list.
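For instance, continuing the List<ISBN> example from the second bullet, the following hedged sketch shows such a property inside an indexed entity, reusing the ISBNValueBridge implemented in the example below:
@ElementCollection
@KeywordField(valueBridge = @ValueBridgeRef(type = ISBNValueBridge.class)) // The bridge is applied once per list element
private List<ISBN> isbns = new ArrayList<>();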
However, due to these features, several limitations are imposed on a value bridge that do not apply to, for example, a property bridge:
-
A value bridge only allows one-to-one mapping: one property to one index field. A single value bridge cannot populate more than one index field.
-
A value bridge will not work correctly when applied to a mutable type. A value bridge is expected to be applied to "atomic" data, such as a
LocalDate
; if it is applied to an entity, for example, extracting data from its properties, Hibernate Search will not be aware of which properties are used and will not be able to detect that reindexing is required when these properties change.
Below is an example of a custom value bridge that converts
a custom ISBN
type to its string representation to index it:
ValueBridge
public class ISBNValueBridge implements ValueBridge<ISBN, String> { (1)
@Override
public String toIndexedValue(ISBN value, ValueBridgeToIndexedValueContext context) { (2)
return value == null ? null : value.getStringValue();
}
}
1 | The bridge must implement the ValueBridge interface.
Two generic type arguments must be provided:
the first one is the type of property values (values in the entity model),
and the second one is the type of index fields (values in the document model). |
2 | The toIndexedValue method is the only one that must be implemented: all other methods are optional.
It takes the property value and a context object as parameters,
and is expected to return the corresponding index field value.
It is called when indexing,
but also when parameters to the search DSL must be transformed. |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@Convert(converter = ISBNAttributeConverter.class) (1)
@KeywordField( (2)
valueBridge = @ValueBridgeRef(type = ISBNValueBridge.class), (3)
normalizer = "isbn" (4)
)
private ISBN isbn;
// Getters and setters
// ...
}
1 | This is unrelated to the value bridge, but necessary in order for Hibernate ORM to store the data correctly in the database. |
2 | Map the property to an index field. |
3 | Instruct Hibernate Search to use our custom value bridge. It is also possible to reference the bridge by its name, in the case of a CDI/Spring bean. |
4 | Customize the field as usual. |
Here is an example of what an indexed document would look like, with the Elasticsearch backend:
{
"isbn": "978-0-58-600835-5"
}
The example above is just a minimal implementation. A custom value bridge can do more; see the next sections for more information.
12.2.2. Type resolution
By default, the value bridge’s property type and index field type are determined automatically,
using reflection to extract the generic type arguments of the ValueBridge
interface:
the first argument is the property type while the second argument is the index field type.
For example, in public class MyBridge implements ValueBridge<ISBN, String>
,
the property type is resolved to ISBN
and the index field type is resolved to String
:
the bridge will be applied to properties of type ISBN
and will populate an index field of type String
.
The fact that types are resolved automatically using reflection brings a few limitations.
In particular, it means the generic type arguments cannot be just anything;
as a general rule, you should stick to literal types (MyBridge implements ValueBridge<ISBN, String>
)
and avoid generic type parameters and wildcards (MyBridge<T> implements ValueBridge<List<T>, T>
).
If you need more complex types,
you can bypass the automatic resolution and specify types explicitly
using a ValueBinder
.
12.2.3. Using value bridges in other @*Field
annotations
In order to use a custom value bridge with specialized annotations such as @FullTextField
,
the bridge must declare a compatible index field type.
For example:
-
@FullTextField
and@KeywordField
require an index field type of typeString
(ValueBridge<Whatever, String>
); -
@ScaledNumberField
requires an index field type of typeBigDecimal
(ValueBridge<Whatever, BigDecimal>
) orBigInteger
(ValueBridge<Whatever, BigInteger>
).
Refer to Available field annotations for the specific constraints of each annotation.
Attempts to use a bridge that declares an incompatible type will trigger exceptions at bootstrap.
12.2.4. Supporting projections with fromIndexedValue()
By default, any attempt to project on a field using a custom bridge will result in an exception, because Hibernate Search doesn’t know how to convert the projected values obtained from the index back to the property type.
It is possible to disable conversion explicitly to get the raw value from the index,
but another way of solving the problem is to simply implement fromIndexedValue
in the custom bridge.
This method will be called whenever a projected value needs to be converted.
Implementing fromIndexedValue to convert projected values
public class ISBNValueBridge implements ValueBridge<ISBN, String> {
@Override
public String toIndexedValue(ISBN value, ValueBridgeToIndexedValueContext context) {
return value == null ? null : value.getStringValue();
}
@Override
public ISBN fromIndexedValue(String value, ValueBridgeFromIndexedValueContext context) {
return value == null ? null : ISBN.parse( value ); (1)
}
}
1 | Implement fromIndexedValue as necessary. |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@Convert(converter = ISBNAttributeConverter.class) (1)
@KeywordField( (2)
valueBridge = @ValueBridgeRef(type = ISBNValueBridge.class), (3)
normalizer = "isbn",
projectable = Projectable.YES (4)
)
private ISBN isbn;
// Getters and setters
// ...
}
1 | This is unrelated to the value bridge, but necessary in order for Hibernate ORM to store the data correctly in the database. |
2 | Map the property to an index field. |
3 | Instruct Hibernate Search to use our custom value bridge. |
4 | Do not forget to configure the field as projectable. |
12.2.5. Parsing the string representation to an index field type with parse()
By default, when a custom bridge is used, some Hibernate Search features will not work out of the box: specifying the indexNullAs attribute of @*Field annotations, using a field with such a custom bridge in query string predicates (simpleQueryString()/queryString()) with local backends (e.g. Lucene), or using ValueModel.STRING in the Search DSL.
In order to make these features work, the bridge needs to implement the parse method so that Hibernate Search can convert the string representation to a value of the correct type for the index field.
parse
public class ISBNValueBridge implements ValueBridge<ISBN, String> {
@Override
public String toIndexedValue(ISBN value, ValueBridgeToIndexedValueContext context) {
return value == null ? null : value.getStringValue();
}
@Override
public String parse(String value) {
// Just check the string format and return the string
return ISBN.parse( value ).getStringValue(); (1)
}
}
1 | Implement parse as necessary.
The bridge may throw exceptions for invalid strings. |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@Convert(converter = ISBNAttributeConverter.class) (1)
@KeywordField( (2)
valueBridge = @ValueBridgeRef(type = ISBNValueBridge.class), (3)
normalizer = "isbn",
indexNullAs = "000-0-00-000000-0" (4)
)
private ISBN isbn;
// Getters and setters
// ...
}
1 | This is unrelated to the value bridge, but necessary in order for Hibernate ORM to store the data correctly in the database. |
2 | Map the property to an index field. |
3 | Instruct Hibernate Search to use our custom value bridge. |
4 | Set indexNullAs to a valid value. |
List<Book> result = searchSession.search( Book.class )
.where( f -> f.queryString().field( "isbn" )
.matching( "978-0-13-468599-1" ) ) (1)
.fetchHits( 20 );
1 | Use a string representation of an ISBN in a query string predicate. |
12.2.6. Formatting the value as string with format()
By default, when a custom bridge is used, requesting a ValueModel.STRING
for a field projection
will use a simple toString()
call.
In order to customize the format, the bridge needs to implement the format
method
so that Hibernate Search can convert the index field to the desired string representation.
format
public class ISBNValueBridge implements ValueBridge<ISBN, Long> {
// Implement the mandatory toIndexedValue method ...
// ...
@Override
public String format(Long value) { (1)
return value == null
? null
: value.toString()
.replaceAll( "(\\d{3})(\\d)(\\d{2})(\\d{6})(\\d)", "$1-$2-$3-$4-$5" );
}
}
1 | Implement format as necessary.
The bridge may throw exceptions for invalid values. |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@Convert(converter = ISBNAttributeConverter.class) (1)
@GenericField( (2)
valueBridge = @ValueBridgeRef(type = ISBNValueBridge.class), (3)
projectable = Projectable.YES (4)
)
private ISBN isbn;
// Getters and setters
// ...
}
1 | This is unrelated to the value bridge, but necessary in order for Hibernate ORM to store the data correctly in the database. |
2 | Map the property to an index field. |
3 | Instruct Hibernate Search to use our custom value bridge. |
4 | Configure the field as projectable. |
List<String> result = searchSession.search( Book.class )
.select( f -> f.field( "isbn", String.class, ValueModel.STRING ) ) (1)
.where( f -> f.matchAll() )
.fetchHits( 20 );
1 | Use a string representation when requesting the field projection. |
12.2.7. Compatibility across indexes with isCompatibleWith()
A value bridge is involved in indexing, but also in the various search DSLs, to convert values passed to the DSL to an index field value that the backend will understand.
When creating a predicate targeting a single field across multiple indexes, Hibernate Search will have multiple bridges to choose from: one per index. Since only one predicate with a single value can be created, Hibernate Search needs to pick a single bridge. By default, when a custom bridge is assigned to the field, Hibernate Search will throw an exception because it cannot decide which bridge to pick.
If the bridges assigned to the field in all indexes produce the same result,
it is possible to indicate to Hibernate Search that any bridge will do
by implementing isCompatibleWith.
This method accepts another bridge as a parameter,
and returns true if that bridge can be expected to always behave the same as this one.
Implementing isCompatibleWith to support multi-index search
public class ISBNValueBridge implements ValueBridge<ISBN, String> {
@Override
public String toIndexedValue(ISBN value, ValueBridgeToIndexedValueContext context) {
return value == null ? null : value.getStringValue();
}
@Override
public boolean isCompatibleWith(ValueBridge<?, ?> other) { (1)
return getClass().equals( other.getClass() );
}
}
1 | Implement isCompatibleWith as necessary.
Here we just deem any instance of the same class to be compatible. |
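For illustration, here is the kind of multi-index query this enables; a minimal sketch, assuming a second indexed entity type Magazine (hypothetical) whose isbn field uses the same bridge:
// Hypothetical multi-index search: Book and a second entity type Magazine
// both map "isbn" with compatible ISBNValueBridge instances,
// so a single predicate can target both indexes.
List<Object> hits = searchSession.search( Arrays.asList( Book.class, Magazine.class ) )
        .where( f -> f.match().field( "isbn" )
                .matching( ISBN.parse( "978-0-13-468599-1" ) ) )
        .fetchHits( 20 );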
12.2.8. Configuring the bridge more finely with ValueBinder
To configure a bridge more finely, it is possible to implement a value binder that will be executed at bootstrap. In particular, this binder will be able to define a custom index field type.
Implementing a ValueBinder
public class ISBNValueBinder implements ValueBinder { (1)
@Override
public void bind(ValueBindingContext<?> context) { (2)
context.bridge( (3)
ISBN.class, (4)
new ISBNValueBridge(), (5)
context.typeFactory() (6)
.asString() (7)
.normalizer( "isbn" ) (8)
);
}
private static class ISBNValueBridge implements ValueBridge<ISBN, String> {
@Override
public String toIndexedValue(ISBN value, ValueBridgeToIndexedValueContext context) {
return value == null ? null : value.getStringValue(); (9)
}
}
}
1 | The binder must implement the ValueBinder interface. |
2 | Implement the bind method. |
3 | Call context.bridge(…) to define the value bridge to use. |
4 | Pass the expected type of property values. |
5 | Pass the value bridge instance. |
6 | Use the context’s type factory to create an index field type. |
7 | Pick a base type for the index field using an as*() method. |
8 | Configure the type as necessary.
This configuration will set defaults that are applied for any type using this bridge,
but they can be overridden.
Type configuration is similar to the attributes found in the various @*Field annotations.
See Defining index field types for more information. |
9 | The value bridge must still be implemented.
Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@Convert(converter = ISBNAttributeConverter.class) (1)
@KeywordField( (2)
valueBinder = @ValueBinderRef(type = ISBNValueBinder.class), (3)
sortable = Sortable.YES (4)
)
private ISBN isbn;
// Getters and setters
// ...
}
1 | This is unrelated to the value bridge, but necessary in order for Hibernate ORM to store the data correctly in the database. |
2 | Map the property to an index field. |
3 | Instruct Hibernate Search to use our custom value binder.
Note the use of valueBinder instead of valueBridge .
It is also possible to reference the binder by its name, in the case of a CDI/Spring bean. |
4 | Customize the field as usual. Configuration set using annotation attributes takes precedence over the index field type configuration set by the value binder. For example, in this case, the field will be sortable even if the binder didn’t define the field as sortable. |
When using a value binder with a specialized @*Field annotation (@FullTextField, @KeywordField, …), the index field type defined by the binder must be compatible with that annotation. For example, @FullTextField will only accept a field type created with asString(). These restrictions are similar to those when
assigning a value bridge directly;
see Using value bridges in other @*Field annotations. |
12.2.9. Passing parameters
Value bridges are usually applied with the built-in @*Field annotations,
which already accept parameters to configure the field name,
whether the field is sortable, etc.
However, these parameters are not passed to the value bridge or value binder. There are two ways to pass parameters to value bridges:
-
One is (mostly) limited to string parameters, but is trivial to implement.
-
The other can allow any type of parameters, but requires you to declare your own annotations.
Simple, string parameters
You can pass string parameters to the @ValueBinderRef
annotation and then use them later in the binder:
Passing parameters to a ValueBridge using the @ValueBinderRef annotation
public class BooleanAsStringBridge implements ValueBridge<Boolean, String> { (1)
private final String trueAsString;
private final String falseAsString;
public BooleanAsStringBridge(String trueAsString, String falseAsString) { (2)
this.trueAsString = trueAsString;
this.falseAsString = falseAsString;
}
@Override
public String toIndexedValue(Boolean value, ValueBridgeToIndexedValueContext context) {
if ( value == null ) {
return null;
}
return value ? trueAsString : falseAsString;
}
}
1 | Implement a bridge that does not index booleans directly, but indexes them as strings instead. |
2 | The bridge accepts two parameters in its constructor:
the string representing true and the string representing false . |
public class BooleanAsStringBinder implements ValueBinder {
@Override
public void bind(ValueBindingContext<?> context) {
String trueAsString = context.params().get( "trueAsString", String.class ); (1)
String falseAsString = context.params().get( "falseAsString", String.class );
context.bridge( Boolean.class, (2)
new BooleanAsStringBridge( trueAsString, falseAsString ) );
}
}
1 | Use the binding context to get the parameter values. |
2 | Pass them as arguments to the bridge constructor. |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
private String title;
@GenericField(valueBinder = @ValueBinderRef(type = BooleanAsStringBinder.class, (1)
params = {
@Param(name = "trueAsString", value = "yes"),
@Param(name = "falseAsString", value = "no")
}))
private boolean published;
@ElementCollection
@GenericField(valueBinder = @ValueBinderRef(type = BooleanAsStringBinder.class, (2)
params = {
@Param(name = "trueAsString", value = "passed"),
@Param(name = "falseAsString", value = "failed")
}), name = "censorshipAssessments_allYears")
private Map<Year, Boolean> censorshipAssessments = new HashMap<>();
// Getters and setters
// ...
}
1 | Define the binder to use on the property,
setting the trueAsString and falseAsString parameters. |
2 | Because we use a value bridge, the annotation can be transparently applied to containers. Here, the bridge will be applied successively to each value in the map. |
Parameters with custom annotations
You can pass parameters of any type to the bridge by defining a custom annotation with attributes:
Passing parameters to a ValueBridge using a custom annotation
public class BooleanAsStringBridge implements ValueBridge<Boolean, String> { (1)
private final String trueAsString;
private final String falseAsString;
public BooleanAsStringBridge(String trueAsString, String falseAsString) { (2)
this.trueAsString = trueAsString;
this.falseAsString = falseAsString;
}
@Override
public String toIndexedValue(Boolean value, ValueBridgeToIndexedValueContext context) {
if ( value == null ) {
return null;
}
return value ? trueAsString : falseAsString;
}
}
1 | Implement a bridge that does not index booleans directly, but indexes them as strings instead. |
2 | The bridge accepts two parameters in its constructor:
the string representing true and the string representing false . |
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ ElementType.METHOD, ElementType.FIELD }) (2)
@PropertyMapping(processor = @PropertyMappingAnnotationProcessorRef( (3)
type = BooleanAsStringField.Processor.class
))
@Documented (4)
@Repeatable(BooleanAsStringField.List.class) (5)
public @interface BooleanAsStringField {
String trueAsString() default "true"; (6)
String falseAsString() default "false";
String name() default ""; (7)
ContainerExtraction extraction() default @ContainerExtraction(); (7)
@Documented
@Target({ ElementType.METHOD, ElementType.FIELD })
@Retention(RetentionPolicy.RUNTIME)
@interface List {
BooleanAsStringField[] value();
}
class Processor (8)
implements PropertyMappingAnnotationProcessor<BooleanAsStringField> { (9)
@Override
public void process(PropertyMappingStep mapping, BooleanAsStringField annotation,
PropertyMappingAnnotationProcessorContext context) {
BooleanAsStringBridge bridge = new BooleanAsStringBridge( (10)
annotation.trueAsString(), annotation.falseAsString()
);
mapping.genericField( (11)
annotation.name().isEmpty() ? null : annotation.name()
)
.valueBridge( bridge ) (12)
.extractors( (13)
context.toContainerExtractorPath( annotation.extraction() )
);
}
}
}
1 | Define an annotation with RUNTIME retention.
Any other retention policy will cause the annotation to be ignored by Hibernate Search. |
2 | Since we’re defining a value bridge, allow the annotation to target either methods (getters) or fields. |
3 | Mark this annotation as a property mapping, and instruct Hibernate Search to apply the given processor whenever it finds this annotation. It is also possible to reference the processor by its CDI/Spring bean name. |
4 | Optionally, mark the annotation as documented, so that it is included in the javadoc of your entities. |
5 | Optionally, mark the annotation as repeatable, in order to be able to declare multiple fields on the same property. |
6 | Define custom attributes to configure the value bridge.
Here we define two strings that the bridge should use to represent the boolean values true and false . |
7 | Since we will be using a custom annotation,
and not the built-in @*Field annotation,
the standard parameters that make sense for this bridge need to be declared here, too. |
8 | Here the processor class is nested in the annotation class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
9 | The processor must implement the PropertyMappingAnnotationProcessor interface,
setting its generic type argument to the type of the corresponding annotation. |
10 | In the process method, instantiate the bridge
and pass the annotation attributes as constructor arguments. |
11 | Declare the field with the configured name (if provided). |
12 | Assign our bridge to the field.
Alternatively, we could assign a value binder instead,
using the valueBinder() method. |
13 | Configure the remaining standard parameters.
Note that the context object passed to the process method
exposes utility methods to convert standard Hibernate Search annotations
to something that can be passed to the mapping
(here, @ContainerExtraction is converted to a container extractor path). |
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
private String title;
@BooleanAsStringField(trueAsString = "yes", falseAsString = "no") (1)
private boolean published;
@ElementCollection
@BooleanAsStringField( (2)
name = "censorshipAssessments_allYears",
trueAsString = "passed", falseAsString = "failed"
)
private Map<Year, Boolean> censorshipAssessments = new HashMap<>();
// Getters and setters
// ...
}
1 | Apply the bridge using its custom annotation, setting the parameters. |
2 | Because we use a value bridge, the annotation can be transparently applied to containers. Here, the bridge will be applied successively to each value in the map. |
12.2.10. Accessing the ORM session or session factory from the bridge
This feature is only available with the Hibernate ORM integration. It cannot be used with the Standalone POJO Mapper in particular. |
Contexts passed to the bridge methods can be used to retrieve the Hibernate ORM session or session factory.
Accessing the ORM session or session factory from a ValueBridge
public class MyDataValueBridge implements ValueBridge<MyData, String> {
@Override
public String toIndexedValue(MyData value, ValueBridgeToIndexedValueContext context) {
SessionFactory sessionFactory = context.extension( HibernateOrmExtension.get() ) (1)
.sessionFactory(); (2)
// ... do something with the factory ...
}
@Override
public MyData fromIndexedValue(String value, ValueBridgeFromIndexedValueContext context) {
Session session = context.extension( HibernateOrmExtension.get() ) (3)
.session(); (4)
// ... do something with the session ...
}
}
1 | Apply an extension to the context to access content specific to Hibernate ORM. |
2 | Retrieve the SessionFactory from the extended context.
The Session is not available here. |
3 | Apply an extension to the context to access content specific to Hibernate ORM. |
4 | Retrieve the Session from the extended context. |
12.2.11. Injecting beans into the value bridge or value binder
With compatible frameworks,
Hibernate Search supports injecting beans into both the ValueBridge and the ValueBinder.
This only applies to beans instantiated
through Hibernate Search’s bean resolution.
As a rule of thumb, if you need to call new MyBridge() explicitly at some point,
the bridge won’t get auto-magically injected. |
The context passed to the value binder’s bind
method
also exposes a beanResolver()
method to access the bean resolver and instantiate beans explicitly.
See Bean injection for more details.
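For example, with CDI, injection into a binder could look like the following; a minimal sketch, assuming a hypothetical MyNormalizerConfig bean and a binder referenced by type so that it is instantiated through bean resolution:
@ApplicationScoped // assumption: the binder itself is managed as a CDI bean
public class InjectedIsbnValueBinder implements ValueBinder {
    @Inject
    MyNormalizerConfig config; // hypothetical injected bean
    @Override
    public void bind(ValueBindingContext<?> context) {
        // Reuse an ISBNValueBridge like the one from the examples above;
        // the injected bean provides the normalizer name.
        context.bridge( ISBN.class, new ISBNValueBridge(),
                context.typeFactory()
                        .asString()
                        .normalizer( config.isbnNormalizerName() ) );
    }
}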
12.2.12. Programmatic mapping
You can apply a value bridge through the programmatic mapping too. Just pass an instance of the bridge.
Applying a ValueBridge with .valueBridge(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "isbn" )
.keywordField().valueBridge( new ISBNValueBridge() );
Similarly, you can pass a binder instance. You can pass arguments either through the binder’s constructor or through setters.
Applying a ValueBinder with .valueBinder(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "isbn" )
.genericField()
.valueBinder( new ISBNValueBinder() )
.sortable( Sortable.YES );
12.2.13. Incubating features
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The context passed to the value binder’s bind
method
exposes a bridgedElement()
method that gives access to metadata about the value being bound,
in particular its type.
See the javadoc for more information.
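For example, a binder could check the raw type of the bound value before defining the bridge; a minimal sketch, assuming the bridgedElement() metadata exposes the raw type as described above:
public class TypeAwareValueBinder implements ValueBinder {
    @Override
    public void bind(ValueBindingContext<?> context) {
        // Inspect the metadata of the value being bound (incubating API);
        // rawType() is assumed here based on the contract described above.
        Class<?> rawType = context.bridgedElement().rawType();
        if ( !ISBN.class.isAssignableFrom( rawType ) ) {
            throw new IllegalArgumentException( "This binder only supports ISBN properties" );
        }
        context.bridge( ISBN.class, new ISBNValueBridge(), // bridge from the earlier examples
                context.typeFactory().asString().normalizer( "isbn" ) );
    }
}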
12.3. Property bridge
12.3.1. Basics
A property bridge, like a value bridge,
is a pluggable component that implements
the mapping of a property to one or more index fields.
It is applied to a property with the @PropertyBinding
annotation
or with a custom annotation.
Compared to the value bridge, the property bridge is more complex to implement, but covers a broader range of use cases:
-
A property bridge can map a single property to more than one index field.
-
A property bridge can work correctly when applied to a mutable type, provided it is implemented correctly.
However, due to its rather flexible nature, the property bridge does not transparently provide all the features that come for free with a value bridge. They can be supported, but have to be implemented manually. This includes in particular container extractors, which cannot be combined with a property bridge: the property bridge must extract container values explicitly.
Implementing a property bridge requires two components:
-
A custom implementation of
PropertyBinder
, to bind the bridge to a property at bootstrap. This involves declaring the parts of the property that will be used, declaring the index fields that will be populated along with their type, and instantiating the property bridge. -
A custom implementation of
PropertyBridge
, to perform the conversion at runtime. This involves extracting data from the property, transforming it if necessary, and pushing it to index fields.
Below is an example of a custom property bridge that maps a list of invoice line items to several fields summarizing the invoice.
Implementing and using a PropertyBridge
public class InvoiceLineItemsSummaryBinder implements PropertyBinder { (1)
@Override
public void bind(PropertyBindingContext context) { (2)
context.dependencies() (3)
.use( "category" )
.use( "amount" );
IndexSchemaObjectField summaryField = context.indexSchemaElement() (4)
.objectField( "summary" );
IndexFieldType<BigDecimal> amountFieldType = context.typeFactory() (5)
.asBigDecimal().decimalScale( 2 ).toIndexFieldType();
context.bridge( (6)
List.class, (7)
new Bridge( (8)
summaryField.toReference(), (9)
summaryField.field( "total", amountFieldType ).toReference(), (10)
summaryField.field( "books", amountFieldType ).toReference(), (10)
summaryField.field( "shipping", amountFieldType ).toReference() (10)
)
);
}
// ... class continues below
1 | The binder must implement the PropertyBinder interface. |
2 | Implement the bind method in the binder. |
3 | Declare the dependencies of the bridge, i.e. the parts of the property value that the bridge will actually use. This is absolutely necessary in order for Hibernate Search to correctly trigger reindexing when these parts are modified. See Declaring dependencies to bridged elements for more information about declaring dependencies. |
4 | Declare the fields that are populated by this bridge.
In this case we’re creating a summary object field,
which will have multiple subfields (see below).
See Declaring and writing to index fields
for more information about declaring index fields. |
5 | Declare the type of the subfields.
We’re going to index monetary amounts,
so we will use a BigDecimal type with two digits after the decimal point.
See Defining index field types
for more information about declaring index field types. |
6 | Call context.bridge(…) to define the property bridge to use. |
7 | Pass the expected type of property. |
8 | Pass the property bridge instance. |
9 | Pass a reference to the summary object field to the bridge. |
10 | Create a subfield for the total amount of the invoice,
a subfield for the subtotal for books ,
and a subfield for the subtotal for shipping .
Pass references to these fields to the bridge. |
// ... class InvoiceLineItemsSummaryBinder (continued)
@SuppressWarnings("rawtypes")
private static class Bridge (1)
implements PropertyBridge<List> { (2)
private final IndexObjectFieldReference summaryField;
private final IndexFieldReference<BigDecimal> totalField;
private final IndexFieldReference<BigDecimal> booksField;
private final IndexFieldReference<BigDecimal> shippingField;
private Bridge(IndexObjectFieldReference summaryField, (3)
IndexFieldReference<BigDecimal> totalField,
IndexFieldReference<BigDecimal> booksField,
IndexFieldReference<BigDecimal> shippingField) {
this.summaryField = summaryField;
this.totalField = totalField;
this.booksField = booksField;
this.shippingField = shippingField;
}
@Override
public void write(DocumentElement target, List bridgedElement, PropertyBridgeWriteContext context) { (4)
@SuppressWarnings("unchecked")
List<InvoiceLineItem> lineItems = (List<InvoiceLineItem>) bridgedElement;
BigDecimal total = BigDecimal.ZERO;
BigDecimal books = BigDecimal.ZERO;
BigDecimal shipping = BigDecimal.ZERO;
for ( InvoiceLineItem lineItem : lineItems ) { (5)
BigDecimal amount = lineItem.getAmount();
total = total.add( amount );
switch ( lineItem.getCategory() ) {
case BOOK:
books = books.add( amount );
break;
case SHIPPING:
shipping = shipping.add( amount );
break;
}
}
DocumentElement summary = target.addObject( this.summaryField ); (6)
summary.addValue( this.totalField, total ); (7)
summary.addValue( this.booksField, books ); (7)
summary.addValue( this.shippingField, shipping ); (7)
}
}
}
1 | Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
2 | The bridge must implement the PropertyBridge interface.
One generic type argument must be provided: the type of the property,
i.e. the type of the "bridged element". |
3 | The bridge stores references to the fields — it will need them when indexing. |
4 | Implement the write method in the bridge.
This method is called on indexing. |
5 | Extract data from the bridged element, and optionally transform it. |
6 | Add an object to the summary object field.
Note the summary field was declared at the root,
so we call addObject directly on the target argument. |
7 | Add a value to each of the summary.total , summary.books
and summary.shipping fields.
Note the fields were declared as subfields of summary ,
so we call addValue on summary instead of target . |
@Entity
@Indexed
public class Invoice {
@Id
@GeneratedValue
private Integer id;
@ElementCollection
@OrderColumn
@PropertyBinding(binder = @PropertyBinderRef(type = InvoiceLineItemsSummaryBinder.class)) (1)
private List<InvoiceLineItem> lineItems = new ArrayList<>();
// Getters and setters
// ...
}
1 | Apply the bridge using the @PropertyBinding annotation. |
Here is an example of what an indexed document would look like, with the Elasticsearch backend:
{
"summary": {
"total": 38.96,
"books": 30.97,
"shipping": 7.99
}
}
12.3.2. Passing parameters
There are two ways to pass parameters to property bridges:
-
One is (mostly) limited to string parameters, but is trivial to implement.
-
The other can allow any type of parameters, but requires you to declare your own annotations.
Simple, string parameters
You can pass string parameters to the @PropertyBinderRef
annotation and then use them later in the binder:
Passing parameters to a PropertyBinder using the @PropertyBinderRef annotation
public class InvoiceLineItemsSummaryBinder implements PropertyBinder {
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
.use( "category" )
.use( "amount" );
String fieldName = context.params().get( "fieldName", String.class ); (1)
IndexSchemaObjectField summaryField = context.indexSchemaElement()
.objectField( fieldName ); (2)
IndexFieldType<BigDecimal> amountFieldType = context.typeFactory()
.asBigDecimal().decimalScale( 2 ).toIndexFieldType();
context.bridge( List.class, new Bridge(
summaryField.toReference(),
summaryField.field( "total", amountFieldType ).toReference(),
summaryField.field( "books", amountFieldType ).toReference(),
summaryField.field( "shipping", amountFieldType ).toReference()
) );
}
@SuppressWarnings("rawtypes")
private static class Bridge implements PropertyBridge<List> {
/* ... same implementation as before ... */
}
}
1 | Use the binding context to get the parameter value. |
2 | In the bind method, use the value of parameters.
Here we use the fieldName parameter to set the field name,
but we could pass parameters for any purpose:
defining the field as sortable,
defining a normalizer,
… |
@Entity
@Indexed
public class Invoice {
@Id
@GeneratedValue
private Integer id;
@ElementCollection
@OrderColumn
@PropertyBinding(binder = @PropertyBinderRef( (1)
type = InvoiceLineItemsSummaryBinder.class,
params = @Param(name = "fieldName", value = "itemSummary")))
private List<InvoiceLineItem> lineItems = new ArrayList<>();
// Getters and setters
// ...
}
1 | Define the binder to use on the property,
setting the fieldName parameter. |
Parameters with custom annotations
You can pass parameters of any type to the bridge by defining a custom annotation with attributes:
Passing parameters to a PropertyBinder using a custom annotation
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ ElementType.METHOD, ElementType.FIELD }) (2)
@PropertyMapping(processor = @PropertyMappingAnnotationProcessorRef( (3)
type = InvoiceLineItemsSummaryBinding.Processor.class
))
@Documented (4)
public @interface InvoiceLineItemsSummaryBinding {
String fieldName() default ""; (5)
class Processor (6)
implements PropertyMappingAnnotationProcessor<InvoiceLineItemsSummaryBinding> { (7)
@Override
public void process(PropertyMappingStep mapping,
InvoiceLineItemsSummaryBinding annotation,
PropertyMappingAnnotationProcessorContext context) {
InvoiceLineItemsSummaryBinder binder = new InvoiceLineItemsSummaryBinder(); (8)
if ( !annotation.fieldName().isEmpty() ) { (9)
binder.fieldName( annotation.fieldName() );
}
mapping.binder( binder ); (10)
}
}
}
1 | Define an annotation with RUNTIME retention.
Any other retention policy will cause the annotation to be ignored by Hibernate Search. |
2 | Since we’re defining a property bridge, allow the annotation to target either methods (getters) or fields. |
3 | Mark this annotation as a property mapping, and instruct Hibernate Search to apply the given processor whenever it finds this annotation. It is also possible to reference the processor by its CDI/Spring bean name. |
4 | Optionally, mark the annotation as documented, so that it is included in the javadoc of your entities. |
5 | Define an attribute of type String to specify the field name. |
6 | Here the processor class is nested in the annotation class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
7 | The processor must implement the PropertyMappingAnnotationProcessor interface,
setting its generic type argument to the type of the corresponding annotation. |
8 | In the annotation processor, instantiate the binder. |
9 | Process the annotation attributes and pass the data to the binder.
Here we’re using a setter, but passing the data through the constructor would work, too. |
10 | Apply the binder to the property. |
public class InvoiceLineItemsSummaryBinder implements PropertyBinder {
private String fieldName = "summary";
public InvoiceLineItemsSummaryBinder fieldName(String fieldName) { (1)
this.fieldName = fieldName;
return this;
}
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
.use( "category" )
.use( "amount" );
IndexSchemaObjectField summaryField = context.indexSchemaElement()
.objectField( this.fieldName ); (2)
IndexFieldType<BigDecimal> amountFieldType = context.typeFactory()
.asBigDecimal().decimalScale( 2 ).toIndexFieldType();
context.bridge( List.class, new Bridge(
summaryField.toReference(),
summaryField.field( "total", amountFieldType ).toReference(),
summaryField.field( "books", amountFieldType ).toReference(),
summaryField.field( "shipping", amountFieldType ).toReference()
) );
}
@SuppressWarnings("rawtypes")
private static class Bridge implements PropertyBridge<List> {
/* ... same implementation as before ... */
}
}
1 | Implement setters in the binder. Alternatively, we could expose a parameterized constructor. |
2 | In the bind method, use the value of parameters.
Here we use the fieldName parameter to set the field name,
but we could pass parameters for any purpose:
defining the field as sortable,
defining a normalizer,
… |
@Entity
@Indexed
public class Invoice {
@Id
@GeneratedValue
private Integer id;
@ElementCollection
@OrderColumn
@InvoiceLineItemsSummaryBinding( (1)
fieldName = "itemSummary"
)
private List<InvoiceLineItem> lineItems = new ArrayList<>();
// Getters and setters
// ...
}
1 | Apply the bridge using its custom annotation,
setting the fieldName parameter. |
12.3.3. Accessing the ORM session from the bridge
This feature is only available with the Hibernate ORM integration. It cannot be used with the Standalone POJO Mapper in particular. |
Contexts passed to the bridge methods can be used to retrieve the Hibernate ORM session.
Accessing the ORM session from a PropertyBridge
private static class Bridge implements PropertyBridge<Object> {
private final IndexFieldReference<String> field;
private Bridge(IndexFieldReference<String> field) {
this.field = field;
}
@Override
public void write(DocumentElement target, Object bridgedElement, PropertyBridgeWriteContext context) {
Session session = context.extension( HibernateOrmExtension.get() ) (1)
.session(); (2)
// ... do something with the session ...
}
}
1 | Apply an extension to the context to access content specific to Hibernate ORM. |
2 | Retrieve the Session from the extended context. |
12.3.4. Injecting beans into the binder
With compatible frameworks, Hibernate Search supports injecting beans into:
-
the
PropertyMappingAnnotationProcessor
if you use custom annotations. -
the
PropertyBinder
if you use the@PropertyBinding
annotation.
This only applies to beans instantiated
through Hibernate Search’s bean resolution.
As a rule of thumb, if you need to call new MyBinder() explicitly at some point,
the binder won’t get auto-magically injected. |
The context passed to the property binder’s bind
method
also exposes a beanResolver()
method to access the bean resolver and instantiate beans explicitly.
See Bean injection for more details.
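For example, explicit resolution in a property binder could look like the following; a minimal sketch, assuming a hypothetical MyFieldNamingService bean and the resolve(Class, BeanRetrieval) variant of the bean resolver:
public class ResolvedServiceBinder implements PropertyBinder {
    @Override
    public void bind(PropertyBindingContext context) {
        // Resolve a bean explicitly; MyFieldNamingService is hypothetical.
        try ( BeanHolder<MyFieldNamingService> holder = context.beanResolver()
                .resolve( MyFieldNamingService.class, BeanRetrieval.ANY ) ) {
            String fieldName = holder.get().fieldNameFor( "summary" );
            // ... then declare dependencies, fields and the bridge
            // as in the previous examples, using fieldName ...
        }
    }
}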
12.3.5. Programmatic mapping
You can apply a property bridge through the programmatic mapping too. Just pass an instance of the binder. You can pass arguments either through the binder’s constructor, or through setters.
Applying a PropertyBinder with .binder(…)
TypeMappingStep invoiceMapping = mapping.type( Invoice.class );
invoiceMapping.indexed();
invoiceMapping.property( "lineItems" )
.binder( new InvoiceLineItemsSummaryBinder().fieldName( "itemSummary" ) );
12.3.6. Incubating features
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The context passed to the property binder’s bind
method
exposes a bridgedElement()
method that gives access to metadata about the property being bound.
The metadata can be used to inspect the property in detail:
-
Getting the name of the property.
-
Checking the type of the property.
-
Getting accessors to properties.
-
Detecting properties with markers. Markers are applied by specific annotations carrying a
@MarkerBinding
meta-annotation.
See the javadoc for more information.
Below is an example of the simplest use of this metadata, getting the property name and using it as a field name.
Using the bridged element’s metadata in a PropertyBinder
public class InvoiceLineItemsSummaryBinder implements PropertyBinder {
@Override
@SuppressWarnings("uncheked")
public void bind(PropertyBindingContext context) {
context.dependencies()
.use( "category" )
.use( "amount" );
PojoModelProperty bridgedElement = context.bridgedElement(); (1)
IndexSchemaObjectField summaryField = context.indexSchemaElement()
.objectField( bridgedElement.name() ); (2)
IndexFieldType<BigDecimal> amountFieldType = context.typeFactory()
.asBigDecimal().decimalScale( 2 ).toIndexFieldType();
context.bridge( List.class, new Bridge(
summaryField.toReference(),
summaryField.field( "total", amountFieldType ).toReference(),
summaryField.field( "books", amountFieldType ).toReference(),
summaryField.field( "shipping", amountFieldType ).toReference()
) );
}
@SuppressWarnings("rawtypes")
private static class Bridge implements PropertyBridge<List> {
/* ... same implementation as before ... */
}
}
1 | Use the binding context to get the bridged element. |
2 | Use the name of the property as the name of a newly declared index field. |
@Entity
@Indexed
public class Invoice {
@Id
@GeneratedValue
private Integer id;
@ElementCollection
@OrderColumn
@PropertyBinding(binder = @PropertyBinderRef( (1)
type = InvoiceLineItemsSummaryBinder.class
))
private List<InvoiceLineItem> lineItems = new ArrayList<>();
// Getters and setters
// ...
}
1 | Apply the bridge using the @PropertyBinding annotation. |
Here is an example of what an indexed document would look like, with the Elasticsearch backend:
{
"lineItems": {
"total": 38.96,
"books": 30.97,
"shipping": 7.99
}
}
12.4. Type bridge
12.4.1. Basics
A type bridge is a pluggable component that implements
the mapping of a whole type to one or more index fields.
It is applied to a type with the @TypeBinding
annotation
or with a custom annotation.
The type bridge is very similar to the property bridge in its core principles and in how it is implemented. The only (obvious) difference is that the property bridge is applied to properties (fields or getters), while the type bridge is applied to the type (class or interface). This entails some slight differences in the APIs exposed to the type bridge.
Implementing a type bridge requires two components:
-
A custom implementation of
TypeBinder
, to bind the bridge to a type at bootstrap. This involves declaring the properties of the type that will be used, declaring the index fields that will be populated along with their type, and instantiating the type bridge. -
A custom implementation of
TypeBridge
, to perform the conversion at runtime. This involves extracting data from an instance of the type, transforming the data if necessary, and pushing it to index fields.
Below is an example of a custom type bridge that maps
two properties of the Author
class, the firstName
and lastName
,
to a single fullName
field.
Implementing and using a TypeBridge
public class FullNameBinder implements TypeBinder { (1)
@Override
public void bind(TypeBindingContext context) { (2)
context.dependencies() (3)
.use( "firstName" )
.use( "lastName" );
IndexFieldReference<String> fullNameField = context.indexSchemaElement() (4)
.field( "fullName", f -> f.asString().analyzer( "name" ) ) (5)
.toReference();
context.bridge( (6)
Author.class, (7)
new Bridge( (8)
fullNameField (9)
)
);
}
// ... class continues below
1 | The binder must implement the TypeBinder interface. |
2 | Implement the bind method in the binder. |
3 | Declare the dependencies of the bridge, i.e. the parts of the type instances that the bridge will actually use. This is absolutely necessary in order for Hibernate Search to correctly trigger reindexing when these parts are modified. See Declaring dependencies to bridged elements for more information about declaring dependencies. |
4 | Declare the field that will be populated by this bridge.
In this case we’re creating a single fullName String field.
Multiple index fields can be declared.
See Declaring and writing to index fields
for more information about declaring index fields. |
5 | Declare the type of the field.
Since we’re indexing a full name,
we will use a String type with a name analyzer (defined separately, see Analysis).
See Defining index field types
for more information about declaring index field types. |
6 | Call context.bridge(…) to define the type bridge to use. |
7 | Pass the expected type of the entity. |
8 | Pass the type bridge instance. |
9 | Pass a reference to the fullName field to the bridge. |
// ... class FullNameBinder (continued)
private static class Bridge (1)
implements TypeBridge<Author> { (2)
private final IndexFieldReference<String> fullNameField;
private Bridge(IndexFieldReference<String> fullNameField) { (3)
this.fullNameField = fullNameField;
}
@Override
public void write(
DocumentElement target,
Author author,
TypeBridgeWriteContext context) { (4)
String fullName = author.getLastName() + " " + author.getFirstName(); (5)
target.addValue( this.fullNameField, fullName ); (6)
}
}
}
1 | Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
2 | The bridge must implement the TypeBridge interface.
One generic type argument must be provided: the type of the "bridged element". |
3 | The bridge stores references to the fields — it will need them when indexing. |
4 | Implement the write method in the bridge.
This method is called on indexing. |
5 | Extract data from the bridged element, and optionally transform it. |
6 | Set the value of the fullName field.
Note the fullName field was declared at the root,
so we call addValue directly on the target argument. |
@Entity
@Indexed
@TypeBinding(binder = @TypeBinderRef(type = FullNameBinder.class)) (1)
public class Author {
@Id
@GeneratedValue
private Integer id;
private String firstName;
private String lastName;
@GenericField (2)
private LocalDate birthDate;
// Getters and setters
// ...
}
1 | Apply the bridge using the @TypeBinding annotation. |
2 | It is still possible to map properties directly using other annotations,
as long as index field names are distinct from the names used in the type binder.
But no annotation is necessary on the firstName and lastName properties:
these are already handled by the bridge. |
Here is an example of what an indexed document would look like, with the Elasticsearch backend:
{
"fullName": "Asimov Isaac"
}
12.4.2. Passing parameters
There are two ways to pass parameters to type bridges:
-
One is (mostly) limited to string parameters, but is trivial to implement.
-
The other can allow any type of parameters, but requires you to declare your own annotations.
Simple, string parameters
You can pass string parameters to the @TypeBinderRef
annotation and then use them later in the binder:
Passing parameters to a TypeBinder using the @TypeBinderRef annotation
public class FullNameBinder implements TypeBinder {
@Override
public void bind(TypeBindingContext context) {
context.dependencies()
.use( "firstName" )
.use( "lastName" );
IndexFieldReference<String> fullNameField = context.indexSchemaElement()
.field( "fullName", f -> f.asString().analyzer( "name" ) )
.toReference();
IndexFieldReference<String> fullNameSortField = null;
String sortField = context.params().get( "sortField", String.class ); (1)
if ( "true".equalsIgnoreCase( sortField ) ) { (2)
fullNameSortField = context.indexSchemaElement()
.field(
"fullName_sort",
f -> f.asString().normalizer( "name" ).sortable( Sortable.YES )
)
.toReference();
}
context.bridge( Author.class, new Bridge(
fullNameField,
fullNameSortField
) );
}
private static class Bridge implements TypeBridge<Author> {
private final IndexFieldReference<String> fullNameField;
private final IndexFieldReference<String> fullNameSortField;
private Bridge(IndexFieldReference<String> fullNameField,
IndexFieldReference<String> fullNameSortField) { (2)
this.fullNameField = fullNameField;
this.fullNameSortField = fullNameSortField;
}
@Override
public void write(
DocumentElement target,
Author author,
TypeBridgeWriteContext context) {
String fullName = author.getLastName() + " " + author.getFirstName();
target.addValue( this.fullNameField, fullName );
if ( this.fullNameSortField != null ) {
target.addValue( this.fullNameSortField, fullName );
}
}
}
}
1 | Use the binding context to get the parameter value. |
2 | In the bind method, use the value of parameters.
Here we use the sortField parameter to decide whether to add another, sortable field,
but we could pass parameters for any purpose:
defining the field name,
defining a normalizer,
… |
@Entity
@Indexed
@TypeBinding(binder = @TypeBinderRef(type = FullNameBinder.class, (1)
params = @Param(name = "sortField", value = "true")))
public class Author {
@Id
@GeneratedValue
private Integer id;
private String firstName;
private String lastName;
// Getters and setters
// ...
}
1 | Define the binder to use on the type,
setting the sortField parameter. |
Parameters with custom annotations
You can pass parameters of any type to the bridge by defining a custom annotation with attributes:
Passing parameters to a TypeBinder using a custom annotation
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ ElementType.TYPE }) (2)
@TypeMapping(processor = @TypeMappingAnnotationProcessorRef(type = FullNameBinding.Processor.class)) (3)
@Documented (4)
public @interface FullNameBinding {
boolean sortField() default false; (5)
class Processor (6)
implements TypeMappingAnnotationProcessor<FullNameBinding> { (7)
@Override
public void process(TypeMappingStep mapping, FullNameBinding annotation,
TypeMappingAnnotationProcessorContext context) {
FullNameBinder binder = new FullNameBinder() (8)
.sortField( annotation.sortField() ); (9)
mapping.binder( binder ); (10)
}
}
}
1 | Define an annotation with RUNTIME retention.
Any other retention policy will cause the annotation to be ignored by Hibernate Search. |
2 | Since we’re defining a type bridge, allow the annotation to target types. |
3 | Mark this annotation as a type mapping, and instruct Hibernate Search to apply the given binder whenever it finds this annotation. It is also possible to reference the binder by its name, in the case of a CDI/Spring bean. |
4 | Optionally, mark the annotation as documented, so that it is included in the javadoc of your entities. |
5 | Define an attribute of type boolean to specify whether a sort field should be added. |
6 | Here the processor class is nested in the annotation class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
7 | The processor must implement the TypeMappingAnnotationProcessor interface,
setting its generic type argument to the type of the corresponding annotation. |
8 | In the annotation processor, instantiate the binder. |
9 | Process the annotation attributes and pass the data to the binder.
Here we’re using a setter, but passing the data through the constructor would work, too. |
10 | Apply the binder to the type. |
public class FullNameBinder implements TypeBinder {
private boolean sortField;
public FullNameBinder sortField(boolean sortField) { (1)
this.sortField = sortField;
return this;
}
@Override
public void bind(TypeBindingContext context) {
context.dependencies()
.use( "firstName" )
.use( "lastName" );
IndexFieldReference<String> fullNameField = context.indexSchemaElement()
.field( "fullName", f -> f.asString().analyzer( "name" ) )
.toReference();
IndexFieldReference<String> fullNameSortField = null;
if ( this.sortField ) { (2)
fullNameSortField = context.indexSchemaElement()
.field(
"fullName_sort",
f -> f.asString().normalizer( "name" ).sortable( Sortable.YES )
)
.toReference();
}
context.bridge( Author.class, new Bridge(
fullNameField,
fullNameSortField
) );
}
private static class Bridge implements TypeBridge<Author> {
private final IndexFieldReference<String> fullNameField;
private final IndexFieldReference<String> fullNameSortField;
private Bridge(IndexFieldReference<String> fullNameField,
IndexFieldReference<String> fullNameSortField) { (2)
this.fullNameField = fullNameField;
this.fullNameSortField = fullNameSortField;
}
@Override
public void write(
DocumentElement target,
Author author,
TypeBridgeWriteContext context) {
String fullName = author.getLastName() + " " + author.getFirstName();
target.addValue( this.fullNameField, fullName );
if ( this.fullNameSortField != null ) {
target.addValue( this.fullNameSortField, fullName );
}
}
}
}
1 | Implement setters in the binder. Alternatively, we could expose a parameterized constructor. |
2 | In the bind method, use the value of parameters.
Here we use the sortField parameter to decide whether to add another, sortable field,
but we could pass parameters for any purpose:
defining the field name,
defining a normalizer,
… |
@Entity
@Indexed
@FullNameBinding(sortField = true) (1)
public class Author {
@Id
@GeneratedValue
private Integer id;
private String firstName;
private String lastName;
// Getters and setters
// ...
}
1 | Apply the bridge using its custom annotation,
setting the sortField parameter. |
12.4.3. Accessing the ORM session from the bridge
This feature is only available with the Hibernate ORM integration. It cannot be used with the Standalone POJO Mapper in particular. |
Contexts passed to the bridge methods can be used to retrieve the Hibernate ORM session.
Accessing the ORM session from a TypeBridge
private static class Bridge implements TypeBridge<Object> {
private final IndexFieldReference<String> field;
private Bridge(IndexFieldReference<String> field) {
this.field = field;
}
@Override
public void write(DocumentElement target, Object bridgedElement, TypeBridgeWriteContext context) {
Session session = context.extension( HibernateOrmExtension.get() ) (1)
.session(); (2)
// ... do something with the session ...
}
}
1 | Apply an extension to the context to access content specific to Hibernate ORM. |
2 | Retrieve the Session from the extended context. |
12.4.4. Injecting beans into the binder
With compatible frameworks, Hibernate Search supports injecting beans into:
-
the
TypeMappingAnnotationProcessor
if you use custom annotations. -
the
TypeBinder
if you use the@TypeBinding
annotation.
This only applies to beans instantiated
through Hibernate Search’s bean resolution.
As a rule of thumb, if you need to call new MyBinder() explicitly at some point,
the binder won’t get auto-magically injected. |
The context passed to the type binder’s bind
method
also exposes a beanResolver()
method to access the bean resolver and instantiate beans explicitly.
See Bean injection for more details.
12.4.5. Programmatic mapping
You can apply a type bridge through the programmatic mapping too. Just pass an instance of the binder. You can pass arguments either through the binder’s constructor, or through setters.
Applying a TypeBinder with .binder(…)
TypeMappingStep authorMapping = mapping.type( Author.class );
authorMapping.indexed();
authorMapping.binder( new FullNameBinder().sortField( true ) );
12.4.6. Incubating features
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The context passed to the type binder’s bind
method
exposes a bridgedElement()
method that gives access to metadata about the type being bound.
The metadata can in particular be used to inspect the type in detail:
-
Getting accessors to properties.
-
Detecting properties with markers. Markers are applied by specific annotations carrying a
@MarkerBinding
meta-annotation.
See the javadoc for more information.
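For example, accessors obtained from this metadata can replace the hard-coded getter calls used in the FullNameBinder example above; a minimal sketch, assuming that creating an accessor also declares the corresponding dependency, per the PojoElementAccessor contract:
public class FullNameAccessorBinder implements TypeBinder {
    @Override
    public void bind(TypeBindingContext context) {
        // Create accessors from the bound type's metadata (incubating API);
        // the accessed properties are assumed to be registered as dependencies.
        PojoElementAccessor<String> firstNameAccessor = context.bridgedElement()
                .property( "firstName" )
                .createAccessor( String.class );
        PojoElementAccessor<String> lastNameAccessor = context.bridgedElement()
                .property( "lastName" )
                .createAccessor( String.class );
        IndexFieldReference<String> fullNameField = context.indexSchemaElement()
                .field( "fullName", f -> f.asString().analyzer( "name" ) )
                .toReference();
        context.bridge( Object.class,
                new Bridge( firstNameAccessor, lastNameAccessor, fullNameField ) );
    }
    private static class Bridge implements TypeBridge<Object> {
        private final PojoElementAccessor<String> firstNameAccessor;
        private final PojoElementAccessor<String> lastNameAccessor;
        private final IndexFieldReference<String> fullNameField;
        private Bridge(PojoElementAccessor<String> firstNameAccessor,
                PojoElementAccessor<String> lastNameAccessor,
                IndexFieldReference<String> fullNameField) {
            this.firstNameAccessor = firstNameAccessor;
            this.lastNameAccessor = lastNameAccessor;
            this.fullNameField = fullNameField;
        }
        @Override
        public void write(DocumentElement target, Object bridgedElement,
                TypeBridgeWriteContext context) {
            // Read the property values through the accessors instead of casting
            String fullName = lastNameAccessor.read( bridgedElement )
                    + " " + firstNameAccessor.read( bridgedElement );
            target.addValue( fullNameField, fullName );
        }
    }
}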
12.5. Identifier bridge
12.5.1. Basics
An identifier bridge is a pluggable component that implements
the mapping of an entity property to a document identifier.
It is applied to a property with the @DocumentId
annotation
or with a custom annotation.
Implementing an identifier bridge boils down to implementing two methods:
-
one method to convert the property value (any type) to the document identifier (a string);
-
one method to convert the document identifier back to the original property value.
Below is an example of a custom identifier bridge that converts
a custom BookId
type to its string representation and back:
Implementing and using an IdentifierBridge
public class BookIdBridge implements IdentifierBridge<BookId> { (1)
@Override
public String toDocumentIdentifier(BookId value,
IdentifierBridgeToDocumentIdentifierContext context) { (2)
return value.getPublisherId() + "/" + value.getPublisherSpecificBookId();
}
@Override
public BookId fromDocumentIdentifier(String documentIdentifier,
IdentifierBridgeFromDocumentIdentifierContext context) { (3)
String[] split = documentIdentifier.split( "/" );
return new BookId( Long.parseLong( split[0] ), Long.parseLong( split[1] ) );
}
}
1 | The bridge must implement the IdentifierBridge interface.
One generic type argument must be provided:
the type of property values (values in the entity model). |
2 | The toDocumentIdentifier method takes the property value and a context object as parameters,
and is expected to return the corresponding document identifier.
It is called when indexing,
but also when parameters to the search DSL
must be transformed,
in particular for the ID predicate. |
3 | The fromDocumentIdentifier method takes the document identifier and a context object as parameters,
and is expected to return the original property value.
It is called when mapping search hits to the corresponding entity. |
@Entity
@Indexed
public class Book {
@EmbeddedId
@DocumentId( (1)
identifierBridge = @IdentifierBridgeRef(type = BookIdBridge.class) (2)
)
private BookId id = new BookId();
private String title;
// Getters and setters
// ...
}
1 | Map the property to the document identifier. |
2 | Instruct Hibernate Search to use our custom identifier bridge. It is also possible to reference the bridge by its name, in the case of a CDI/Spring bean. |
12.5.2. Type resolution
By default, the identifier bridge’s property type is determined automatically,
using reflection to extract the generic type argument of the IdentifierBridge
interface.
For example, in public class MyBridge implements IdentifierBridge<BookId>
,
the property type is resolved to BookId
:
the bridge will be applied to properties of type BookId
.
The fact that the type is resolved automatically using reflection brings a few limitations.
In particular, it means the generic type argument cannot be just anything;
as a general rule, you should stick to literal types (MyBridge implements IdentifierBridge<BookId>)
and avoid generic type parameters and wildcards
(MyBridge<T extends Number> implements IdentifierBridge<T>,
MyBridge implements IdentifierBridge<List<? extends Number>>).
If you need more complex types,
you can bypass the automatic resolution and specify types explicitly
using an IdentifierBinder.
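For example, a single generic bridge implementation could be reused for several enum identifier types; a minimal sketch, assuming the binder passes the concrete type explicitly:
public class EnumIdentifierBinder<E extends Enum<E>> implements IdentifierBinder {
    private final Class<E> enumType;
    public EnumIdentifierBinder(Class<E> enumType) {
        this.enumType = enumType;
    }
    @Override
    public void bind(IdentifierBindingContext<?> context) {
        // The generic type argument of Bridge cannot be resolved by reflection,
        // so we pass the expected identifier type explicitly.
        context.bridge( enumType, new Bridge<>( enumType ) );
    }
    private static class Bridge<E extends Enum<E>> implements IdentifierBridge<E> {
        private final Class<E> enumType;
        private Bridge(Class<E> enumType) {
            this.enumType = enumType;
        }
        @Override
        public String toDocumentIdentifier(E value,
                IdentifierBridgeToDocumentIdentifierContext context) {
            return value.name();
        }
        @Override
        public E fromDocumentIdentifier(String documentIdentifier,
                IdentifierBridgeFromDocumentIdentifierContext context) {
            return Enum.valueOf( enumType, documentIdentifier );
        }
    }
}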
12.5.3. Compatibility across indexes with isCompatibleWith()
An identifier bridge is involved in indexing,
but also in the search DSLs,
to convert values passed to the id
predicate
to a document identifier that the backend will understand.
When creating an id
predicate targeting multiple entity types (and their indexes),
Hibernate Search will have multiple bridges to choose from: one per entity type.
Since only one predicate with a single value can be created,
Hibernate Search needs to pick a single bridge.
By default, when a custom bridge is assigned to the field, Hibernate Search will throw an exception because it cannot decide which bridge to pick.
If the bridges assigned to the field in all indexes produce the same result,
it is possible to indicate to Hibernate Search that any bridge will do
by implementing isCompatibleWith.
This method accepts another bridge as a parameter,
and returns true if that bridge can be expected to always behave the same as this one.
Implementing isCompatibleWith to support multi-index search
public class BookOrMagazineIdBridge implements IdentifierBridge<BookOrMagazineId> {
@Override
public String toDocumentIdentifier(BookOrMagazineId value,
IdentifierBridgeToDocumentIdentifierContext context) {
return value.getPublisherId() + "/" + value.getPublisherSpecificBookId();
}
@Override
public BookOrMagazineId fromDocumentIdentifier(String documentIdentifier,
IdentifierBridgeFromDocumentIdentifierContext context) {
String[] split = documentIdentifier.split( "/" );
return new BookOrMagazineId( Long.parseLong( split[0] ), Long.parseLong( split[1] ) );
}
@Override
public boolean isCompatibleWith(IdentifierBridge<?> other) {
return getClass().equals( other.getClass() ); (1)
}
}
1 | Implement isCompatibleWith as necessary.
Here we just deem any instance of the same class to be compatible. |
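For illustration, a minimal sketch of the resulting multi-type query, assuming a hypothetical Magazine entity whose identifier uses the same bridge:
// Hypothetical search targeting both Book and Magazine:
// the compatible identifier bridges let Hibernate Search convert
// the BookOrMagazineId to a document identifier for both indexes.
List<Object> hits = searchSession.search( Arrays.asList( Book.class, Magazine.class ) )
        .where( f -> f.id().matching( new BookOrMagazineId( 1L, 42L ) ) )
        .fetchHits( 20 );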
12.5.4. Parsing identifier’s string representation with parseIdentifierLiteral(..)
In some scenarios, Hibernate Search may need to parse a string representation of an identifier,
e.g. when the ValueModel.STRING
is used in the matching clause of an identifier match predicate.
With a custom identifier bridge, Hibernate Search cannot automatically parse such identifier literals by default.
To address this, parseIdentifierLiteral(..)
can be implemented.
Implementing parseIdentifierLiteral(..) in an IdentifierBridge
public class BookIdBridge implements IdentifierBridge<BookId> { (1)
// Implement mandatory toDocumentIdentifier/fromDocumentIdentifier ...
// ...
@Override
public BookId parseIdentifierLiteral(String value) { (2)
if ( value == null ) {
return null;
}
String[] parts = value.split( "/" );
if ( parts.length != 2 ) {
throw new IllegalArgumentException( "BookId string literal must be in a `pubId/bookId` format." );
}
return new BookId( Long.parseLong( parts[0] ), Long.parseLong( parts[1] ) );
}
}
1 | Start implementing the identifier bridge as usual. |
2 | Implement parseIdentifierLiteral(..) to convert a string value to a BookId . |
List<Book> result = searchSession.search( Book.class )
.where( f -> f.id().matching( "1/42", ValueModel.STRING ) ) (1)
.fetchHits( 20 );
1 | Use the ValueModel.STRING and a string representation of the identifier in the identifier match predicate. |
12.5.5. Configuring the bridge more finely with IdentifierBinder
To configure a bridge more finely, it is possible to implement an identifier binder that will be executed at bootstrap. In particular, this binder will be able to inspect the type of the property.
Implementing an IdentifierBinder
public class BookIdBinder implements IdentifierBinder { (1)
@Override
public void bind(IdentifierBindingContext<?> context) { (2)
context.bridge( (3)
BookId.class, (4)
new Bridge() (5)
);
}
private static class Bridge implements IdentifierBridge<BookId> { (6)
@Override
public String toDocumentIdentifier(BookId value,
IdentifierBridgeToDocumentIdentifierContext context) {
return value.getPublisherId() + "/" + value.getPublisherSpecificBookId();
}
@Override
public BookId fromDocumentIdentifier(String documentIdentifier,
IdentifierBridgeFromDocumentIdentifierContext context) {
String[] split = documentIdentifier.split( "/" );
return new BookId( Long.parseLong( split[0] ), Long.parseLong( split[1] ) );
}
}
}
1 | The binder must implement the IdentifierBinder interface. |
2 | Implement the bind method. |
3 | Call context.bridge(…) to define the identifier bridge to use. |
4 | Pass the expected type of property values. |
5 | Pass the identifier bridge instance. |
6 | The identifier bridge must still be implemented.
Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
@Entity
@Indexed
public class Book {
@EmbeddedId
@DocumentId( (1)
identifierBinder = @IdentifierBinderRef(type = BookIdBinder.class) (2)
)
private BookId id = new BookId();
@FullTextField(analyzer = "english")
private String title;
// Getters and setters
// ...
}
1 | Map the property to the document identifier. |
2 | Instruct Hibernate Search to use our custom identifier binder.
Note the use of identifierBinder instead of identifierBridge .
It is also possible to reference the binder by its name, in the case of a CDI/Spring bean. |
12.5.6. Passing parameters
There are two ways to pass parameters to identifier bridges:
-
One is (mostly) limited to string parameters, but is trivial to implement.
-
The other can allow any type of parameters, but requires you to declare your own annotations.
Simple, string parameters
You can pass string parameters to the @IdentifierBinderRef
annotation and then use them later in the binder:
Passing parameters to an IdentifierBridge using the @IdentifierBinderRef annotation
public class OffsetIdentifierBridge implements IdentifierBridge<Integer> { (1)
private final int offset;
public OffsetIdentifierBridge(int offset) { (2)
this.offset = offset;
}
@Override
public String toDocumentIdentifier(Integer propertyValue, IdentifierBridgeToDocumentIdentifierContext context) {
return String.valueOf( propertyValue + offset );
}
@Override
public Integer fromDocumentIdentifier(String documentIdentifier,
IdentifierBridgeFromDocumentIdentifierContext context) {
return Integer.parseInt( documentIdentifier ) - offset;
}
}
1 | Implement a bridge that indexes the identifier as is, but adds a configurable offset. For example, with an offset of 1 and database identifiers starting at 0, index identifiers will start at 1. |
2 | The bridge accepts one parameter in its constructor: the offset to apply to identifiers. |
public class OffsetIdentifierBinder implements IdentifierBinder {
@Override
public void bind(IdentifierBindingContext<?> context) {
String offset = context.params().get( "offset", String.class ); (1)
context.bridge(
Integer.class,
new OffsetIdentifierBridge( Integer.parseInt( offset ) ) (2)
);
}
}
1 | Use the binding context to get the parameter value. |
2 | Pass the parameter value as an argument to the bridge constructor. |
@Entity
@Indexed
public class Book {
@Id
// DB identifiers start at 0, but index identifiers start at 1
@DocumentId(identifierBinder = @IdentifierBinderRef( (1)
type = OffsetIdentifierBinder.class,
params = @Param(name = "offset", value = "1")))
private Integer id;
private String title;
// Getters and setters
// ...
}
1 | Define the binder to use on the identifier, setting the parameter. |
Parameters with custom annotations
You can pass parameters of any type to the bridge by defining a custom annotation with attributes:
IdentifierBridge using a custom annotation
public class OffsetIdentifierBridge implements IdentifierBridge<Integer> { (1)
private final int offset;
public OffsetIdentifierBridge(int offset) { (2)
this.offset = offset;
}
@Override
public String toDocumentIdentifier(Integer propertyValue, IdentifierBridgeToDocumentIdentifierContext context) {
return String.valueOf( propertyValue + offset );
}
@Override
public Integer fromDocumentIdentifier(String documentIdentifier,
IdentifierBridgeFromDocumentIdentifierContext context) {
return Integer.parseInt( documentIdentifier ) - offset;
}
}
1 | Implement a bridge that indexes the identifier with a configurable offset applied. For example, with an offset of 1 and database identifiers starting at 0, index identifiers will start at 1. |
2 | The bridge accepts one parameter in its constructor: the offset to apply to identifiers. |
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ ElementType.METHOD, ElementType.FIELD }) (2)
@PropertyMapping(processor = @PropertyMappingAnnotationProcessorRef( (3)
type = OffsetDocumentId.Processor.class
))
@Documented (4)
public @interface OffsetDocumentId {
int offset(); (5)
class Processor (6)
implements PropertyMappingAnnotationProcessor<OffsetDocumentId> { (7)
@Override
public void process(PropertyMappingStep mapping, OffsetDocumentId annotation,
PropertyMappingAnnotationProcessorContext context) {
OffsetIdentifierBridge bridge = new OffsetIdentifierBridge( (8)
annotation.offset()
);
mapping.documentId() (9)
.identifierBridge( bridge ); (10)
}
}
}
1 | Define an annotation with RUNTIME retention.
Any other retention policy will cause the annotation to be ignored by Hibernate Search. |
2 | Since we’re defining an identifier bridge, allow the annotation to target either methods (getters) or fields. |
3 | Mark this annotation as a property mapping, and instruct Hibernate Search to apply the given processor whenever it finds this annotation. It is also possible to reference the processor by its CDI/Spring bean name. |
4 | Optionally, mark the annotation as documented, so that it is included in the javadoc of your entities. |
5 | Define custom attributes to configure the value bridge. Here we define an offset that the bridge should add to entity identifiers. |
6 | Here the processor class is nested in the annotation class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
7 | The processor must implement the PropertyMappingAnnotationProcessor interface,
setting its generic type argument to the type of the corresponding annotation. |
8 | In the process method, instantiate the bridge
and pass the annotation attribute as constructor argument. |
9 | Declare that this property is to be used to generate the document identifier. |
10 | Instruct Hibernate Search to use our bridge to convert between the property and the document identifiers.
Alternatively, we could pass an identifier binder instead,
using the identifierBinder() method. |
@Entity
@Indexed
public class Book {
@Id
// DB identifiers start at 0, but index identifiers start at 1
@OffsetDocumentId(offset = 1) (1)
private Integer id;
private String title;
// Getters and setters
// ...
}
1 | Apply the bridge using its custom annotation, setting the parameter. |
12.5.7. Accessing the ORM session or session factory from the bridge
This feature is only available with the Hibernate ORM integration. It cannot be used with the Standalone POJO Mapper in particular. |
Contexts passed to the bridge methods can be used to retrieve the Hibernate ORM session or session factory.
IdentifierBridge
public class MyDataIdentifierBridge implements IdentifierBridge<MyData> {
@Override
public String toDocumentIdentifier(MyData propertyValue, IdentifierBridgeToDocumentIdentifierContext context) {
SessionFactory sessionFactory = context.extension( HibernateOrmExtension.get() ) (1)
.sessionFactory(); (2)
// ... do something with the factory ...
}
@Override
public MyData fromDocumentIdentifier(String documentIdentifier,
IdentifierBridgeFromDocumentIdentifierContext context) {
Session session = context.extension( HibernateOrmExtension.get() ) (3)
.session(); (4)
// ... do something with the session ...
}
}
1 | Apply an extension to the context to access content specific to Hibernate ORM. |
2 | Retrieve the SessionFactory from the extended context.
The Session is not available here. |
3 | Apply an extension to the context to access content specific to Hibernate ORM. |
4 | Retrieve the Session from the extended context. |
12.5.8. Injecting beans into the bridge or binder
With compatible frameworks,
Hibernate Search supports injection of beans into both the IdentifierBridge
and the IdentifierBinder
.
This only applies to beans instantiated
through Hibernate Search’s bean resolution.
As a rule of thumb, if you need to call new MyBridge() explicitly at some point,
the bridge won’t get auto-magically injected.
|
The context passed to the identifier binder’s bind
method
also exposes a beanResolver()
method to access the bean resolver and instantiate beans explicitly.
See Bean injection for more details.
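For illustration, in a CDI environment, injection into a binder could look like the following sketch. MyIdCodec is a hypothetical application service, and the binder must be instantiated through Hibernate Search’s bean resolution (e.g. referenced with @IdentifierBinderRef(type = …)) for injection to take place:
@ApplicationScoped
public class CodecIdentifierBinder implements IdentifierBinder {
    @Inject
    MyIdCodec codec; // hypothetical service, injected by the framework

    @Override
    public void bind(IdentifierBindingContext<?> context) {
        context.bridge( BookId.class, new IdentifierBridge<BookId>() {
            @Override
            public String toDocumentIdentifier(BookId value,
                    IdentifierBridgeToDocumentIdentifierContext ctx) {
                return codec.encode( value ); // delegate the conversion to the injected service
            }
            @Override
            public BookId fromDocumentIdentifier(String documentIdentifier,
                    IdentifierBridgeFromDocumentIdentifierContext ctx) {
                return codec.decode( documentIdentifier );
            }
        } );
    }
}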
12.5.9. Programmatic mapping
You can apply an identifier bridge through the programmatic mapping too. Just pass an instance of the bridge.
IdentifierBridge
with .identifierBridge(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "id" )
.documentId().identifierBridge( new BookIdBridge() );
Similarly, you can pass a binder instance:
IdentifierBinder
with .identifierBinder(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed();
bookMapping.property( "id" )
.documentId().identifierBinder( new BookIdBinder() );
12.5.10. Incubating features
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The context passed to the identifier binder’s bind
method
exposes a bridgedElement()
method that gives access to metadata about the value being bound,
in particular its type.
See the javadoc for more information.
12.6. Routing bridge
12.6.1. Basics
A routing bridge is a pluggable component that defines, at runtime,
whether an entity should be indexed and to which shard the corresponding indexed document should be routed.
It is applied to an indexed entity type with the @Indexed
annotation,
using its routingBinder
attribute (@Indexed(routingBinder = …)
).
Implementing a routing bridge requires two components:
-
A custom implementation of
RoutingBinder
, to bind the bridge to an indexed entity type at bootstrap. This involves declaring the properties of the indexed entity type that will be used by the routing bridge and instantiating the routing bridge. -
A custom implementation of
RoutingBridge
, to route entities to the index at runtime. This involves extracting data from an instance of the type, transforming the data if necessary, and defining the current route (or marking the entity as "not indexed"). If routing can change during the lifetime of an entity instance, you will also need to define the potential previous routes, so that Hibernate Search can find and delete previous documents indexed for this entity instance.
In the sections below, you will find examples for the main use cases:
12.6.2. Using a routing bridge for conditional indexing
Below is a first example of a custom routing bridge that
disables indexing for instances of the Book
class if their status is ARCHIVED
.
RoutingBridge for conditional indexing
public class BookStatusRoutingBinder implements RoutingBinder { (1)
@Override
public void bind(RoutingBindingContext context) { (2)
context.dependencies() (3)
.use( "status" );
context.bridge( (4)
Book.class, (5)
new Bridge() (6)
);
}
// ... class continues below
1 | The binder must implement the RoutingBinder interface. |
2 | Implement the bind method in the binder. |
3 | Declare the dependencies of the bridge, i.e. the parts of the entity instances that the bridge will actually use. See Declaring dependencies to bridged elements for more information about declaring dependencies. |
4 | Call context.bridge(…) to define the routing bridge to use. |
5 | Pass the expected type of indexed entities. |
6 | Pass the routing bridge instance. |
// ... class BookStatusRoutingBinder (continued)
public static class Bridge (1)
implements RoutingBridge<Book> { (2)
@Override
public void route(DocumentRoutes routes, Object entityIdentifier, (3)
Book indexedEntity, RoutingBridgeRouteContext context) {
switch ( indexedEntity.getStatus() ) { (4)
case PUBLISHED:
routes.addRoute(); (5)
break;
case ARCHIVED:
routes.notIndexed(); (6)
break;
}
}
@Override
public void previousRoutes(DocumentRoutes routes, Object entityIdentifier, (7)
Book indexedEntity, RoutingBridgeRouteContext context) {
routes.addRoute(); (8)
}
}
}
1 | Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
2 | The bridge must implement the RoutingBridge interface. |
3 | Implement the route(…) method in the bridge.
This method is called on indexing. |
4 | Extract data from the bridged element and inspect it. |
5 | If the Book status is PUBLISHED , then we want to proceed with indexing:
add a route so that Hibernate Search indexes the entity as usual. |
6 | If the Book status is ARCHIVED , then we don’t want to index it:
call notIndexed() so that Hibernate Search knows it should not index the entity. |
7 | When a book gets archived, there might be a previously indexed document that needs to be deleted.
The previousRoutes(…) method allows you to tell Hibernate Search where this document can possibly be.
When necessary, Hibernate Search will follow each given route, look for documents corresponding to this entity,
and delete them. |
8 | In this case, routing is very simple: there is only one possible previous route, so we only register that route. |
@Entity
@Indexed(routingBinder = @RoutingBinderRef(type = BookStatusRoutingBinder.class)) (1)
public class Book {
@Id
private Integer id;
private String title;
@Basic(optional = false)
@KeywordField (2)
private Status status;
// Getters and setters
// ...
}
1 | Apply the bridge using the @Indexed annotation. |
2 | Properties used in the bridge can still be mapped as index fields, but they don’t have to be. |
12.6.3. Using a routing bridge to control routing to index shards
For a preliminary introduction to sharding, including how it works in Hibernate Search and what its limitations are, see Sharding and routing. |
Routing bridges can also be used to control routing to index shards.
Below is an example of a custom routing bridge that
uses the genre
property of the Book
class as a routing key.
See Routing for an example of how to use routing in search queries,
with the same mapping as the example below.
RoutingBridge to control routing to index shards
public class BookGenreRoutingBinder implements RoutingBinder { (1)
@Override
public void bind(RoutingBindingContext context) { (2)
context.dependencies() (3)
.use( "genre" );
context.bridge( (4)
Book.class, (5)
new Bridge() (6)
);
}
// ... class continues below
1 | The binder must implement the RoutingBinder interface. |
2 | Implement the bind method in the binder. |
3 | Declare the dependencies of the bridge, i.e. the parts of the entity instances that the bridge will actually use. See Declaring dependencies to bridged elements for more information about declaring dependencies. |
4 | Call context.bridge(…) to define the routing bridge to use. |
5 | Pass the expected type of indexed entities. |
6 | Pass the routing bridge instance. |
// ... class BookGenreRoutingBinder (continued)
public static class Bridge implements RoutingBridge<Book> { (1)
@Override
public void route(DocumentRoutes routes, Object entityIdentifier, (2)
Book indexedEntity, RoutingBridgeRouteContext context) {
String routingKey = indexedEntity.getGenre().name(); (3)
routes.addRoute().routingKey( routingKey ); (4)
}
@Override
public void previousRoutes(DocumentRoutes routes, Object entityIdentifier, (5)
Book indexedEntity, RoutingBridgeRouteContext context) {
for ( Genre possiblePreviousGenre : Genre.values() ) {
String routingKey = possiblePreviousGenre.name();
routes.addRoute().routingKey( routingKey ); (6)
}
}
}
}
1 | The bridge must implement the RoutingBridge interface.
Here the bridge class is nested in the binder class,
because it is more convenient,
but you are obviously free to implement it in a separate Java file. |
2 | Implement the route(…) method in the bridge.
This method is called on indexing. |
3 | Extract data from the bridged element and derive a routing key. |
4 | Add a route with the generated routing key. Hibernate Search will follow this route when adding/updating/deleting the entity in the index. |
5 | When the genre of a book changes, the route will change,
and there might be a previously indexed document in the index that needs to be deleted.
The previousRoutes(…) method allows you to tell Hibernate Search where this document can possibly be.
When necessary, Hibernate Search will follow each given route, look for documents corresponding to this entity,
and delete them. |
6 | In this case, we simply don’t know what the previous genre of the book was, so we tell Hibernate Search to follow all possible routes, one for every possible genre. |
@Entity
@Indexed(routingBinder = @RoutingBinderRef(type = BookGenreRoutingBinder.class)) (1)
public class Book {
@Id
private Integer id;
private String title;
@Basic(optional = false)
@KeywordField (2)
private Genre genre;
// Getters and setters
// ...
}
1 | Apply the bridge using the @Indexed annotation. |
2 | Properties used in the bridge can still be mapped as index fields, but they don’t have to be. |
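For illustration only (see Routing for the authoritative examples), a search query restricted to the shard(s) for a single genre might look like the following sketch:
List<Book> hits = searchSession.search( Book.class )
        .where( f -> f.match().field( "genre" )
                .matching( Genre.SCIENCE_FICTION ) )
        .routing( Genre.SCIENCE_FICTION.name() ) // only target the shard(s) matching this routing key
        .fetchHits( 20 );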
Optimizing previousRoutes(…)
In some cases you might have more information than in the example above about the previous routes, and you can take advantage of that information to trigger fewer deletions in the index. |
12.6.4. Passing parameters
There are two ways to pass parameters to routing bridges:
-
One is (mostly) limited to string parameters, but is trivial to implement.
-
The other can allow any type of parameters, but requires you to declare your own annotations.
Refer to this example for TypeBinder
,
which is fairly similar to what you’ll need for a RoutingBinder
.
12.6.5. Accessing the ORM session from the bridge
This feature is only available with the Hibernate ORM integration. It cannot be used with the Standalone POJO Mapper in particular. |
Contexts passed to the bridge methods can be used to retrieve the Hibernate ORM session.
RoutingBridge
private static class Bridge implements RoutingBridge<MyEntity> {
@Override
public void route(DocumentRoutes routes, Object entityIdentifier, MyEntity indexedEntity,
RoutingBridgeRouteContext context) {
Session session = context.extension( HibernateOrmExtension.get() ) (1)
.session(); (2)
// ... do something with the session ...
}
@Override
public void previousRoutes(DocumentRoutes routes, Object entityIdentifier, MyEntity indexedEntity,
RoutingBridgeRouteContext context) {
// ...
}
}
1 | Apply an extension to the context to access content specific to Hibernate ORM. |
2 | Retrieve the Session from the extended context. |
12.6.6. Injecting beans into the binder
With compatible frameworks, Hibernate Search supports injecting beans into:
-
the
TypeMappingAnnotationProcessor
if you use custom annotations. -
the
RoutingBinder
if you use@Indexed(routingBinder = …)
.
This only applies to beans instantiated
through Hibernate Search’s bean resolution.
As a rule of thumb, if you need to call new MyBinder() explicitly at some point,
the binder won’t get auto-magically injected.
|
The context passed to the routing binder’s bind
method
also exposes a beanResolver()
method to access the bean resolver and instantiate beans explicitly.
See Bean injection for more details.
12.6.7. Programmatic mapping
You can apply a routing key bridge through the programmatic mapping too. Just pass an instance of the binder.
RoutingBinder
with .binder(…)
TypeMappingStep bookMapping = mapping.type( Book.class );
bookMapping.indexed()
.routingBinder( new BookStatusRoutingBinder() );
bookMapping.property( "status" ).keywordField();
12.6.8. Incubating features
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The context passed to the routing binder’s bind
method
exposes a bridgedElement()
method that gives access to metadata about the type being bound.
The metadata can in particular be used to inspect the type in detail:
-
Getting accessors to properties.
-
Detecting properties with markers. Markers are applied by specific annotations carrying a
@MarkerBinding
meta-annotation.
See the javadoc for more information.
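As a sketch of the first point, assuming the bound type has a genre property, a binder could create an accessor and hand it to the bridge. This relies on the incubating PojoModelType/PojoElementAccessor API, so treat the exact calls as an assumption:
@Override
public void bind(RoutingBindingContext context) {
    context.dependencies().use( "genre" );
    // Inspect the bound type and create an accessor to its "genre" property
    PojoElementAccessor<Genre> genreAccessor = context.bridgedElement()
            .property( "genre" )
            .createAccessor( Genre.class );
    context.bridge( Book.class, new Bridge( genreAccessor ) ); // the bridge would call genreAccessor.read( book )
}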
12.7. Declaring dependencies to bridged elements
12.7.1. Basics
In order to keep the index synchronized, Hibernate Search needs to be aware of all the entity properties that are used to produce indexed documents, so that it can trigger reindexing when they change.
When using a type bridge or a property bridge, the bridge itself decides which entity properties to access during indexing. Thus, it needs to let Hibernate Search know of its "dependencies" (the entity properties it may access).
This is done through a dedicated DSL, accessible from the bind(…)
method of TypeBinder
and PropertyBinder
.
Below is an example of a type binder that expects to be applied to the ScientificPaper
type,
and declares a dependency to the paper author’s last name and first name.
public class AuthorFullNameBinder implements TypeBinder {
@Override
public void bind(TypeBindingContext context) {
context.dependencies() (1)
.use( "author.firstName" ) (2)
.use( "author.lastName" ); (3)
IndexFieldReference<String> authorFullNameField = context.indexSchemaElement()
.field( "authorFullName", f -> f.asString().analyzer( "name" ) )
.toReference();
context.bridge( ScientificPaper.class, new Bridge( authorFullNameField ) );
}
private static class Bridge implements TypeBridge<ScientificPaper> {
// ...
}
}
1 | Start the declaration of dependencies. |
2 | Declare that the bridge will access the paper’s author property,
then the author’s firstName property. |
3 | Declare that the bridge will access the paper’s author property,
then the author’s lastName property. |
The above should be enough to get started, but if you want to know more, here are a few facts about declaring dependencies.
- Paths are relative to the bridged element
-
For example:
-
for a type bridge on type
ScientificPaper
, pathauthor
will refer to the value of propertyauthor
onScientificPaper
instances. -
for a property bridge on the property
author
ofScientificPaper
, pathname
will refer to the value of propertyname
onAuthor
instances.
-
- Every component of given paths will be considered as a dependency
-
You do not need to declare any parent path.
For example, if the path
myProperty.someOtherProperty
is declared as used, Hibernate Search will automatically assume thatmyProperty
is also used. - Only mutable properties need to be declared
-
If a property never, ever changes after the entity is first persisted, then it will never trigger reindexing and Hibernate Search does not need to know about the dependency.
If your bridge only relies on immutable properties, see
useRootOnly()
: declaring no dependency at all. - Associations included in dependency paths need to have an inverse side
-
If you declare a dependency that crosses entity boundaries through an association, and that association has no inverse side in the other entity, an exception will be thrown.
For example, when you declare a dependency to path
author.lastName
, Hibernate Search infers that whenever the last name of an author changes, its books need to be re-indexed. Thus, when it detects an author’s last name changed, Hibernate Search will need to retrieve the books to reindex them. That’s why theauthor
association in entityScientificPaper
needs to have an inverse side in entityAuthor
, e.g. abooks
association, as sketched below. See Tuning when to trigger reindexing for more information about these constraints and how to address non-trivial models.
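A minimal sketch of such an inverse side, assuming standard JPA mappings (names are illustrative):
@Entity
public class ScientificPaper {
    @Id
    private Integer id;
    @ManyToOne
    private Author author; // the association crossed by the path "author.lastName"
    // ...
}

@Entity
public class Author {
    @Id
    private Integer id;
    private String lastName;
    @OneToMany(mappedBy = "author")
    private List<ScientificPaper> books = new ArrayList<>(); // inverse side: lets Hibernate Search find the papers to reindex
    // ...
}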
12.7.2. Traversing non-default containers (map keys, …)
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
When a path element refers to a property of a container type (List
, Map
, Optional
, …),
the path will be implicitly resolved to elements of that container.
For example someMap.otherObject
will resolve to the otherObject
property
of the values (not the keys) of someMap
.
If the default resolution is not what you need,
you can explicitly control how to traverse containers by passing PojoModelPath
objects
instead of just strings:
@Entity
@Indexed
@TypeBinding(binder = @TypeBinderRef(type = BookEditionsForSaleTypeBinder.class)) (1)
public class Book {
@Id
@GeneratedValue
private Integer id;
@FullTextField(analyzer = "name")
private String title;
@ElementCollection
@JoinTable(
name = "book_editionbyprice",
joinColumns = @JoinColumn(name = "book_id")
)
@MapKeyJoinColumn(name = "edition_id")
@Column(name = "price")
@OrderBy("edition_id asc")
@AssociationInverseSide(
extraction = @ContainerExtraction(BuiltinContainerExtractors.MAP_KEY),
inversePath = @ObjectPath(@PropertyValue(propertyName = "book"))
)
private Map<BookEdition, BigDecimal> priceByEdition = new LinkedHashMap<>(); (2)
public Book() {
}
// Getters and setters
// ...
}
1 | Apply a custom bridge to the Book entity. |
2 | This (rather complex) map is the one we’ll access in the custom bridge. |
public class BookEditionsForSaleTypeBinder implements TypeBinder {
@Override
public void bind(TypeBindingContext context) {
context.dependencies()
.use( PojoModelPath.builder() (1)
.property( "priceByEdition" ) (2)
.value( BuiltinContainerExtractors.MAP_KEY ) (3)
.property( "label" ) (4)
.toValuePath() ); (5)
IndexFieldReference<String> editionsForSaleField = context.indexSchemaElement()
.field( "editionsForSale", f -> f.asString().analyzer( "english" ) )
.multiValued()
.toReference();
context.bridge( Book.class, new Bridge( editionsForSaleField ) );
}
private static class Bridge implements TypeBridge<Book> {
private final IndexFieldReference<String> editionsForSaleField;
private Bridge(IndexFieldReference<String> editionsForSaleField) {
this.editionsForSaleField = editionsForSaleField;
}
@Override
public void write(DocumentElement target, Book book, TypeBridgeWriteContext context) {
for ( BookEdition edition : book.getPriceByEdition().keySet() ) { (6)
target.addValue( editionsForSaleField, edition.getLabel() );
}
}
}
}
1 | Start building a PojoModelPath . |
2 | Append the priceByEdition property (a Map ) to the path. |
3 | Explicitly mention that the bridge will access keys from the priceByEdition map — the book editions.
Without this, Hibernate Search would have assumed that values are accessed. |
4 | Append the label property to the path. This is the label property of book editions. |
5 | Create the path and pass it to .use(…) to declare the dependency. |
6 | This is the actual code that accesses the paths as declared above. |
For property binders applied to a container property,
you can control how to traverse the property itself
by passing a container extractor path as the first argument to use(…)
:
@Entity
@Indexed
public class Book {
@Id
@GeneratedValue
private Integer id;
@FullTextField(analyzer = "name")
private String title;
@ElementCollection
@JoinTable(
name = "book_editionbyprice",
joinColumns = @JoinColumn(name = "book_id")
)
@MapKeyJoinColumn(name = "edition_id")
@Column(name = "price")
@OrderBy("edition_id asc")
@AssociationInverseSide(
extraction = @ContainerExtraction(BuiltinContainerExtractors.MAP_KEY),
inversePath = @ObjectPath(@PropertyValue(propertyName = "book"))
)
@PropertyBinding(binder = @PropertyBinderRef(type = BookEditionsForSalePropertyBinder.class)) (1)
private Map<BookEdition, BigDecimal> priceByEdition = new LinkedHashMap<>();
public Book() {
}
// Getters and setters
// ...
}
1 | Apply a custom bridge to the priceByEdition property of the Book entity. |
public class BookEditionsForSalePropertyBinder implements PropertyBinder {
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
.use( ContainerExtractorPath.explicitExtractor( BuiltinContainerExtractors.MAP_KEY ), (1)
"label" ); (2)
IndexFieldReference<String> editionsForSaleField = context.indexSchemaElement()
.field( "editionsForSale", f -> f.asString().analyzer( "english" ) )
.multiValued()
.toReference();
context.bridge( Map.class, new Bridge( editionsForSaleField ) );
}
@SuppressWarnings("rawtypes")
private static class Bridge implements PropertyBridge<Map> {
private final IndexFieldReference<String> editionsForSaleField;
private Bridge(IndexFieldReference<String> editionsForSaleField) {
this.editionsForSaleField = editionsForSaleField;
}
@Override
public void write(DocumentElement target, Map bridgedElement, PropertyBridgeWriteContext context) {
@SuppressWarnings("unchecked")
Map<BookEdition, ?> priceByEdition = (Map<BookEdition, ?>) bridgedElement;
for ( BookEdition edition : priceByEdition.keySet() ) { (3)
target.addValue( editionsForSaleField, edition.getLabel() );
}
}
}
}
1 | Explicitly mention that the bridge will access keys from the priceByEdition property — the book editions.
Without this, Hibernate Search would have assumed that values are accessed. |
2 | Declare a dependency to the label property of book editions. |
3 | This is the actual code that accesses the paths as declared above. |
12.7.3. useRootOnly()
: declaring no dependency at all
If your bridge only accesses immutable properties, then it’s safe to declare that its only dependency is to the root object.
To do so, call .dependencies().useRootOnly()
.
Without this call, Hibernate Search will suspect an oversight and will throw an exception on startup. |
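A minimal sketch, assuming a hypothetical code property that is assigned once at creation time and never updated:
public class ImmutableCodeBinder implements TypeBinder {
    @Override
    public void bind(TypeBindingContext context) {
        context.dependencies().useRootOnly(); // the bridge only reads immutable state

        IndexFieldReference<String> codeField = context.indexSchemaElement()
                .field( "code", f -> f.asString() )
                .toReference();

        context.bridge( Book.class, new TypeBridge<Book>() {
            @Override
            public void write(DocumentElement target, Book book, TypeBridgeWriteContext ctx) {
                target.addValue( codeField, book.getCode() ); // hypothetical immutable property
            }
        } );
    }
}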
12.7.4. fromOtherEntity(…)
: declaring dependencies using the inverse path
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
It is not always possible to represent the dependency as a path from the bridged element to the values accessed by the bridge.
In particular, when the bridge relies on other components (queries, services) to retrieve another entity, there may not even be a path from the bridge element to that entity. In this case, if there is an inverse path from the other entity to the bridged element, and the bridged element is an entity, you can simply declare the dependency from the other entity, as shown below.
@Entity
@Indexed
@TypeBinding(binder = @TypeBinderRef(type = ScientificPapersReferencedByBinder.class)) (1)
public class ScientificPaper {
@Id
private Integer id;
private String title;
@ManyToMany
private List<ScientificPaper> references = new ArrayList<>();
public ScientificPaper() {
}
// Getters and setters
// ...
}
1 | Apply a custom bridge to the ScientificPaper type. |
public class ScientificPapersReferencedByBinder implements TypeBinder {
@Override
public void bind(TypeBindingContext context) {
context.dependencies()
.fromOtherEntity( ScientificPaper.class, "references" ) (1)
.use( "title" ); (2)
IndexFieldReference<String> papersReferencingThisOneField = context.indexSchemaElement()
.field( "referencedBy", f -> f.asString().analyzer( "english" ) )
.multiValued()
.toReference();
context.bridge( ScientificPaper.class, new Bridge( papersReferencingThisOneField ) );
}
private static class Bridge implements TypeBridge<ScientificPaper> {
private final IndexFieldReference<String> referencedByField;
private Bridge(IndexFieldReference<String> referencedByField) {
this.referencedByField = referencedByField;
}
@Override
public void write(DocumentElement target, ScientificPaper paper, TypeBridgeWriteContext context) {
for ( String referencingPaperTitle : findReferencingPaperTitles( context, paper ) ) { (3)
target.addValue( referencedByField, referencingPaperTitle );
}
}
private List<String> findReferencingPaperTitles(TypeBridgeWriteContext context, ScientificPaper paper) {
Session session = context.extension( HibernateOrmExtension.get() ).session();
Query<String> query = session.createQuery(
"select p.title from ScientificPaper p where :this member of p.references",
String.class );
query.setParameter( "this", paper );
return query.list();
}
}
}
1 | Declare that this bridge relies on other entities of type ScientificPaper ,
and that those other entities reference the indexed entity through their references property. |
2 | Declare which parts of the other entities are actually used by the bridge. |
3 | The bridge retrieves the other entity through a query, but then uses exclusively the parts that were declared previously. |
Currently, dependencies declared this way will be ignored when the "other entity" gets deleted. See HSEARCH-3567 to track progress on solving this problem. |
12.8. Declaring and writing to index fields
12.8.1. Basics
When implementing a PropertyBinder
or TypeBinder
,
it is necessary to declare the index fields that the bridge will contribute to.
This declaration is performed using a dedicated DSL.
The entry point to this DSL is the IndexSchemaElement, which represents the part of the document structure that the binder will push data to.
From the IndexSchemaElement, it is possible to declare fields.
The declaration of each field yields a field reference.
This reference is to be stored in the bridge,
which will use it at runtime to set the value of this field in a given document,
represented by a DocumentElement
.
Below is a simple example using the DSL to declare a single field in a property binder and then write to that field in a property bridge.
public class ISBNBinder implements PropertyBinder {
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement(); (1)
IndexFieldReference<String> field =
schemaElement.field( (2)
"isbn", (3)
f -> f.asString() (4)
.normalizer( "isbn" )
)
.toReference(); (5)
context.bridge( (6)
ISBN.class, (7)
new ISBNBridge( field ) (8)
);
}
}
1 | Get the IndexSchemaElement, the entry point to the index field declaration DSL. |
2 | Declare a field. |
3 | Pass the name of the field. |
4 | Declare the type of the field. This is done through a lambda taking advantage of another DSL. See Defining index field types for more information. |
5 | Get a reference to the declared field. |
6 | Call context.bridge(…) to define the bridge to use. |
7 | Pass the expected type of values. |
8 | Pass the bridge instance. |
private static class ISBNBridge implements PropertyBridge<ISBN> {
private final IndexFieldReference<String> fieldReference;
private ISBNBridge(IndexFieldReference<String> fieldReference) {
this.fieldReference = fieldReference;
}
@Override
public void write(DocumentElement target, ISBN bridgedElement, PropertyBridgeWriteContext context) {
String indexedValue = /* ... (extraction of data, not relevant) ... */
target.addValue( this.fieldReference, indexedValue ); (1)
}
}
1 | In the bridge, use the reference obtained above to add a value to the field for the current document. |
12.8.2. Type objects
The lambda syntax to declare the type of each field is convenient, but sometimes gets in the way, in particular when multiple fields must be declared with the exact same type.
For that reason, the context object passed to binders exposes a typeFactory()
method.
Using this factory, it is possible to build IndexFieldType
objects
that can be re-used in multiple field declarations.
@Override
public void bind(TypeBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement();
IndexFieldType<String> nameType = context.typeFactory() (1)
.asString() (2)
.analyzer( "name" )
.toIndexFieldType(); (3)
context.bridge( Author.class, new Bridge(
schemaElement.field( "firstName", nameType ) (4)
.toReference(),
schemaElement.field( "lastName", nameType ) (4)
.toReference(),
schemaElement.field( "fullName", nameType ) (4)
.toReference()
) );
}
1 | Get the type factory. |
2 | Define the type. |
3 | Get the resulting type. |
4 | Pass the type directly instead of using a lambda when defining the field. |
12.8.3. Multivalued fields
Fields are considered single-valued by default: if you attempt to add multiple values to a single-valued field during indexing, an exception will be thrown.
In order to add multiple values to a field, this field must be marked as multivalued during its declaration:
@Override
public void bind(TypeBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement();
context.bridge( Author.class, new Bridge(
schemaElement.field( "names", f -> f.asString().analyzer( "name" ) )
.multiValued() (1)
.toReference()
) );
}
1 | Declare the field as multivalued. |
12.8.4. Object fields
The previous sections only presented flat schemas with value fields, but the index schema can actually be organized in a tree structure, with two categories of index fields:
-
Value fields, often simply called "fields", which hold an atomic value of a specific type: string, integer, date, …
-
Object fields, which hold a composite value.
Object fields are declared similarly to value fields, with an additional step to declare each subfield, as shown below.
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement();
IndexSchemaObjectField summaryField =
schemaElement.objectField( "summary" ); (1)
IndexFieldType<BigDecimal> amountFieldType = context.typeFactory()
.asBigDecimal().decimalScale( 2 )
.toIndexFieldType();
context.bridge( List.class, new Bridge(
summaryField.toReference(), (2)
summaryField.field( "total", amountFieldType ) (3)
.toReference(),
summaryField.field( "books", amountFieldType ) (3)
.toReference(),
summaryField.field( "shipping", amountFieldType ) (3)
.toReference()
) );
}
1 | Declare an object field with objectField , passing its name in parameter. |
2 | Get a reference to the declared object field and pass it to the bridge for later use. |
3 | Create subfields, get references to these fields and pass them to the bridge for later use. |
The subfields of an object field can include object fields. |
Just as value fields, object fields are single-valued by default.
Be sure to call .multiValued() during the object field definition if you want to be able to add multiple objects to a single object field. |
Object fields as well as their subfields are each assigned a reference, which will be used by the bridge to write to documents, as shown in the example below.
@Override
public void write(DocumentElement target, List bridgedElement, PropertyBridgeWriteContext context) {
@SuppressWarnings("unchecked")
List<InvoiceLineItem> lineItems = (List<InvoiceLineItem>) bridgedElement;
BigDecimal total = BigDecimal.ZERO;
BigDecimal books = BigDecimal.ZERO;
BigDecimal shipping = BigDecimal.ZERO;
/* ... (computation of amounts, not relevant) ... */
DocumentElement summary = target.addObject( this.summaryField ); (1)
summary.addValue( this.totalField, total ); (2)
summary.addValue( this.booksField, books ); (2)
summary.addValue( this.shippingField, shipping ); (2)
}
1 | Add an object to the summary object field for the current document,
and get a reference to that object. |
2 | Add a value to the subfields for the object we just added.
Note we’re calling addValue on the object we just added, not on target . |
12.8.5. Object structure
By default, object fields are flattened,
meaning that the tree structure is not preserved.
See DEFAULT
or FLATTENED
structure for more information.
It is possible to switch to a nested structure
by passing an argument to the objectField
method, as shown below.
Each value of the object field will then transparently be indexed as a separate nested document,
without any change to the write
method of the bridge.
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement();
IndexSchemaObjectField lineItemsField =
schemaElement.objectField( (1)
"lineItems", (2)
ObjectStructure.NESTED (3)
)
.multiValued(); (4)
context.bridge( List.class, new Bridge(
lineItemsField.toReference(), (5)
lineItemsField.field( "category", f -> f.asString() ) (6)
.toReference(),
lineItemsField.field( "amount", f -> f.asBigDecimal().decimalScale( 2 ) ) (7)
.toReference()
) );
}
1 | Declare an object field with objectField . |
2 | Define the name of the object field. |
3 | Define the structure of the object field, here NESTED . |
4 | Define the object field as multivalued. |
5 | Get a reference to the declared object field and pass it to the bridge for later use. |
6 | Create subfields, get references to these fields and pass them to the bridge for later use. |
12.8.6. Dynamic fields with field templates
Fields declared in the sections above are all static: their path and type are known on bootstrap.
In some very specific cases, the path of a field is not known until you actually index it;
for example, you may want to index a Map<String, Integer>
by using the map keys as field names,
or index the properties of a JSON object whose schema is not known in advance.
The fields, then, are considered dynamic.
Dynamic fields are not declared on bootstrap, but need to match a field template that is declared on bootstrap. The template includes the field types and structural information (multivalued or not, …), but omits the field names.
A field template is always declared in a binder: either in a type binder
or in a property binder.
As for static fields, the entry point to declaring a template is the IndexSchemaElement passed to the binder’s bind(…) method.
A call to the fieldTemplate
method on the schema element will declare a field template.
Assuming a field template was declared during binding,
the bridge can then add dynamic fields to the DocumentElement
when indexing,
by calling addValue
and passing the field name (as a string) and the field value.
Below is a simple example using the DSL to declare a field template in a property binder and then write to that field in a property bridge.
public class UserMetadataBinder implements PropertyBinder {
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement();
IndexSchemaObjectField userMetadataField =
schemaElement.objectField( "userMetadata" ); (1)
userMetadataField.fieldTemplate( (2)
"userMetadataValueTemplate", (3)
f -> f.asString().analyzer( "english" ) (4)
); (5)
context.bridge( Map.class, new UserMetadataBridge(
userMetadataField.toReference() (6)
) );
}
}
1 | Declare an object field with objectField .
It’s better to always host your dynamic fields on a dedicated object field,
to avoid conflicts with other templates. |
2 | Declare a field template with fieldTemplate . |
3 | Pass the template name — this is not the field name, and is only used to uniquely identify the template. |
4 | Define the field type. |
5 | Contrary to static field declarations, field template declarations do not return a field reference, because you won’t need it when writing to the document. |
6 | Get a reference to the declared object field and pass it to the bridge for later use. |
@SuppressWarnings("rawtypes")
private static class UserMetadataBridge implements PropertyBridge<Map> {
private final IndexObjectFieldReference userMetadataFieldReference;
private UserMetadataBridge(IndexObjectFieldReference userMetadataFieldReference) {
this.userMetadataFieldReference = userMetadataFieldReference;
}
@Override
public void write(DocumentElement target, Map bridgedElement, PropertyBridgeWriteContext context) {
@SuppressWarnings("unchecked")
Map<String, String> userMetadata = (Map<String, String>) bridgedElement;
DocumentElement indexedUserMetadata = target.addObject( userMetadataFieldReference ); (1)
for ( Map.Entry<String, String> entry : userMetadata.entrySet() ) {
String fieldName = entry.getKey();
String fieldValue = entry.getValue();
indexedUserMetadata.addValue( fieldName, fieldValue ); (2)
}
}
}
1 | Add an object to the userMetadata object field for the current document,
and get a reference to that object. |
2 | Add one field per user metadata entry, with the field name and field value defined by the user. Note that field names should usually be validated before that point, in order to avoid exotic characters (whitespaces, dots, …). |
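For example, a bridge could guard against unsafe keys before calling addValue. The check below is an illustrative sketch, not part of the Hibernate Search API:
private static final Pattern SAFE_FIELD_NAME = Pattern.compile( "[A-Za-z0-9_]+" );

private static String requireSafeFieldName(String fieldName) {
    // Reject names containing whitespace, dots or other characters
    // that have a special meaning in field paths
    if ( !SAFE_FIELD_NAME.matcher( fieldName ).matches() ) {
        throw new IllegalArgumentException( "Unsafe dynamic field name: " + fieldName );
    }
    return fieldName;
}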
Though rarely necessary, you can also declare templates for object fields using the objectFieldTemplate methods. |
It is also possible to add multiple fields with different types to the same object. To that end, make sure that the format of a field can be inferred from the field name. You can then declare multiple templates and assign a path pattern to each template, as shown below.
public class MultiTypeUserMetadataBinder implements PropertyBinder {
@Override
public void bind(PropertyBindingContext context) {
context.dependencies()
/* ... (declaration of dependencies, not relevant) ... */
IndexSchemaElement schemaElement = context.indexSchemaElement();
IndexSchemaObjectField userMetadataField =
schemaElement.objectField( "multiTypeUserMetadata" ); (1)
userMetadataField.fieldTemplate( (2)
"userMetadataValueTemplate_int", (3)
f -> f.asInteger().sortable( Sortable.YES ) (4)
)
.matchingPathGlob( "*_int" ); (5)
userMetadataField.fieldTemplate( (6)
"userMetadataValueTemplate_default",
f -> f.asString().analyzer( "english" )
);
context.bridge( Map.class, new Bridge( userMetadataField.toReference() ) );
}
}
1 | Declare an object field with objectField . |
2 | Declare a field template for integer with fieldTemplate . |
3 | Pass the template name. |
4 | Define the field type as integer, sortable. |
5 | Assign a path pattern to the template, so that only fields ending with _int will be considered as integers. |
6 | Declare another field template, so that fields are considered as english text if they do not match the previous template. |
@SuppressWarnings("rawtypes")
private static class Bridge implements PropertyBridge<Map> {
private final IndexObjectFieldReference userMetadataFieldReference;
private Bridge(IndexObjectFieldReference userMetadataFieldReference) {
this.userMetadataFieldReference = userMetadataFieldReference;
}
@Override
public void write(DocumentElement target, Map bridgedElement, PropertyBridgeWriteContext context) {
@SuppressWarnings("unchecked")
Map<String, Object> userMetadata = (Map<String, Object>) bridgedElement;
DocumentElement indexedUserMetadata = target.addObject( userMetadataFieldReference ); (1)
for ( Map.Entry<String, Object> entry : userMetadata.entrySet() ) {
String fieldName = entry.getKey();
Object fieldValue = entry.getValue();
indexedUserMetadata.addValue( fieldName, fieldValue ); (2)
}
}
}
1 | Add an object to the userMetadata object field for the current document,
and get a reference to that object. |
2 | Add one field per user metadata entry,
with the field name and field value defined by the user.
Note that field values should be validated before that point;
in this case, adding a field named foo_int with a value of type String
will lead to a SearchException when indexing. |
Precedence of field templates
Hibernate Search tries to match templates in the order they are declared, so you should always declare the templates with the most specific path pattern first. Templates declared on a given schema element can be matched in children of that element. For example, if you declare templates at the root of your entity (through a type bridge), these templates will be implicitly available in every single property bridge of that entity. In such cases, templates declared in property bridges will take precedence over those declared in the type bridge. |
12.9. Defining index field types
12.9.1. Basics
A specificity of Lucene-based search engines (including Elasticsearch) is that field types are much more complex than just a data type ("string", "integer", …).
When declaring a field, you must not only declare the data type, but also various characteristics that will define how the data is stored exactly: is the field sortable, is it projectable, is it analyzed and if so with which analyzer, …
Because of this complexity,
when field types must be defined in the various binders
(ValueBinder
, PropertyBinder
, TypeBinder
),
they are defined using a dedicated DSL.
The entry point to this DSL is the IndexFieldTypeFactory
.
The type factory is generally accessible through the context object passed to the binders
(context.typeFactory()
).
In the case of PropertyBinder
and TypeBinder
,
the type factory can also be passed to the lambda expression passed to the field
method
to define the field type inline.
The type factory exposes various as*()
methods,
for example asString
or asLocalDate
.
These are the first steps of the type definition DSL,
where the data type is defined.
They return other steps, from which options
can be set, such as the analyzer.
See below for an example.
IndexFieldType<String> type = context.typeFactory() (1)
.asString() (2)
.normalizer( "isbn" ) (3)
.sortable( Sortable.YES ) (3)
.toIndexFieldType(); (4)
1 | Get the IndexFieldTypeFactory from the binding context. |
2 | Define the data type. |
3 | Define options.
Available options differ based on the field type:
for example, normalizer is available for String fields,
but not for Double fields. |
4 | Get the index field type. |
12.9.2. Available data types
All available data types have a dedicated as*()
method in IndexFieldTypeFactory
.
For details, see the javadoc of IndexFieldTypeFactory
,
or the backend-specific documentation:
12.9.3. Available type options
Most of the options available in the index field type DSL are identical
to the options exposed by @*Field
annotations.
See Field annotation attributes for details about them.
Other options are explained in the following sections.
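For instance, sortability, projectability and searchability are set through methods mirroring the corresponding annotation attributes; a quick sketch:
IndexFieldType<String> type = context.typeFactory()
        .asString()
        .normalizer( "isbn" )
        .sortable( Sortable.YES ) // mirrors sortable = Sortable.YES on @KeywordField
        .projectable( Projectable.YES ) // mirrors projectable = Projectable.YES
        .searchable( Searchable.YES ) // fields are searchable by default; shown for completeness
        .toIndexFieldType();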
12.9.4. DSL converter
This section is not relevant for |
The various search DSLs expose some methods that expect a field value:
matching()
, between()
, atMost()
, missingValue().use()
, …
By default, the expected type will be the same as the data type,
i.e. String
if you called asString()
,
LocalDate
if you called asLocalDate()
,
etc.
This can be annoying when the bridge converts values from a different type when indexing. For example, if the bridge converts an enum to a string when indexing, you probably don’t want to pass a string to search predicates, but rather the enum.
By setting a DSL converter on a field type, it is possible to change the expected type of values passed to the various DSLs. See below for an example.
IndexFieldType<String> type = context.typeFactory()
.asString() (1)
.normalizer( "isbn" )
.sortable( Sortable.YES )
.dslConverter( (2)
ISBN.class, (3)
(value, convertContext) -> value.getStringValue() (4)
)
.toIndexFieldType();
1 | Define the data type as String . |
2 | Define a DSL converter that converts from ISBN to String .
This converter will be used transparently by the search DSLs. |
3 | Define the input type as ISBN by passing ISBN.class as the first parameter. |
4 | Define how to convert an ISBN to a String by passing a converter as the second parameter. |
ISBN expectedISBN = /* ... */
List<Book> result = searchSession.search( Book.class )
.where( f -> f.match().field( "isbn" )
.matching( expectedISBN ) ) (1)
.fetchHits( 20 );
1 | Thanks to the DSL converter,
predicates targeting fields using our type
accept ISBN values by default. |
DSL converters can be disabled in the various DSLs where necessary. See Type of arguments passed to the DSL. |
12.9.5. Projection converter
This section is not relevant for |
By default, the type of values returned by field projections
or aggregations
will be the same as the data type of the corresponding field,
i.e. String
if you called asString()
,
LocalDate
if you called asLocalDate()
,
etc.
This can be annoying when the bridge converts values from a different type when indexing. For example, if the bridge converts an enum to a string when indexing, you probably don’t want projections to return a string, but rather the enum.
By setting a projection converter on a field type, it is possible to change the type of values returned by field projections or aggregations. See below for an example.
IndexFieldType<String> type = context.typeFactory()
.asString() (1)
.projectable( Projectable.YES )
.projectionConverter( (2)
ISBN.class, (3)
(value, convertContext) -> ISBN.parse( value ) (4)
)
.toIndexFieldType();
1 | Define the data type as String . |
2 | Define a projection converter that converts from String to ISBN .
This converter will be used transparently by the search DSLs. |
3 | Define the converted type as ISBN by passing ISBN.class as the first parameter. |
4 | Define how to convert a String to an ISBN by passing a converter as the second parameter. |
List<ISBN> result = searchSession.search( Book.class )
.select( f -> f.field( "isbn", ISBN.class ) ) (1)
.where( f -> f.matchAll() )
.fetchHits( 20 );
1 | Thanks to the projection converter,
fields using our type are projected to an ISBN by default. |
Projection converters can be disabled in the projection DSL where necessary. See Type of projected values. |
12.10. Defining named predicates
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
When implementing a PropertyBinder
or TypeBinder
,
it is possible to assign "named predicates"
to index schema elements (either the index root or an object field).
These named predicates will then be usable through the Search DSL, referencing them by name and optionally passing parameters. The main point is that the implementation is hidden from callers: they do not need to understand how data is indexed in order to use a named predicate.
Below is a simple example using the DSL to declare an object field and assign a named predicate to that field, in a property binder.
/**
* A binder for Stock Keeping Unit (SKU) identifiers, i.e. Strings with a specific format.
*/
public class SkuIdentifierBinder implements PropertyBinder {
@Override
public void bind(PropertyBindingContext context) {
context.dependencies().useRootOnly();
IndexSchemaObjectField skuIdObjectField = context.indexSchemaElement()
.objectField( context.bridgedElement().name() );
IndexFieldType<String> skuIdPartType = context.typeFactory()
.asString().normalizer( "lowercase" ).toIndexFieldType();
context.bridge( String.class, new Bridge(
skuIdObjectField.toReference(),
skuIdObjectField.field( "departmentCode", skuIdPartType ).toReference(),
skuIdObjectField.field( "collectionCode", skuIdPartType ).toReference(),
skuIdObjectField.field( "itemCode", skuIdPartType ).toReference()
) );
skuIdObjectField.namedPredicate( (1)
"skuIdMatch", (2)
new SkuIdentifierMatchPredicateDefinition() (3)
);
}
// ... class continues below
1 | The binder defines a named predicate. Note this predicate is assigned to an object field. |
2 | The predicate name will be used to refer to this predicate when calling the named predicate. Since the predicate is assigned to an object field, callers will have to prefix the predicate name with the path to that object field. |
3 | The predicate definition will define how to create the predicate when searching. |
// ... class SkuIdentifierBinder (continued)
private static class Bridge implements PropertyBridge<String> { (1)
private final IndexObjectFieldReference skuIdObjectField;
private final IndexFieldReference<String> departmentCodeField;
private final IndexFieldReference<String> collectionCodeField;
private final IndexFieldReference<String> itemCodeField;
private Bridge(IndexObjectFieldReference skuIdObjectField,
IndexFieldReference<String> departmentCodeField,
IndexFieldReference<String> collectionCodeField,
IndexFieldReference<String> itemCodeField) {
this.skuIdObjectField = skuIdObjectField;
this.departmentCodeField = departmentCodeField;
this.collectionCodeField = collectionCodeField;
this.itemCodeField = itemCodeField;
}
@Override
public void write(DocumentElement target, String skuId, PropertyBridgeWriteContext context) {
DocumentElement skuIdObject = target.addObject( this.skuIdObjectField );(2)
// An SKU identifier is formatted this way: "<department code>.<collection code>.<item code>".
String[] skuIdParts = skuId.split( "\\." );
skuIdObject.addValue( this.departmentCodeField, skuIdParts[0] ); (3)
skuIdObject.addValue( this.collectionCodeField, skuIdParts[1] ); (3)
skuIdObject.addValue( this.itemCodeField, skuIdParts[2] ); (3)
}
}
// ... class continues below
1 | Here the bridge class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
2 | The bridge creates an object to hold the various components of the SKU identifier. |
3 | The bridge populates the various components of the SKU identifier. |
// ... class SkuIdentifierBinder (continued)
private static class SkuIdentifierMatchPredicateDefinition implements PredicateDefinition { (1)
@Override
public SearchPredicate create(PredicateDefinitionContext context) {
SearchPredicateFactory f = context.predicate(); (2)
String pattern = context.params().get( "pattern", String.class ); (3)
return f.and().with( and -> { (4)
// An SKU identifier pattern is formatted this way: "<department code>.<collection code>.<item code>".
// Each part supports * and ? wildcards.
String[] patternParts = pattern.split( "\\." );
if ( patternParts.length > 0 ) {
and.add( f.wildcard()
.field( "departmentCode" ) (5)
.matching( patternParts[0] ) );
}
if ( patternParts.length > 1 ) {
and.add( f.wildcard()
.field( "collectionCode" )
.matching( patternParts[1] ) );
}
if ( patternParts.length > 2 ) {
and.add( f.wildcard()
.field( "itemCode" )
.matching( patternParts[2] ) );
}
} ).toPredicate(); (6)
}
}
}
1 | The predicate definition must implement the PredicateDefinition interface.
Here the predicate definition class is nested in the binder class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
2 | The context passed to the definition exposes the predicate factory, which is the entry point to the predicate DSL, used to create predicates. |
3 | The definition can access parameters that are passed when calling the named predicate. The params() method on the context gives access to these parameters; here the definition retrieves the "pattern" parameter as a String. |
4 | The definition uses the predicate factory to create predicates. In this example, this implementation transforms a pattern with a custom format into three patterns, one for each field populated by the bridge. |
5 | Be careful: the search predicate factory expects paths
relative to the object field where the named predicate was registered.
Here the path departmentCode will be understood as <path to the object field>.departmentCode .
See also Field paths. |
6 | Do not forget to call toPredicate() to return a SearchPredicate instance. |
@Entity
@Indexed
public class ItemStock {
@Id
@PropertyBinding(binder = @PropertyBinderRef(type = SkuIdentifierBinder.class)) (1)
private String skuId;
private int amountInStock;
// Getters and setters
// ...
}
1 | Apply the bridge using the @PropertyBinding annotation.
The predicate will be available in the Search DSL,
as shown in named: call a predicate defined in the mapping. |
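For reference, a query can then invoke the named predicate through the Search DSL. Below is a minimal sketch; the pattern value is purely illustrative:
List<ItemStock> hits = searchSession.search( ItemStock.class )
        .where( f -> f.named( "skuId.skuIdMatch" ) // Predicate name prefixed with the object field path.
                .param( "pattern", "*.WI2012.*" ) ) // The "pattern" parameter read by the predicate definition.
        .fetchHits( 20 );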
12.11. Assigning default bridges with the bridge resolver
12.11.1. Basics
Both the @*Field
annotations
and the @DocumentId
annotation
support a broad range of standard types by default,
without needing to tell Hibernate Search how to convert values to something that can be indexed.
Under the hood, the support for default types is handled by the bridge resolver.
For example, when a property is mapped with @GenericField
and neither @GenericField.valueBridge
nor @GenericField.valueBinder
is set,
Hibernate Search will resolve the type of this property,
then pass it to the bridge resolver,
which will return an appropriate bridge, or fail if there isn’t any.
It is possible to customize the bridge resolver,
to override existing default bridges (indexing java.util.Date
differently, for example)
or to define default bridges for additional types (a geospatial type from an external library, for example).
To that end, define a mapping configurer as explained in Programmatic mapping, then define bridges as shown below:
public class MyDefaultBridgesConfigurer implements HibernateOrmSearchMappingConfigurer {
@Override
public void configure(HibernateOrmMappingConfigurationContext context) {
context.bridges().exactType( MyCoordinates.class )
.valueBridge( new MyCoordinatesBridge() ); (1)
context.bridges().exactType( MyProductId.class )
.identifierBridge( new MyProductIdBridge() ); (2)
context.bridges().exactType( ISBN.class )
.valueBinder( new ValueBinder() { (3)
@Override
public void bind(ValueBindingContext<?> context) {
context.bridge( ISBN.class, new ISBNValueBridge(),
context.typeFactory().asString().normalizer( "isbn" ) );
}
} );
}
}
1 | Use our custom bridge (MyCoordinatesBridge ) by default when a property of type MyCoordinates
is mapped to an index field (e.g. with @GenericField ). |
2 | Use our custom bridge (MyProductIdBridge ) by default when a property of type MyProductId
is mapped to a document identifier (e.g. with @DocumentId ). |
3 | It’s also possible to specify a binder instead of a bridge, so that additional settings can be tuned. Here we’re assigning the "isbn" normalizer every time we map an ISBN to an index field. |
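For the configurer to take effect, it must be registered with Hibernate Search; with the Hibernate ORM mapper, this is typically done through a configuration property. A sketch, assuming the configurer above lives in the com.example package:
hibernate.search.mapping.configurer = class:com.example.MyDefaultBridgesConfigurer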
12.11.2. Assigning a single binder to multiple types
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
For more advanced use cases, it is also possible to assign a single binder to subtypes of a given type. This is useful when many types should be indexed similarly.
Below is an example where enums are not indexed as their .name()
(which is the default),
but instead are indexed as their label retrieved from an external service.
context.bridges().subTypesOf( Enum.class ) (1)
.valueBinder( new ValueBinder() {
@Override
public void bind(ValueBindingContext<?> context) {
Class<?> enumType = context.bridgedElement().rawType(); (2)
doBind( context, enumType );
}
private <T> void doBind(ValueBindingContext<?> context, Class<T> enumType) {
BeanHolder<EnumLabelService> serviceHolder = context.beanResolver()
.resolve( EnumLabelService.class, BeanRetrieval.ANY ); (3)
context.bridge(
enumType,
new EnumLabelBridge<>( enumType, serviceHolder )
); (4)
}
} );
1 | Match all subtypes of Enum . |
2 | Retrieve the type of the element being bridged. |
3 | Retrieve an external service (through CDI/Spring). |
4 | Create and assign the bridge. |
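For completeness, below is one possible shape for the EnumLabelBridge used above. This is a minimal sketch: the EnumLabelService and its label(…) method are hypothetical, and error handling is omitted.
public class EnumLabelBridge<V> implements ValueBridge<V, String> {
    private final Class<V> enumType;
    private final BeanHolder<EnumLabelService> serviceHolder;

    public EnumLabelBridge(Class<V> enumType, BeanHolder<EnumLabelService> serviceHolder) {
        this.enumType = enumType;
        this.serviceHolder = serviceHolder;
    }

    @Override
    public String toIndexedValue(V value, ValueBridgeToIndexedValueContext context) {
        // Index the label retrieved from the (hypothetical) external service
        // instead of the enum constant's name.
        return value == null ? null
                : serviceHolder.get().label( enumType, ( (Enum<?>) value ).name() );
    }

    @Override
    public void close() {
        // Release the service bean along with the bridge.
        serviceHolder.close();
    }
}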
12.12. Projection binder
12.12.1. Basics
Projection binders are an advanced feature that application developers generally shouldn’t need to bother with.
Before resorting to custom projection binders,
consider relying on explicit projection constructor parameter mapping
using built-in annotations such as @IdProjection ,
@FieldProjection ,
@ObjectProjection , …
|
A projection binder is a pluggable component that implements
the binding of a constructor parameter to a projection.
It is applied to a parameter of a projection constructor
with the @ProjectionBinding
annotation
or with a custom annotation.
The projection binder can inspect the constructor parameter, and is expected to assign a projection definition to that constructor parameter, so that whenever the projection constructor is invoked, Hibernate Search will pass the result of that projection through that constructor parameter.
Implementing a projection binder requires two components:
-
A custom implementation of
ProjectionBinder
, to bind the projection definition to the parameter at bootstrap. This involves inspecting the constructor parameter if necessary, and instantiating the projection definition. -
A custom implementation of
ProjectionDefinition
, to instantiate the projection at runtime. This involves using the projection DSL and returning the resultingSearchProjection
.
Below is an example of a custom projection binder that binds
a parameter of type String
to a projection to the title
field in the index.
A similar result can be achieved without a custom projection binder. This is just to keep the example simple. |
Implementing a ProjectionBinder
public class MyFieldProjectionBinder implements ProjectionBinder { (1)
@Override
public void bind(ProjectionBindingContext context) { (2)
context.definition( (3)
String.class, (4)
new MyProjectionDefinition() (5)
);
}
// ... class continues below
1 | The binder must implement the ProjectionBinder interface. |
2 | Implement the bind method in the binder. |
3 | Call context.definition(…) to define the projection to use. |
4 | Pass the expected type of the constructor parameter. |
5 | Pass the projection definition instance, which will create the projection at runtime. |
// ... class MyFieldProjectionBinder (continued)
private static class MyProjectionDefinition (1)
implements ProjectionDefinition<String> { (2)
@Override
public SearchProjection<String> create(SearchProjectionFactory<?, ?> factory,
ProjectionDefinitionContext context) {
return factory.field( "title", String.class ) (3)
.toProjection(); (4)
}
}
}
1 | Here the definition class is nested in the binder class, because it is more convenient, but you are obviously free to implement it as you wish: as a lambda expression, in a separate Java file… |
2 | The definition must implement the ProjectionDefinition interface.
One generic type argument must be provided: the type of the projected value,
i.e. the type of the constructor parameter. |
3 | Use the provided SearchProjectionFactory and the projection DSL
to define the appropriate projection. |
4 | Get the resulting projection by calling .toProjection() and return it. |
@ProjectionConstructor
public record MyBookProjection(
@ProjectionBinding(binder = @ProjectionBinderRef( (1)
type = MyFieldProjectionBinder.class
))
String title) {
}
1 | Apply the binder using the @ProjectionBinding annotation. |
The book projection can then be used as any custom projection type,
and its title
parameter will be initialized with values returned by the custom projection definition:
List<MyBookProjection> hits = searchSession.search( Book.class )
.select( MyBookProjection.class )
.where( f -> f.matchAll() )
.fetchHits( 20 );
12.12.2. Multi-valued projections
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
You can call .multi()
on the context passed to the projection binder
in order to discover whether the constructor parameter being bound is multi-valued
(according to the same rules as implicit inner projection inference),
and to bind a multi-valued projection.
ProjectionBinder supporting multi-valued projections
public class MyFieldProjectionBinder implements ProjectionBinder {
@Override
public void bind(ProjectionBindingContext context) {
Optional<? extends ProjectionBindingMultiContext> multi = context.multi(); (1)
if ( multi.isPresent() ) {
multi.get().definition( String.class, new MyProjectionDefinition() ); (2)
}
else {
throw new RuntimeException( "This binder only supports multi-valued constructor parameters" ); (3)
}
}
private static class MyProjectionDefinition
implements ProjectionDefinition<List<String>> { (4)
@Override
public SearchProjection<List<String>> create(SearchProjectionFactory<?, ?> factory,
ProjectionDefinitionContext context) {
return factory.field( "tags", String.class )
.multi() (4)
.toProjection();
}
}
}
1 | multi() returns an optional that contains a context
if and only if the constructor parameter is considered multi-valued. |
2 | Call multi.definition(…) to define the projection to use. |
3 | Here we’re failing for single-valued constructor parameters, but we could theoretically fall back to a single-valued projection. |
4 | The projection definition, being multi-valued, must implement ProjectionDefinition<List<T>> , where T is the expected type of projected values, and must configure returned projections accordingly. |
@ProjectionConstructor
public record MyBookProjection(
@ProjectionBinding(binder = @ProjectionBinderRef( (1)
type = MyFieldProjectionBinder.class
))
List<String> tags) {
}
1 | Apply the binder using the @ProjectionBinding annotation. |
The book projection can then be used as any custom projection type,
and its tags
parameter will be initialized with values returned by the custom projection definition:
List<MyBookProjection> hits = searchSession.search( Book.class )
.select( MyBookProjection.class )
.where( f -> f.matchAll() )
.fetchHits( 20 );
12.12.3. Composing projection constructors
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
You can call .createObjectDefinition( "someFieldPath", SomeType.class )
on the context passed to the projection binder
in order to retrieve the definition of an object projection
based on the projection constructor mapping of SomeType
.
This effectively allows using projection constructors within projection binders,
simply by passing the resulting definition to .definition(…)
or by delegating to it in a custom projection definition.
Other methods exposed on the binding context work similarly:
-
.createObjectDefinitionMulti(…)
returns a multi-valued object projection definition. -
.createCompositeDefinition(…)
returns a (single-valued) composite projection definition (which, on contrary to an object projection, is not bound to an object field in the index).
Below is an example using .createObjectDefinition(…)
to delegate to another projection constructor.
A similar result can be achieved without a custom projection binder,
simply by relying on implicit inner projection inference
or by using @ObjectProjection .
This is just to keep the example simple.
|
ProjectionBinder that delegates to a projection constructor
public class MyObjectFieldProjectionBinder implements ProjectionBinder {
@Override
public void bind(ProjectionBindingContext context) {
var authorProjection = context.createObjectDefinition( (1)
"author", (2)
MyBookProjection.MyAuthorProjection.class, (3)
TreeFilterDefinition.includeAll() (4)
);
context.definition( (5)
MyBookProjection.MyAuthorProjection.class,
authorProjection
);
}
}
1 | Call createObjectDefinition(…) to create a definition to delegate to. |
2 | Pass the name of the object field to project on. |
3 | Pass the projected type. |
4 | Pass the filter for nested projections; here we’re not filtering at all.
This controls the same feature as includePaths /excludePaths /includeDepths in @ObjectProjection . |
5 | Call definition(…) and pass the definition created just above. |
@ProjectionConstructor
public record MyBookProjection(
@ProjectionBinding(binder = @ProjectionBinderRef( (1)
type = MyObjectFieldProjectionBinder.class
))
MyAuthorProjection author) {
@ProjectionConstructor (2)
public record MyAuthorProjection(String name) {
}
}
1 | Apply the binder using the @ProjectionBinding annotation. |
2 | Make sure the projected type passed to createObjectDefinition(…) has a projection constructor. |
The book projection can then be used as any custom projection type,
and its author
parameter will be initialized with values returned by the custom projection definition:
List<MyBookProjection> hits = searchSession.search( Book.class )
.select( MyBookProjection.class )
.where( f -> f.matchAll() )
.fetchHits( 20 );
12.12.4. Passing parameters
There are two ways to pass parameters to projection binders:
-
One is (mostly) limited to string parameters, but is trivial to implement.
-
The other can allow any type of parameters, but requires you to declare your own annotations.
Simple, string parameters
You can pass string parameters to the @ProjectionBinderRef
annotation and then use them later in the binder:
ProjectionBinder using the @ProjectionBinderRef annotation
public class MyFieldProjectionBinder implements ProjectionBinder {
@Override
public void bind(ProjectionBindingContext context) {
String fieldName = context.param( "fieldName", String.class ); (1)
context.definition(
String.class,
new MyProjectionDefinition( fieldName ) (2)
);
}
private static class MyProjectionDefinition
implements ProjectionDefinition<String> {
private final String fieldName;
public MyProjectionDefinition(String fieldName) { (2)
this.fieldName = fieldName;
}
@Override
public SearchProjection<String> create(SearchProjectionFactory<?, ?> factory,
ProjectionDefinitionContext context) {
return factory.field( fieldName, String.class ) (3)
.toProjection();
}
}
}
1 | Use the binding context to get the parameter value. The param method will throw an exception if the parameter has not been defined; alternatively, paramOptional returns an Optional that is empty if the parameter has not been defined. |
2 | Pass the parameter value as an argument to the definition constructor. |
3 | Use the parameter value in the projection definition. |
@ProjectionConstructor
public record MyBookProjection(
@ProjectionBinding(binder = @ProjectionBinderRef(
type = MyFieldProjectionBinder.class,
params = @Param( name = "fieldName", value = "title" )
)) String title) { (1)
}
1 | Define the binder to use on the constructor parameter,
setting the fieldName parameter. |
Parameters with custom annotations
You can pass parameters of any type to the binder by defining a custom annotation with attributes:
ProjectionBinder using a custom annotation
@Retention(RetentionPolicy.RUNTIME) (1)
@Target({ ElementType.PARAMETER }) (2)
@MethodParameterMapping(processor = @MethodParameterMappingAnnotationProcessorRef( (3)
type = MyFieldProjectionBinding.Processor.class
))
@Documented (4)
public @interface MyFieldProjectionBinding {
String fieldName() default ""; (5)
class Processor (6)
implements MethodParameterMappingAnnotationProcessor<MyFieldProjectionBinding> { (7)
@Override
public void process(MethodParameterMappingStep mapping, MyFieldProjectionBinding annotation,
MethodParameterMappingAnnotationProcessorContext context) {
MyFieldProjectionBinder binder = new MyFieldProjectionBinder(); (8)
if ( !annotation.fieldName().isEmpty() ) { (9)
binder.fieldName( annotation.fieldName() );
}
mapping.projection( binder ); (10)
}
}
}
1 | Define an annotation with RUNTIME retention.
Any other retention policy will cause the annotation to be ignored by Hibernate Search. |
2 | Since we will be mapping a projection definition to a projection constructor, allow the annotation to target method parameters (constructors are methods). |
3 | Mark this annotation as a method parameter mapping, and instruct Hibernate Search to apply the given processor whenever it finds this annotation. It is also possible to reference the processor by its CDI/Spring bean name. |
4 | Optionally, mark the annotation as documented, so that it is included in the javadoc of your entities. |
5 | Define an attribute of type String to specify the field name. |
6 | Here the processor class is nested in the annotation class, because it is more convenient, but you are obviously free to implement it in a separate Java file. |
7 | The processor must implement the MethodParameterMappingAnnotationProcessor interface,
setting its generic type argument to the type of the corresponding annotation. |
8 | In the annotation processor, instantiate the binder. |
9 | Process the annotation attributes and pass the data to the binder.
Here we’re using a setter, but passing the data through the constructor would work, too. |
10 | Apply the binder to the constructor parameter. |
public class MyFieldProjectionBinder implements ProjectionBinder {
private String fieldName = "name";
public MyFieldProjectionBinder fieldName(String fieldName) { (1)
this.fieldName = fieldName;
return this;
}
@Override
public void bind(ProjectionBindingContext context) {
context.definition(
String.class,
new MyProjectionDefinition( fieldName ) (2)
);
}
private static class MyProjectionDefinition
implements ProjectionDefinition<String> {
private final String fieldName;
public MyProjectionDefinition(String fieldName) { (2)
this.fieldName = fieldName;
}
@Override
public SearchProjection<String> create(SearchProjectionFactory<?, ?> factory,
ProjectionDefinitionContext context) {
return factory.field( fieldName, String.class ) (3)
.toProjection();
}
}
}
1 | Implement setters in the binder. Alternatively, we could expose a parameterized constructor. |
2 | In the bind method, use the value of parameters.
Here we pass the parameter value as an argument to the definition constructor. |
3 | Use the parameter value in the projection definition. |
@ProjectionConstructor
public record MyBookProjection(
@MyFieldProjectionBinding(fieldName = "title") (1)
String title) {
}
1 | Apply the binder using its custom annotation,
setting the fieldName parameter. |
12.12.5. Injecting beans into the binder
With compatible frameworks, Hibernate Search supports injecting beans into:
-
the
MethodParameterMappingAnnotationProcessor
if you use custom annotations. -
the
ProjectionBinder
if you use the@ProjectionBinding
annotation.
This only applies to beans instantiated
through Hibernate Search’s bean resolution.
As a rule of thumb, if you need to call new MyBinder() explicitly at some point,
the binder won’t get auto-magically injected.
|
The context passed to the projection binder’s bind method
also exposes a beanResolver()
method to access the bean resolver and instantiate beans explicitly.
See Bean injection for more details.
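For instance, a binder that cannot benefit from injection can still resolve a bean explicitly. A minimal sketch, where MyLabelService and MyServiceAwareProjectionDefinition are hypothetical:
public class MyServiceAwareProjectionBinder implements ProjectionBinder {
    @Override
    public void bind(ProjectionBindingContext context) {
        // Resolve the bean explicitly through Hibernate Search's bean resolver.
        BeanHolder<MyLabelService> serviceHolder = context.beanResolver()
                .resolve( MyLabelService.class, BeanRetrieval.ANY );
        // Pass the resolved bean to a (hypothetical) definition that uses it at query time.
        context.definition( String.class, new MyServiceAwareProjectionDefinition( serviceHolder ) );
    }
}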
12.12.6. Programmatic mapping
You can apply a projection binder through the programmatic mapping too: just pass an instance of the binder to .projection(…). You can pass arguments either through the binder’s constructor, or through setters.
Applying a ProjectionBinder with .projection(…)
TypeMappingStep myBookProjectionMapping = mapping.type( MyBookProjection.class );
myBookProjectionMapping.mainConstructor().projectionConstructor();
myBookProjectionMapping.mainConstructor().parameter( 0 )
.projection( new MyFieldProjectionBinder().fieldName( "title" ) );
12.12.7. Other incubating features
Features detailed below are incubating: they are still under active development. The usual compatibility policy does not apply: the contract of incubating elements (e.g. types, methods, configuration properties, etc.) may be altered in a backward-incompatible way — or even removed — in subsequent releases. You are encouraged to use incubating features so the development team can get feedback and improve them, but you should be prepared to update code which relies on them as needed. |
The context passed to the projection binder’s bind
method
exposes a constructorParameter()
method that gives access to metadata about the constructor parameter being bound.
The metadata can be used to inspect the constructor parameter in detail:
-
Getting the name of the constructor parameter.
-
Checking the type of the constructor parameter.
Similarly, the context used for multi-valued projection binding
exposes a containerElement()
method that gives access to the type of elements
of the (multi-valued) constructor parameter type.
See the javadoc for more information.
The name of the constructor parameter is only available:
-
For the canonical constructor of record types, regardless of compiler flags.
-
For constructors of non-record types, or non-canonical constructors of record types, only if the type was compiled with the -parameters compiler flag.
|
Below is an example of the simplest use of this metadata, getting the constructor parameter name and using it as a field name.
ProjectionBinder using the constructor parameter name
public class MyFieldProjectionBinder implements ProjectionBinder {
@Override
public void bind(ProjectionBindingContext context) {
var constructorParam = context.constructorParameter(); (1)
context.definition(
String.class,
new MyProjectionDefinition( constructorParam.name().orElseThrow() ) (2)
);
}
private static class MyProjectionDefinition
implements ProjectionDefinition<String> {
private final String fieldName;
private MyProjectionDefinition(String fieldName) {
this.fieldName = fieldName;
}
@Override
public SearchProjection<String> create(SearchProjectionFactory<?, ?> factory,
ProjectionDefinitionContext context) {
return factory.field( fieldName, String.class ) (3)
.toProjection();
}
}
}
1 | Use the binding context to get the constructor parameter. |
2 | Pass the name of the constructor parameter to the projection definition. |
3 | Use the name of the constructor parameter as the projected field name. |
@ProjectionConstructor
public record MyBookProjection(
@ProjectionBinding(binder = @ProjectionBinderRef( (1)
type = MyFieldProjectionBinder.class
))
String title) {
}
1 | Apply the binder using the @ProjectionBinding annotation. |
13. Managing the index schema
13.1. Basics
Before indexes can be used for indexing or searching, they must be created on disk (Lucene) or in the remote cluster (Elasticsearch). With Elasticsearch in particular, this creation may not be obvious, since it requires describing the schema for each index, which includes in particular:
-
the definition of every analyzer or normalizer used in this index;
-
the definition of every single field used in this index, including in particular its type, the analyzer assigned to it, whether it requires doc values, etc.
Hibernate Search has all the necessary information to generate this schema automatically, so it is possible to delegate the task of managing the schema to Hibernate Search.
13.2. Automatic schema management on startup/shutdown
The property hibernate.search.schema_management.strategy
can be set to one of the following values
in order to define what to do with the indexes and their schema on startup and shutdown.
Strategy | Definition | Warnings
---|---|---
none | A strategy that does not do anything on startup or shutdown. Indexes and their schema will not be created nor deleted on startup or shutdown. Hibernate Search will not even check that the indexes actually exist. | With Elasticsearch, indexes and their schema will have to be created explicitly before startup.
validate | A strategy that does not change indexes nor their schema, but checks that indexes exist and validates their schema on startup. An exception will be thrown on startup if indexes are missing, or if their schema does not match the requirements of the Hibernate Search mapping. "Compatible" differences such as extra fields are ignored. | Indexes and their schema will have to be created explicitly before startup. With the Lucene backend, validation is limited to checking that the indexes exist, because local Lucene indexes don’t have a schema.
create | A strategy that creates missing indexes and their schema on startup, but does not touch existing indexes and assumes their schema is correct without validating it. |
create-or-validate | A strategy that creates missing indexes and their schema on startup, and validates the schema of existing indexes. With the Elasticsearch backend only, an exception will be thrown on startup if some indexes already exist but their schema does not match the requirements of the Hibernate Search mapping: missing fields, fields with incorrect type, missing analyzer definitions or normalizer definitions, … "Compatible" differences such as extra fields are ignored. | With the Lucene backend, validation is limited to checking that the indexes exist, because local Lucene indexes don’t have a schema.
create-or-update | A strategy that creates missing indexes and their schema on startup, and updates the schema of existing indexes if possible. | This strategy is unfit for production environments, due to several limitations, including the impossibility to change the type of an existing field and the requirement to close indexes while updating analyzer definitions (which is not possible at all on AWS). With the Lucene backend, schema update is a no-op, because local Lucene indexes don’t have a schema.
drop-and-create | A strategy that drops existing indexes and re-creates them and their schema on startup. | All indexed data will be lost on startup.
drop-and-create-and-drop | A strategy that drops existing indexes and re-creates them and their schema on startup, then drops the indexes on shutdown. | All indexed data will be lost on startup and shutdown.
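For example, a test environment that rebuilds its indexes from scratch on every run might use the drop-and-create-and-drop strategy; a sketch of the corresponding configuration property:
hibernate.search.schema_management.strategy = drop-and-create-and-drop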
13.3. Manual schema management
Schema management does not have to happen automatically on startup and shutdown.
Using the SearchSchemaManager
interface,
it is possible to trigger schema management operations explicitly
after Hibernate Search has started.
The most common use case is to set the automatic schema management strategy to none, and to trigger schema management manually instead. After schema management operations are complete, you will often want to populate indexes. To that end, use the mass indexer. |
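For instance, the following sketch drops and re-creates the index for a Book entity, then repopulates it, assuming an open SearchSession and an indexed Book entity:
SearchSchemaManager schemaManager = searchSession.schemaManager( Book.class ); // Target the Book index only.
schemaManager.dropAndCreate(); // Drop the index if it exists, then re-create it and its schema.
searchSession.massIndexer( Book.class ).startAndWait(); // Repopulate the index; throws InterruptedException.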
The SearchSchemaManager
interface exposes the following methods.
Method | Definition | Warnings
---|---|---
validate() | Does not change indexes nor their schema, but checks that indexes exist and validates their schema. | With the Lucene backend, validation is limited to checking that the indexes exist, because local Lucene indexes don’t have a schema.
createIfMissing() | Creates missing indexes and their schema, but does not touch existing indexes and assumes their schema is correct without validating it. |
createOrValidate() | Creates missing indexes and their schema, and validates the schema of existing indexes. | With the Lucene backend, validation is limited to checking that the indexes exist, because local Lucene indexes don’t have a schema.
createOrUpdate() | Creates missing indexes and their schema, and updates the schema of existing indexes if possible. | With the Elasticsearch backend, updating a schema may fail, and updating analyzer definitions may require closing indexes (which is not possible at all on Amazon OpenSearch Service). With the Lucene backend, schema update is a no-op (it just creates missing indexes), because local Lucene indexes don’t have a schema.
dropIfExisting() | Drops existing indexes. |