All the metadata information needed to index entities is described through annotations. There is no need for xml mapping files. In fact there is currently no xml configuration option available (see HSEARCH-210). You can still use Hibernate mapping files for the basic Hibernate configuration, but the Hibernate Search specific configuration has to be expressed via annotations.
First, we must declare a persistent class as indexable. This is
done by annotating the class with @Indexed
(all
entities not annotated with @Indexed
will be ignored
by the indexing process):
Example 4.1. Making a class indexable using the
@Indexed
annotation
@Entity
@Indexed(index="indexes/essays")
public class Essay {
...
}
The index
attribute tells Hibernate what the
Lucene directory name is (usually a directory on your file system). It
is recommended to define a base directory for all Lucene indexes using
the hibernate.search.default.indexBase
property in
your configuration file. Alternatively you can specify a base directory
per indexed entity by specifying
hibernate.search.<index>.indexBase,
where
<index>
is the fully qualified classname of the
indexed entity. Each entity instance will be represented by a Lucene
Document
inside the given index (aka
Directory).
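As a hedged sketch (the path is hypothetical), the base directory could be set on a programmatically built Configuration using the property named above:
org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
// all Lucene indexes will be created under this base directory
cfg.setProperty( "hibernate.search.default.indexBase", "/var/lucene/indexes" );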
For each property (or attribute) of your entity, you have the
ability to describe how it will be indexed. The default (no annotation
present) means that the property is ignored by the indexing process.
@Field declares a property as indexed. When indexing an element to a Lucene document you can specify how it is indexed:
name: describes under which name the property should be stored in the Lucene Document. The default value is the property name (following the JavaBeans convention).
store: describes whether or not the property is stored in the Lucene index. You can store the value with Store.YES (consuming more space in the index but allowing projection, see Section 5.1.2.5, “Projection” for more information), store it in a compressed way with Store.COMPRESS (this does consume more CPU), or avoid any storage with Store.NO (this is the default value). When a property is stored, you can retrieve its original value from the Lucene Document. This is not related to whether the element is indexed or not.
index: describes how the element is indexed and the type of information stored. The different values are Index.NO (no indexing, i.e. cannot be found by a query), Index.TOKENIZED (use an analyzer to process the property), Index.UN_TOKENIZED (no analyzer pre-processing) and Index.NO_NORMS (do not store the normalization data). The default value is TOKENIZED.
termVector: describes collections of term-frequency pairs. This attribute enables term vectors to be stored during indexing so that they are available within documents. The default value is TermVector.NO.
The different values of this attribute are:
Value | Definition |
---|---|
TermVector.YES | Store the term vectors of each document. This produces two synchronized arrays, one contains document terms and the other contains the term's frequency. |
TermVector.NO | Do not store term vectors. |
TermVector.WITH_OFFSETS | Store the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms. |
TermVector.WITH_POSITIONS | Store the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document. |
TermVector.WITH_POSITION_OFFSETS | Store the term vector, token position and offset information. This is a combination of the YES, WITH_OFFSETS and WITH_POSITIONS. |
Whether or not you want to store the original data in the index depends on how you wish to use the index query result. For a regular Hibernate Search usage storing is not necessary. However you might want to store some fields to subsequently project them (see Section 5.1.2.5, “Projection” for more information).
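As a hedged illustration of why you might choose Store.YES, the following sketch projects a stored field directly from the index instead of loading the entity; it assumes an entity with a stored field named "Abstract" (such as the Essay mapping shown later in this chapter) and an already built luceneQuery:
org.hibernate.search.FullTextQuery query =
        fullTextSession.createFullTextQuery( luceneQuery, Essay.class );
query.setProjection( "Abstract" );
List results = query.list(); // each element is an Object[] holding the projected value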
Whether or not you want to tokenize a property depends on whether you wish to search the element as is, or by the words it contains. It makes sense to tokenize a text field, but probably not a date field.
Fields used for sorting must not be tokenized.
Finally, the id property of an entity is a special property used by Hibernate Search to ensure index uniqueness of a given entity. By design, an id has to be stored and must not be tokenized. To mark a property as the index id, use the @DocumentId annotation.
If you are using Hibernate Annotations and you have specified @Id you
can omit @DocumentId. The chosen entity id will also be used as document
id.
Example 4.2. Adding @DocumentId and @Field annotations to an indexed entity
@Entity
@Indexed(index="indexes/essays")
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", index=Index.TOKENIZED, store=Store.YES)
    public String getSummary() { return summary; }

    @Lob
    @Field(index=Index.TOKENIZED)
    public String getText() { return text; }
}
Example 4.2, “Adding @DocumentId and @Field annotations to an indexed entity” defines an index with three fields: id, Abstract and text. Note that by default the field name is decapitalized, following the JavaBean specification.
Sometimes one has to map a property multiple times per index, with
slightly different indexing strategies. For example, sorting a query by
field requires the field to be UN_TOKENIZED
. If one wants to search by words in this property and still sort it, one needs to index it twice - once tokenized and once untokenized. @Fields allows you to achieve this goal.
Example 4.3. Using @Fields to map a property multiple times
@Entity @Indexed(index = "Book" ) public class Book { @Fields( { @Field(index = Index.TOKENIZED), @Field(name = "summary_forSort", index = Index.UN_TOKENIZED, store = Store.YES) } ) public String getSummary() { return summary; } ... }
In Example 4.3, “Using @Fields to map a property multiple times” the field
summary
is indexed twice, once as
summary
in a tokenized way, and once as
summary_forSort
in an untokenized way. @Field
supports 2 attributes useful when @Fields is used:
analyzer: defines a @Analyzer annotation per field rather than per property
bridge: defines a @FieldBridge annotation per field rather than per property
See below for more information about analyzers and field bridges.
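To make the intent of Example 4.3 concrete, here is a hedged sketch of how the two fields could be used together at query time: the tokenized summary field for matching and the untokenized summary_forSort field for sorting. It assumes an already built luceneQuery targeting the Book entity:
org.hibernate.search.FullTextQuery query =
        fullTextSession.createFullTextQuery( luceneQuery, Book.class );
query.setSort( new org.apache.lucene.search.Sort(
        new org.apache.lucene.search.SortField( "summary_forSort" ) ) );
List results = query.list();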
Associated objects as well as embedded objects can be indexed as
part of the root entity index. This is useful if you expect to search a
given entity based on properties of associated objects. In the following
example the aim is to return places where the associated city is Atlanta
(In the Lucene query parser language, it would translate into
address.city:Atlanta
).
Example 4.4. Using @IndexedEmbedded to index associations
@Entity @Indexed public class Place { @Id @GeneratedValue @DocumentId private Long id; @Field( index = Index.TOKENIZED ) private String name; @OneToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } ) @IndexedEmbedded private Address address; .... } @Entity public class Address { @Id @GeneratedValue private Long id; @Field(index=Index.TOKENIZED) private String street; @Field(index=Index.TOKENIZED) private String city; @ContainedIn @OneToMany(mappedBy="address") private Set<Place> places; ... }
In this example, the place fields will be indexed in the
Place
index. The Place
index
documents will also contain the fields address.id
,
address.street
, and address.city
which you will be able to query. This is enabled by the
@IndexedEmbedded
annotation.
Be careful. Because the data is denormalized in the Lucene index
when using the @IndexedEmbedded
technique,
Hibernate Search needs to be aware of any change in the
Place
object and any change in the
Address
object to keep the index up to date. To
make sure the Place Lucene document is updated when its Address changes, you need to mark the other side of the bidirectional relationship with @ContainedIn.
@ContainedIn
is only useful on associations
pointing to entities as opposed to embedded (collection of)
objects.
Let's make our example a bit more complex:
Example 4.5. Nested usage of @IndexedEmbedded
and
@ContainedIn
@Entity @Indexed public class Place { @Id @GeneratedValue @DocumentId private Long id; @Field( index = Index.TOKENIZED ) private String name; @OneToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } ) @IndexedEmbedded private Address address; .... } @Entity public class Address { @Id @GeneratedValue private Long id; @Field(index=Index.TOKENIZED) private String street; @Field(index=Index.TOKENIZED) private String city; @IndexedEmbedded(depth = 1, prefix = "ownedBy_") private Owner ownedBy; @ContainedIn @OneToMany(mappedBy="address") private Set<Place> places; ... } @Embeddable public class Owner { @Field(index = Index.TOKENIZED) private String name; ... }
Any @*ToMany, @*ToOne
and
@Embedded
attribute can be annotated with
@IndexedEmbedded
. The attributes of the associated
class will then be added to the main entity index. In the previous
example, the index will contain the following fields
id
name
address.street
address.city
address.ownedBy_name
The default prefix is propertyName. (note the trailing dot), following the traditional object navigation convention. You can override it using the prefix attribute as shown on the ownedBy property.
The prefix cannot be set to the empty string.
The depth property is necessary when the object graph contains a cyclic dependency of classes (not instances), for example if Owner points back to Place. Hibernate Search will stop including indexed embedded attributes after reaching the expected depth (or when the object graph boundaries are reached). A class having a self reference is an example of cyclic dependency. In our example, because depth is set to 1, any @IndexedEmbedded attribute in Owner (if any) will be ignored.
Using @IndexedEmbedded
for object associations
allows you to express queries such as:
Return places where name contains JBoss and where address city is Atlanta. In Lucene query this would be
+name:jboss +address.city:atlanta
Return places where name contains JBoss and where the owner's name contains Joe. In Lucene query this would be
+name:jboss +address.ownedBy_name:joe
In a way it mimics the relational join operation in a more efficient way (at the cost of data duplication). Remember that, out of the box, Lucene indexes have no notion of association, the join operation is simply non-existent. It might help to keep the relational model normalized while benefiting from the full text index speed and feature richness.
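As a hedged sketch of how such a query could be built in practice (assuming the Place/Address mapping of Example 4.4 and the default StandardAnalyzer), the Lucene query parser handles the dotted embedded field names transparently:
org.apache.lucene.queryParser.QueryParser parser = new org.apache.lucene.queryParser.QueryParser(
        "name", new org.apache.lucene.analysis.standard.StandardAnalyzer() );
org.apache.lucene.search.Query luceneQuery = parser.parse( "+name:jboss +address.city:atlanta" );
List places = fullTextSession.createFullTextQuery( luceneQuery, Place.class ).list();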
An associated object can itself (but does not have to) be @Indexed.
When @IndexedEmbedded points to an entity, the association has to be bidirectional and the other side has to be annotated with
@ContainedIn
(as seen in the previous example). If
not, Hibernate Search has no way to update the root index when the
associated entity is updated (in our example, a Place
index document has to be updated when the associated
Address
instance is updated).
Sometimes, the object type annotated by
@IndexedEmbedded
is not the object type targeted
by Hibernate and Hibernate Search. This is especially the case when
interfaces are used in lieu of their implementation. For this reason you
can override the object type targeted by Hibernate Search using the
targetElement
parameter.
Example 4.6. Using the targetElement
property of
@IndexedEmbedded
@Entity
@Indexed
public class Address {
@Id
@GeneratedValue
@DocumentId
private Long id;
@Field(index= Index.TOKENIZED)
private String street;
@IndexedEmbedded(depth = 1, prefix = "ownedBy_", targetElement = Owner.class)
@Target(Owner.class)
private Person ownedBy;
...
}
@Embeddable
public class Owner implements Person { ... }
Lucene has the notion of boost factor. It's a way to give more weight to a field or to an indexed element over others during the indexing process. You can use @Boost
at
the @Field, method or class level.
Example 4.7. Using different ways of increasing the weight of an indexed element using a boost factor
@Entity @Indexed(index="indexes/essays") @Boost(1.7f) public class Essay { ... @Id @DocumentId public Long getId() { return id; } @Field(name="Abstract", index=Index.TOKENIZED, store=Store.YES, boost=@Boost(2f)) @Boost(1.5f) public String getSummary() { return summary; } @Lob @Field(index=Index.TOKENIZED, boost=@Boost(1.2f)) public String getText() { return text; } @Field public String getISBN() { return isbn; } }
In our example, Essay
's probability to
reach the top of the search list will be multiplied by 1.7. The
summary field will be 3.0 times (2 * 1.5 - @Field.boost and @Boost on a property are cumulative) more important than the isbn
field. The text
field will be 1.2 times more important than the
isbn
field. Note that this explanation in
strictest terms is actually wrong, but it is simple and close enough to
reality for all practical purposes. Please check the Lucene
documentation or the excellent Lucene In Action
from Otis Gospodnetic and Erik Hatcher.
The @Boost
annotation used in Section 4.1.4, “Boost factor” defines a static boost factor
which is independent of the state of the indexed entity at runtime. However, there are use cases in which the boost factor may depend on the actual state of the entity. In this case you can use the
@DynamicBoost
annotation together with an
accompanying custom BoostStrategy
.
Example 4.8. Dynamic boost example
public enum PersonType { NORMAL, VIP } @Entity @Indexed @DynamicBoost(impl = VIPBoostStrategy.class) public class Person { private PersonType type; // .... } public class VIPBoostStrategy implements BoostStrategy { public float defineBoost(Object value) { Person person = ( Person ) value; if ( person.getType().equals( PersonType.VIP ) ) { return 2.0f; } else { return 1.0f; } } }
In Example 4.8, “Dynamic boost example” a dynamic
boost is defined on class level specifying
VIPBoostStrategy
as implementation of the
BoostStrategy
interface to be used at indexing
time. You can place the @DynamicBoost
either at class
or field level. Depending on the placement of the annotation either the
whole entity is passed to the defineBoost
method or just the annotated field/property value. It's up to you to
cast the passed object to the correct type. In the example all indexed values of a VIP person would be twice as important as the values of a normal person.
The specified BoostStrategy
implementation must define a public no-arg constructor.
Of course you can mix and match @Boost
and
@DynamicBoost
annotations in your entity. All defined
boost factors are cumulative as described in Section 4.1.4, “Boost factor”.
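As a hedged sketch of such a combination (reusing the VIPBoostStrategy of Example 4.8), the static and dynamic boosts below multiply at indexing time:
@Entity
@Indexed
@Boost(1.5f)
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
    // a VIP instance gets an effective class boost of 1.5 * 2.0 = 3.0 at indexing time
    ...
}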
The default analyzer class used to index tokenized fields is
configurable through the hibernate.search.analyzer
property. The default value for this property is
org.apache.lucene.analysis.standard.StandardAnalyzer
.
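A minimal sketch of switching the default analyzer, assuming a programmatically built Configuration (the property name and analyzer class are the ones mentioned above):
org.hibernate.cfg.Configuration cfg = new org.hibernate.cfg.Configuration();
cfg.setProperty( "hibernate.search.analyzer",
        "org.apache.lucene.analysis.standard.StandardAnalyzer" );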
You can also define the analyzer class per entity, property and even per @Field (useful when multiple fields are indexed from a single property).
Example 4.9. Different ways of specifying an analyzer
@Entity
@Indexed
@Analyzer(impl = EntityAnalyzer.class)
public class MyEntity {
    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field(index = Index.TOKENIZED)
    private String name;

    @Field(index = Index.TOKENIZED)
    @Analyzer(impl = PropertyAnalyzer.class)
    private String summary;

    @Field(index = Index.TOKENIZED, analyzer = @Analyzer(impl = FieldAnalyzer.class))
    private String body;

    ...
}
In this example, EntityAnalyzer
is used to
index all tokenized properties (eg. name
), except
summary
and body
which are indexed
with PropertyAnalyzer
and
FieldAnalyzer
respectively.
Mixing different analyzers in the same entity is most of the time a bad practice. It makes query building more complex and results less predictable (for the novice), especially if you are using a QueryParser (which uses the same analyzer for the whole query). As a rule of thumb, for any given field the same analyzer should be used for indexing and querying.
Analyzers can become quite complex to deal with for which reason
Hibernate Search introduces the notion of analyzer definitions. An
analyzer definition can be reused by many
@Analyzer
declarations. An analyzer definition
is composed of:
a name: the unique string used to refer to the definition
a list of char filters: each char filter is responsible for pre-processing input characters before tokenization. Char filters can add, change or remove characters; one common usage is character normalization
a tokenizer: responsible for tokenizing the input stream into individual words
a list of filters: each filter is responsible for removing, modifying or sometimes even adding words to the stream provided by the tokenizer
This separation of tasks - a list of char filters, and a tokenizer followed by a list of
filters - allows for easy reuse of each individual component and lets
you build your customized analyzer in a very flexible way (just like
Lego). Generally speaking the char filters do some pre-processing on the character input, then the Tokenizer
starts
the tokenizing process by turning the character input into tokens which
are then further processed by the TokenFilter
s.
Hibernate Search supports this infrastructure by utilizing the Solr
analyzer framework. Make sure to add solr-core.jar and
solr-solrj.jar
to your classpath to
use analyzer definitions. If you also want to use the snowball stemmer, include the lucene-snowball.jar as well.
Other Solr analyzers might
depend on more libraries. For example, the
PhoneticFilterFactory
depends on commons-codec. Your
distribution of Hibernate Search provides these dependencies in its
lib
directory.
Example 4.10. @AnalyzerDef
and the Solr
framework
@AnalyzerDef(name="customanalyzer", charFilters = { @CharFilterDef(factory = MappingCharFilterFactory.class, params = { @Parameter(name = "mapping", value = "org/hibernate/search/test/analyzer/solr/mapping-chars.properties") }) }, tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = ISOLatin1AccentFilterFactory.class), @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = StopFilterFactory.class, params = { @Parameter(name="words", value= "org/hibernate/search/test/analyzer/solr/stoplist.properties" ), @Parameter(name="ignoreCase", value="true") }) }) public class Team { ... }
A char filter is defined by its factory which is responsible for building the char filter and using the optional list of parameters. In our example, a mapping char filter is used, and will replace characters in the input based on the rules specified in the mapping file. A tokenizer is also defined by its factory. This example uses the standard tokenizer. A filter is defined by its factory which is responsible for creating the filter instance using the optional parameters. In our example, the StopFilter filter is built reading the dedicated words property file and is expected to ignore case. The list of parameters is dependent on the tokenizer or filter factory.
Filters and char filters are applied in the order they are defined in the
@AnalyzerDef
annotation. Make sure to think
twice about this order.
Once defined, an analyzer definition can be reused by an
@Analyzer
declaration using the definition name
rather than declaring an implementation class.
Example 4.11. Referencing an analyzer by name
@Entity
@Indexed
@AnalyzerDef(name="customanalyzer", ... )
public class Team {
@Id
@DocumentId
@GeneratedValue
private Integer id;
@Field
private String name;
@Field
private String location;
@Field @Analyzer(definition = "customanalyzer")
private String description;
}
Analyzer instances declared by
@AnalyzerDef
are available by their name in the
SearchFactory
.
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");
This is quite useful when building queries. Fields in queries should be analyzed with the same analyzer used to index the field so that they speak a common "language": the same tokens are reused between the query and the indexing process. This rule has some exceptions but is true most of the time. Respect it unless you know what you are doing.
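As a hedged sketch, the analyzer retrieved by name can be handed to a Lucene QueryParser so that the query terms go through the same filter chain as the indexed text (the description field of the Team example above is assumed):
Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer( "customanalyzer" );
org.apache.lucene.queryParser.QueryParser parser =
        new org.apache.lucene.queryParser.QueryParser( "description", analyzer );
org.apache.lucene.search.Query luceneQuery = parser.parse( "some search terms" );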
Solr and Lucene come with a lot of useful default char filters, tokenizers and filters. You can find a complete list of char filter factories, tokenizer factories and filter factories at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters. Let's check a few of them.
Table 4.1. Some of the available char filters
Factory | Description | parameters |
---|---|---|
MappingCharFilterFactory | Replaces one or more characters with one or more characters, based on mappings specified in the resource file | mapping: points to a resource file containing the mappings |
HTMLStripCharFilterFactory | Remove HTML standard tags, keeping the text | none |
Table 4.2. Some of the available tokenizers
Factory | Description | parameters |
---|---|---|
StandardTokenizerFactory | Use the Lucene StandardTokenizer | none |
HTMLStripStandardTokenizerFactory | Remove HTML tags, keep the text and pass it to a StandardTokenizer. @Deprecated, use the HTMLStripCharFilterFactory instead | none |
Table 4.3. Some of the available filters
Factory | Description | parameters |
---|---|---|
StandardFilterFactory | Remove dots from acronyms and 's from words | none |
LowerCaseFilterFactory | Lowercase words | none |
StopFilterFactory | remove words (tokens) matching a list of stop words | words: points to a resource file containing the stop words; ignoreCase: true if case should be ignored when comparing stop words, false otherwise |
SnowballPorterFilterFactory | Reduces a word to its root in a given language (e.g. protect, protects and protection share the same root). Using such a filter allows searches matching related words. | language: Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, Swedish and a few more |
ISOLatin1AccentFilterFactory | remove accents for languages like French | none |
We recommend checking the implementations of org.apache.solr.analysis.TokenizerFactory and org.apache.solr.analysis.TokenFilterFactory in your IDE to see which ones are available.
So far all the introduced ways to specify an analyzer were
static. However, there are use cases where it is useful to select an
analyzer depending on the current state of the entity to be indexed,
for example in multilingual applications. For a BlogEntry class the analyzer could
depend on the language property of the entry. Depending on this
property the correct language specific stemmer should be chosen to
index the actual text.
To enable this dynamic analyzer selection Hibernate Search
introduces the AnalyzerDiscriminator
annotation. The following example demonstrates the usage of this
annotation:
Example 4.12. Usage of @AnalyzerDiscriminator in order to select an analyzer depending on the entity state
@Entity @Indexed @AnalyzerDefs({ @AnalyzerDef(name = "en", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = EnglishPorterFilterFactory.class ) }), @AnalyzerDef(name = "de", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = GermanStemFilterFactory.class) }) }) public class BlogEntry { @Id @GeneratedValue @DocumentId private Integer id; @Field @AnalyzerDiscriminator(impl = LanguageDiscriminator.class) private String language; @Field private String text; private Set<BlogEntry> references; // standard getter/setter ... }
public class LanguageDiscriminator implements Discriminator {
    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        if ( value == null || !( entity instanceof BlogEntry ) ) {
            return null;
        }
        return (String) value;
    }
}
The prerequisite for using
@AnalyzerDiscriminator
is that all analyzers
which are going to be used are predefined via
@AnalyzerDef
definitions. If this is the case
one can place the @AnalyzerDiscriminator
annotation either on the class or on a specific property of the entity
for which to dynamically select an analyzer. Via the
impl
parameter of the
AnalyzerDiscriminator
you specify a concrete
implementation of the Discriminator
interface.
It is up to you to provide an implementation for this interface. The
only method you have to implement is
getAnalyzerDefinitionName()
which gets called
for each field added to the Lucene document. The entity which is
getting indexed is also passed to the interface method. The
value
parameter is only set if the
AnalyzerDiscriminator
is placed on property
level instead of class level. In this case the value represents the
current value of this property.
An implementation of the Discriminator
interface has to return the name of an existing analyzer definition if
the analyzer should be set dynamically or null
if the default analyzer should not be overridden. The given example
assumes that the language parameter is either 'de' or 'en' which
matches the specified names in the
@AnalyzerDef
s.
The @AnalyzerDiscriminator
is currently
still experimental and the API might still change. We are hoping for
some feedback from the community about the usefulness and usability
of this feature.
During indexing time, Hibernate Search is using analyzers under the hood for you. In some situations, retrieving analyzers can be handy. If your domain model makes use of multiple analyzers (maybe to benefit from stemming, use phonetic approximation and so on), you need to make sure to use the same analyzers when you build your query.
This rule can be broken but you need a good reason for it. If you are unsure, use the same analyzers.
You can retrieve the scoped analyzer for a given entity used at indexing time by Hibernate Search. A scoped analyzer is an analyzer which applies the right analyzers depending on the field indexed: multiple analyzers can be defined on a given entity, each one working on an individual field; a scoped analyzer unifies all these analyzers into a context-aware analyzer. While the theory seems a bit complex, using the right analyzer in a query is very easy.
Example 4.13. Using the scoped analyzer when building a full-text query
org.apache.lucene.queryParser.QueryParser parser = new QueryParser(
    "title",
    fullTextSession.getSearchFactory().getAnalyzer( Song.class )
);

org.apache.lucene.search.Query luceneQuery =
    parser.parse( "title:sky OR title_stemmed:diamond" );

org.hibernate.Query fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Song.class );

List result = fullTextQuery.list(); //return a list of managed objects
In the example above, the song title is indexed in two fields:
the standard analyzer is used in the field title
and a stemming analyzer is used in the field
title_stemmed
. By using the analyzer provided by
the search factory, the query uses the appropriate analyzer depending
on the field targeted.
If your query targets more than one entity and you wish to use your standard analyzer, make sure to describe it using an analyzer
definition. You can retrieve analyzers by their definition name using
searchFactory.getAnalyzer(String)
.
In Lucene all index fields have to be represented as Strings. For
this reason all entity properties annotated with @Field
have to be indexed in a String form. For most of your properties,
Hibernate Search does the translation job for you thanks to a built-in set
of bridges. In some cases, though, you need finer grained control over the translation process.
Hibernate Search comes bundled with a set of built-in bridges between a Java property type and its full text representation.
null elements are not indexed. Lucene does not support null elements and this does not make much sense either.
Strings are indexed as is
Numbers are converted into their String representation. Note that numbers cannot be compared by Lucene (i.e. used in range queries) out of the box: they have to be padded
Using a Range query is debatable and has drawbacks; an alternative approach is to use a Filter query which will filter the results to the appropriate range.
Hibernate Search will support a padding mechanism
Dates are stored as yyyyMMddHHmmssSSS in GMT time (200611072203012 for Nov 7th of 2006 4:03PM and 12ms EST). You shouldn't really bother with the internal format. What is important is that when using a DateRange Query, you should know that the dates have to be expressed in GMT time.
Usually, storing the date up to the millisecond is not necessary. @DateBridge defines the appropriate resolution you are willing to store in the index (@DateBridge(resolution=Resolution.DAY)). The date pattern will then be truncated accordingly.
@Entity
@Indexed
public class Meeting {
@Field(index=Index.UN_TOKENIZED)
@DateBridge(resolution=Resolution.MINUTE)
private Date date;
...
A Date whose resolution is lower than MILLISECOND cannot be a @DocumentId.
URI and URL are converted to their string representation
Classes are converted to their fully qualified class name. The thread context classloader is used when the class is rehydrated
Sometimes, the built-in bridges of Hibernate Search do not cover some of your property types, or the String representation used by the bridge does not meet your requirements. The following paragraphs describe several solutions to this problem.
The simplest custom solution is to give Hibernate Search an
implementation of your expected
Object
to
String
bridge. To do so you need to implement
the org.hibernate.search.bridge.StringBridge
interface. All implementations have to be thread-safe as they are used
concurrently.
Example 4.14. Implementing your own
StringBridge
/** * Padding Integer bridge. * All numbers will be padded with 0 to match 5 digits * * @author Emmanuel Bernard */ public class PaddedIntegerBridge implements StringBridge { private int PADDING = 5; public String objectToString(Object object) { String rawInteger = ( (Integer) object ).toString(); if (rawInteger.length() > PADDING) throw new IllegalArgumentException( "Try to pad on a number too big" ); StringBuilder paddedInteger = new StringBuilder( ); for ( int padIndex = rawInteger.length() ; padIndex < PADDING ; padIndex++ ) { paddedInteger.append('0'); } return paddedInteger.append( rawInteger ).toString(); } }
Then any property or field can use this bridge thanks to the
@FieldBridge
annotation
@FieldBridge(impl = PaddedIntegerBridge.class)
private Integer length;
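A hedged usage sketch: when querying a field written by this bridge, the query term has to be padded the same way, otherwise it will not match the indexed value. The field name length comes from the property above and the bridge is the one from Example 4.14:
String paddedValue = new PaddedIntegerBridge().objectToString( Integer.valueOf( 25 ) ); // "00025"
org.apache.lucene.search.Query query = new org.apache.lucene.search.TermQuery(
        new org.apache.lucene.index.Term( "length", paddedValue ) );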
Parameters can be passed to the Bridge implementation making it
more flexible. The Bridge implementation implements a
ParameterizedBridge
interface, and the
parameters are passed through the @FieldBridge
annotation.
Example 4.15. Passing parameters to your bridge implementation
public class PaddedIntegerBridge implements StringBridge, ParameterizedBridge { public static String PADDING_PROPERTY = "padding"; private int padding = 5; //default public void setParameterValues(Map parameters) { Object padding = parameters.get( PADDING_PROPERTY ); if (padding != null) this.padding = (Integer) padding; } public String objectToString(Object object) { String rawInteger = ( (Integer) object ).toString(); if (rawInteger.length() > padding) throw new IllegalArgumentException( "Try to pad on a number too big" ); StringBuilder paddedInteger = new StringBuilder( ); for ( int padIndex = rawInteger.length() ; padIndex < padding ; padIndex++ ) { paddedInteger.append('0'); } return paddedInteger.append( rawInteger ).toString(); } } //property @FieldBridge(impl = PaddedIntegerBridge.class, params = @Parameter(name="padding", value="10") ) private Integer length;
The ParameterizedBridge
interface can be
implemented by StringBridge
,
TwoWayStringBridge
, and FieldBridge implementations.
All implementations have to be thread-safe, but the parameters are set during initialization and no special care is required at this stage.
If you expect to use your bridge implementation on an id
property (ie annotated with @DocumentId
), you need
to use a slightly extended version of StringBridge
named TwoWayStringBridge
. Hibernate Search
needs to read the string representation of the identifier and generate
the object out of it. There is no difference in the way the
@FieldBridge
annotation is used.
Example 4.16. Implementing a TwoWayStringBridge which can for example be used for id properties
public class PaddedIntegerBridge implements TwoWayStringBridge, ParameterizedBridge {
public static String PADDING_PROPERTY = "padding";
private int padding = 5; //default
public void setParameterValues(Map parameters) {
Object padding = parameters.get( PADDING_PROPERTY );
if (padding != null) this.padding = (Integer) padding;
}
public String objectToString(Object object) {
String rawInteger = ( (Integer) object ).toString();
if (rawInteger.length() > padding)
throw new IllegalArgumentException( "Try to pad on a number too big" );
StringBuilder paddedInteger = new StringBuilder( );
for ( int padIndex = rawInteger.length() ; padIndex < padding ; padIndex++ ) {
paddedInteger.append('0');
}
return paddedInteger.append( rawInteger ).toString();
}
public Object stringToObject(String stringValue) {
return new Integer(stringValue);
}
}
//id property
@DocumentId
@FieldBridge(impl = PaddedIntegerBridge.class,
params = @Parameter(name="padding", value="10") )
private Integer id;
It is critically important for the two-way process to be idempotent (i.e. object = stringToObject( objectToString( object ) ) ).
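A minimal sketch of this round-trip property, using the TwoWayStringBridge of Example 4.16 with its default padding:
TwoWayStringBridge bridge = new PaddedIntegerBridge();
Integer original = Integer.valueOf( 42 );
Object roundTripped = bridge.stringToObject( bridge.objectToString( original ) );
assert original.equals( roundTripped ); // object = stringToObject( objectToString( object ) )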
Some use cases require more than a simple object to string
translation when mapping a property to a Lucene index. To give you the
greatest possible flexibility you can also implement a bridge as a
FieldBridge
. This interface gives you a
property value and lets you map it the way you want in your Lucene
Document
. The interface is very similar in its
concept to the Hibernate UserTypes.
You can for example store a given property in two different document fields:
Example 4.17. Implementing the FieldBridge interface in order to map a given property into multiple document fields
/**
 * Store the date in 3 different fields - year, month, day - to ease Range Query per
 * year, month or day (eg get all the elements of December for the last 5 years).
 * @author Emmanuel Bernard
 */
public class DateSplitBridge implements FieldBridge {
    private final static TimeZone GMT = TimeZone.getTimeZone("GMT");

    public void set(String name, Object value, Document document,
                    LuceneOptions luceneOptions) {
        Date date = (Date) value;
        Calendar cal = GregorianCalendar.getInstance(GMT);
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        int month = cal.get(Calendar.MONTH) + 1;
        int day = cal.get(Calendar.DAY_OF_MONTH);

        // set year
        luceneOptions.addFieldToDocument(
            name + ".year", String.valueOf( year ), document );

        // set month and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".month", ( month < 10 ? "0" : "" ) + String.valueOf( month ), document );

        // set day and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".day", ( day < 10 ? "0" : "" ) + String.valueOf( day ), document );
    }
}

//property
@FieldBridge(impl = DateSplitBridge.class)
private Date date;
In the previous example the fields were not added directly to the Document; instead we delegated this task to the LuceneOptions helper. The helper applies the options you have selected on @Field, like Store or TermVector options, and applies the chosen @Boost value. It is especially useful to encapsulate the complexity of COMPRESS implementations, so it is recommended to delegate to LuceneOptions to add fields to the Document, but nothing stops you from editing the Document directly and ignoring the LuceneOptions in case you need to.
Classes like LuceneOptions
are created to shield your application from
changes in Lucene API and simplify your code. Use them if you can, but if you need more flexibility
you're not required to.
It is sometimes useful to combine more than one property of a
given entity and index this combination in a specific way into the
Lucene index. The @ClassBridge and @ClassBridges annotations can be defined at the
class level (as opposed to the property level). In this case the
custom field bridge implementation receives the entity instance as the
value parameter instead of a particular property. Though not shown in
this example, @ClassBridge
supports the
termVector
attribute discussed in section
Section 4.1.1, “Basic mapping”.
Example 4.18. Implementing a class bridge
@Entity @Indexed @ClassBridge(name="branchnetwork", index=Index.TOKENIZED, store=Store.YES, impl = CatFieldsClassBridge.class, params = @Parameter( name="sepChar", value=" " ) ) public class Department { private int id; private String network; private String branchHead; private String branch; private Integer maxEmployees ... } public class CatFieldsClassBridge implements FieldBridge, ParameterizedBridge { private String sepChar; public void setParameterValues(Map parameters) { this.sepChar = (String) parameters.get( "sepChar" ); } public void set(String name, Object value, Document document, LuceneOptions luceneOptions) { // In this particular class the name of the new field was passed // from the name field of the ClassBridge Annotation. This is not // a requirement. It just works that way in this instance. The // actual name could be supplied by hard coding it below. Department dep = (Department) value; String fieldValue1 = dep.getBranch(); if ( fieldValue1 == null ) { fieldValue1 = ""; } String fieldValue2 = dep.getNetwork(); if ( fieldValue2 == null ) { fieldValue2 = ""; } String fieldValue = fieldValue1 + sepChar + fieldValue2; Field field = new Field( name, fieldValue, luceneOptions.getStore(), luceneOptions.getIndex(), luceneOptions.getTermVector() ); field.setBoost( luceneOptions.getBoost() ); document.add( field ); } }
In this example, the particular CatFieldsClassBridge is applied to the department instance; the field bridge then concatenates both branch and network and indexes the concatenation.
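As a hedged query sketch against that combined field (assuming the Department mapping of Example 4.18, the default StandardAnalyzer, and a hypothetical branch value "layton"), a single term can match either the branch or the network part of the concatenated value:
org.apache.lucene.queryParser.QueryParser parser = new org.apache.lucene.queryParser.QueryParser(
        "branchnetwork", new org.apache.lucene.analysis.standard.StandardAnalyzer() );
org.apache.lucene.search.Query luceneQuery = parser.parse( "branchnetwork:layton" );
List departments = fullTextSession.createFullTextQuery( luceneQuery, Department.class ).list();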
This part of the documentation is a work in progress.
You can provide your own id for Hibernate Search if you are extending the internals. You will have to generate a unique value so it can be given to Lucene to be indexed. This will have to be given to Hibernate Search when you create an org.hibernate.search.Work object - the document id is required in the constructor.
Unlike the conventional Hibernate Search API and @DocumentId, this annotation is used on the class and not on a field. You can also provide your own bridge implementation by setting the bridge() attribute of @ProvidedId. Also, if you annotate a class with @ProvidedId, your subclasses will also get the annotation - but it is not done by using java.lang.annotation.@Inherited. Be sure, however, not to use this annotation together with @DocumentId as your system will break.
Example 4.19. Providing your own id
@ProvidedId (bridge = org.my.own.package.MyCustomBridge) @Indexed public class MyClass{ @Field String MyString; ... }
This feature is considered experimental. While stable code-wise, the API is subject to change in the future.
Although the recommended approach for mapping indexed entities is to use annotations, it is sometimes more convenient to use a different approach:
the same entity is mapped differently depending on deployment needs (customization for clients)
an automated process requires the dynamic mapping of many entities sharing common traits
While it has been a popular demand in the past, the Hibernate team never found the idea of an XML alternative to annotations appealing due to its heavy duplication, lack of refactoring safety, because it did not cover the whole use case spectrum and because we are in the 21st century :)
The idea of a programmatic API was much more appealing and has now become a reality. You can programmatically and safely define your mapping using a programmatic API: you define entities and fields as indexable by using mapping classes which effectively mirror the annotation concepts in Hibernate Search. Note that fans of the XML approach can design their own schema and use the programmatic API to create the mapping while parsing the XML stream.
In order to use the programmatic model you must first construct a
SearchMapping
object. This object is passed to
Hibernate Search via a property set to the Configuration
object. The property key is
hibernate.search.model_mapping
or it's type-safe
representation Environment.MODEL_MAPPING
.
SearchMapping mapping = new SearchMapping();
[...]
configuration.getProperties().put( Environment.MODEL_MAPPING, mapping );

//or in JPA
SearchMapping mapping = new SearchMapping();
[...]
Map<String,Object> properties = new HashMap<String,Object>(1);
properties.put( Environment.MODEL_MAPPING, mapping );
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "userPU", properties );
The SearchMapping
is the root object which
contains all the necessary indexable entities and fields. From there, the
SearchMapping
object exposes a fluent (and thus
intuitive) API to express your mappings: it contextually exposes the
relevant mapping options in a type-safe way, just let your IDE
autocompletion feature guide you through.
Today, the programmatic API cannot be used on a class annotated with
Hibernate Search annotations; choose one approach or the other. Also note
that the same default values apply in annotations and the programmatic
API. For example, the @Field.name
is defaulted to
the property name and does not have to be set.
Each core concept of the programmatic API has a corresponding example to depict how the same definition would look using annotations. Therefore, seeing the annotation equivalent of a programmatic example should give you a clear picture of what Hibernate Search will build from the marked entities and associated properties.
The first concept of the programmatic API is to define an entity
as indexable. Using the annotation approach a user would mark the entity
as @Indexed
, the following example demonstrates
how to programmatically achieve this.
Example 4.20. Marking an entity indexable
SearchMapping mapping = new SearchMapping(); mapping.entity(Address.class) .indexed() .indexName("Address_Index"); //optional cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
As you can see you must first create a
SearchMapping
object which is the root object
that is then passed to the Configuration
object as a property. You must declare an entity and if you wish to make that entity indexable you must call the indexed() method. The indexed()
method has an optional indexName(String
indexName)
which can be used to change the default
index name that is created by Hibernate Search. Using the annotation
model the above can be achieved as:
Example 4.21. Annotation example of indexing entity
@Entity @Indexed(index="Address_Index") public class Address { .... }
To set a property as a document id:
Example 4.22. Enabling document id with programmatic model
SearchMapping mapping = new SearchMapping(); mapping.entity(Address.class).indexed() .property("addressId", ElementType.FIELD) //field access .documentId() .name("id"); cfg.getProperties().put( "hibernate.search.model_mapping", mapping);
The above is equivalent to annotating a property in the entity
as @DocumentId
as seen in the following
example:
Example 4.23. DocumentId annotation definition
@Entity @Indexed public class Address { @Id @GeneratedValue @DocumentId(name="id") private Long addressId; .... }
The next section demonstrates how to programmatically define
analyzers.
Analyzers can be programmatically defined using the
analyzerDef(String analyzerDef, Class<? extends
TokenizerFactory> tokenizerFactory)
method. This method
also enables you to define filters for the analyzer definition. Each
filter that you define can optionally take in parameters as seen in the
following example:
Example 4.24. Defining analyzers using programmatic model
SearchMapping mapping = new SearchMapping();
mapping
.analyzerDef( "ngram", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( NGramFilterFactory.class )
.param( "minGramSize", "3" )
.param( "maxGramSize", "3" )
.analyzerDef( "en", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( EnglishPorterFilterFactory.class )
.analyzerDef( "de", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( GermanStemFilterFactory.class )
.entity(Address.class).indexed()
.property("addressId", ElementType.METHOD) //getter access
.documentId()
.name("id");
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The analyzer mapping defined above is equivalent to the
annotation model using @AnalyzerDef
in
conjunction with @AnalyzerDefs
:
Example 4.25. Analyzer definition using annotation
@Indexed @Entity @AnalyzerDefs({ @AnalyzerDef(name = "ngram", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = NGramFilterFactory.class, params = { @Parameter(name = "minGramSize",value = "3"), @Parameter(name = "maxGramSize",value = "3") }) }), @AnalyzerDef(name = "en", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = EnglishPorterFilterFactory.class) }), @AnalyzerDef(name = "de", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = GermanStemFilterFactory.class) }) }) public class Address { ... }
The programmatic API provides an easy mechanism for defining full
text filter definitions which is available via
@FullTextFilterDef
and
@FullTextFilterDefs
. Note that contrary to the
annotation equivalent, full text filter definitions are a global
construct and are not tied to an entity. The next example depicts the
creation of full text filter definition using the
fullTextFilterDef
method.
Example 4.26. Defining full text definition programmatically
SearchMapping mapping = new SearchMapping();
mapping
.analyzerDef( "en", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( EnglishPorterFilterFactory.class )
.fullTextFilterDef("security", SecurityFilterFactory.class)
.cache(FilterCacheModeType.INSTANCE_ONLY)
.entity(Address.class)
.indexed()
.property("addressId", ElementType.METHOD)
.documentId()
.name("id")
.property("street1", ElementType.METHOD)
.field()
.analyzer("en")
.store(Store.YES)
.field()
.name("address_data")
.analyzer("en")
.store(Store.NO);
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The previous example can effectively be seen as annotating
your entity with @FullTextFilterDef
like
below:
Example 4.27. Using annotation to define full text filter definition
@Entity
@Indexed
@AnalyzerDefs({
    @AnalyzerDef(name = "en",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
            @TokenFilterDef(factory = LowerCaseFilterFactory.class),
            @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
        })
})
@FullTextFilterDefs({
    @FullTextFilterDef(name = "security", impl = SecurityFilterFactory.class, cache = FilterCacheModeType.INSTANCE_ONLY)
})
public class Address {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    public Long getAddressId() {...};

    @Fields({
        @Field(index=Index.TOKENIZED, store=Store.YES, analyzer=@Analyzer(definition="en")),
        @Field(name="address_data", analyzer=@Analyzer(definition="en"))
    })
    public String getAddress1() {...};
    ......
}
When defining fields for indexing using the programmatic API, call
field()
on the property(String
propertyName, ElementType elementType)
method. From
field()
you can specify the name,
index
, store
,
bridge
and analyzer
definitions.
Example 4.28. Indexing fields using programmatic API
SearchMapping mapping = new SearchMapping();
mapping
.analyzerDef( "en", StandardTokenizerFactory.class )
.filter( LowerCaseFilterFactory.class )
.filter( EnglishPorterFilterFactory.class )
.entity(Address.class).indexed()
.property("addressId", ElementType.METHOD)
.documentId()
.name("id")
.property("street1", ElementType.METHOD)
.field()
.analyzer("en")
.store(Store.YES)
.index(Index.TOKENIZED) //not useful here as it's the default
.field()
.name("address_data")
.analyzer("en");
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above example of marking fields as indexable is equivalent
to defining fields using @Field
as seen
below:
Example 4.29. Indexing fields using annotation
@Entity @Indexed @AnalyzerDefs({ @AnalyzerDef(name = "en", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = EnglishPorterFilterFactory.class) }) }) public class Address { @Id @GeneratedValue @DocumentId(name="id") private Long getAddressId() {...}; @Fields({ @Field(index=Index.TOKENIZED, store=Store.YES, analyzer=@Analyzer(definition="en")), @Field(name="address_data", analyzer=@Analyzer(definition="en")) }) public String getAddress1() {...} ...... }
In this section you will see how to programmatically define entities to be embedded into the indexed entity, similar to using the @IndexedEmbedded model. In order to define this you must mark the property as indexEmbedded. There is also the option to add a prefix to the embedded entity definition, which can be done by calling prefix as seen in the example below:
Example 4.30. Programmatically defining embedded entities
SearchMapping mapping = new SearchMapping();
mapping
.entity(ProductCatalog.class)
.indexed()
.property("catalogId", ElementType.METHOD)
.documentId()
.name("id")
.property("title", ElementType.METHOD)
.field()
.index(Index.TOKENIZED)
.store(Store.NO)
.property("description", ElementType.METHOD)
.field()
.index(Index.TOKENIZED)
.store(Store.NO)
.property("items", ElementType.METHOD)
.indexEmbedded()
.prefix("catalog.items"); //optional
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The next example shows the same definition using annotation
(@IndexedEmbedded):
Example 4.31. Using @IndexedEmbedded
@Entity @Indexed public class ProductCatalog { @Id @GeneratedValue @DocumentId(name="id") public Long getCatalogId() {...} @Field(store=Store.NO, index=Index.TOKENIZED) public String getTitle() {...} @Field(store=Store.NO, index=Index.TOKENIZED) public String getDescription() {...} @OneToMany(fetch = FetchType.LAZY) @IndexColumn(name = "list_position") @Cascade(org.hibernate.annotations.CascadeType.ALL) @IndexedEmbedded(prefix="catalog.items") public List<Item> getItems() {...} ... }
@ContainedIn
can be defined as seen in the
example below:
Example 4.32. Programmatically defining ContainedIn
SearchMapping mapping = new SearchMapping();
mapping
.entity(ProductCatalog.class)
.indexed()
.property("catalogId", ElementType.METHOD)
.documentId()
.property("title", ElementType.METHOD)
.field()
.property("description", ElementType.METHOD)
.field()
.property("items", ElementType.METHOD)
.indexEmbedded()
.entity(Item.class)
.property("description", ElementType.METHOD)
.field()
.property("productCatalog", ElementType.METHOD)
.containedIn();
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
This is equivalent to defining
@ContainedIn
in your entity:
Example 4.33. Annotation approach for ContainedIn
@Entity @Indexed public class ProductCatalog { @Id @GeneratedValue @DocumentId public Long getCatalogId() {...} @Field public String getTitle() {...} @Field public String getDescription() {...} @OneToMany(fetch = FetchType.LAZY) @IndexColumn(name = "list_position") @Cascade(org.hibernate.annotations.CascadeType.ALL) @IndexedEmbedded private List<Item> getItems() {...} ... } @Entity public class Item { @Id @GeneratedValue private Long itemId; @Field public String getDescription() {...} @ManyToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } ) @ContainedIn public ProductCatalog getProductCatalog() {...} ... }
In order to define a calendar or date bridge mapping, call the
dateBridge(Resolution resolution)
or
calendarBridge(Resolution resolution)
methods
after you have defined a field()
in the
SearchMapping
hierarchy.
Example 4.34. Programmatic model for defining calendar/date bridge
SearchMapping mapping = new SearchMapping();
mapping
    .entity(Address.class)
        .indexed()
        .property("addressId", ElementType.FIELD)
            .documentId()
        .property("street1", ElementType.FIELD)
            .field()
        .property("createdOn", ElementType.FIELD)
            .field()
                .dateBridge(Resolution.DAY)
        .property("lastUpdated", ElementType.FIELD)
            .calendarBridge(Resolution.DAY);
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
See below for defining the above using
@CalendarBridge
and
@DateBridge
:
Example 4.35. @CalendarBridge and @DateBridge definition
@Entity @Indexed public class Address { @Id @GeneratedValue @DocumentId private Long addressId; @Field private String address1; @Field @DateBridge(resolution=Resolution.DAY) private Date createdOn; @CalendarBridge(resolution=Resolution.DAY) private Calendar lastUpdated; ... }
It is possible to associate bridges to programmatically defined
fields. When you define a field()
programmatically you can use the bridge(Class<?> impl) method to associate a FieldBridge
implementation class. The bridge method also provides
optional methods to include any parameters required for the bridge
class. The below shows an example of programmatically defining a
bridge:
Example 4.36. Defining field bridges programmatically
SearchMapping mapping = new SearchMapping();
mapping
.entity(Address.class)
.indexed()
.property("addressId", ElementType.FIELD)
.documentId()
.property("street1", ElementType.FIELD)
.field()
.field()
.name("street1_abridged")
.bridge( ConcatStringBridge.class )
.param( "size", "4" );
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above can equally be defined using annotations, as seen in the next example.
Example 4.37. Defining field bridges using annotation
@Entity
@Indexed
public class Address {
    @Id
    @GeneratedValue
    @DocumentId(name="id")
    private Long addressId;

    @Fields({
        @Field,
        @Field(name="street1_abridged",
               bridge = @FieldBridge( impl = ConcatStringBridge.class,
                                      params = @Parameter( name="size", value="4" ) ))
    })
    private String address1;
    ...
}
You can define class bridges on entities programmatically. This is shown in the next example:
Example 4.38. Defining class bridges using the API
SearchMapping mapping = new SearchMapping();
mapping
.entity(Departments.class)
.classBridge(CatDeptsFieldsClassBridge.class)
.name("branchnetwork")
.index(Index.TOKENIZED)
.store(Store.YES)
.param("sepChar", " ")
.classBridge(EquipmentType.class)
.name("equiptype")
.index(Index.TOKENIZED)
.store(Store.YES)
.param("C", "Cisco")
.param("D", "D-Link")
.param("K", "Kingston")
.param("3", "3Com")
.indexed();
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The above is similar to using @ClassBridge
as seen in the next example:
Example 4.39. Using @ClassBridge
@Entity @Indexed @ClassBridges ( { @ClassBridge(name="branchnetwork", index= Index.TOKENIZED, store= Store.YES, impl = CatDeptsFieldsClassBridge.class, params = @Parameter( name="sepChar", value=" " ) ), @ClassBridge(name="equiptype", index= Index.TOKENIZED, store= Store.YES, impl = EquipmentType.class, params = {@Parameter( name="C", value="Cisco" ), @Parameter( name="D", value="D-Link" ), @Parameter( name="K", value="Kingston" ), @Parameter( name="3", value="3Com" ) }) }) public class Departments { .... }
You can apply a dynamic boost factor on either a field or a whole entity:
Example 4.40. DynamicBoost mapping using programmatic model
SearchMapping mapping = new SearchMapping();
mapping
    .entity(DynamicBoostedDescLibrary.class)
        .indexed()
        .dynamicBoost(CustomBoostStrategy.class)
        .property("libraryId", ElementType.FIELD)
            .documentId().name("id")
        .property("name", ElementType.FIELD)
            .dynamicBoost(CustomFieldBoostStrategy.class)
            .field()
                .store(Store.YES);
cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
The next example shows the equivalent mapping using the
@DynamicBoost
annotation:
Example 4.41. Using the @DynamicBoost annotation
@Entity @Indexed @DynamicBoost(impl = CustomBoostStrategy.class) public class DynamicBoostedDescriptionLibrary { @Id @GeneratedValue @DocumentId private int id; private float dynScore; @Field(store = Store.YES) @DynamicBoost(impl = CustomFieldBoostStrategy.class) private String name; public DynamicBoostedDescriptionLibrary() { dynScore = 1.0f; } ....... }