Basic Types
Basic value types usually map a single database column, to a single, non-aggregated Java type. Hibernate provides a number of built-in basic types, which follow the natural mappings recommended by the JDBC specifications.
Internally Hibernate uses a registry of basic types when it needs to resolve a specific org.hibernate.type.Type
.
Hibernate-provided BasicTypes
| Hibernate type (org.hibernate.type package) | JDBC type | Java type | BasicTypeRegistry key(s) |
|---|---|---|---|
| StringType | VARCHAR | java.lang.String | string, java.lang.String |
| MaterializedClobType | CLOB | java.lang.String | materialized_clob |
| TextType | LONGVARCHAR | java.lang.String | text |
| CharacterType | CHAR | char, java.lang.Character | char, java.lang.Character |
| BooleanType | BIT | boolean, java.lang.Boolean | boolean, java.lang.Boolean |
| NumericBooleanType | INTEGER, 0 is false, 1 is true | boolean, java.lang.Boolean | numeric_boolean |
| YesNoType | CHAR, 'N'/'n' is false, 'Y'/'y' is true. The uppercase value is written to the database. | boolean, java.lang.Boolean | yes_no |
| TrueFalseType | CHAR, 'F'/'f' is false, 'T'/'t' is true. The uppercase value is written to the database. | boolean, java.lang.Boolean | true_false |
| ByteType | TINYINT | byte, java.lang.Byte | byte, java.lang.Byte |
| ShortType | SMALLINT | short, java.lang.Short | short, java.lang.Short |
| IntegerType | INTEGER | int, java.lang.Integer | int, java.lang.Integer |
| LongType | BIGINT | long, java.lang.Long | long, java.lang.Long |
| FloatType | FLOAT | float, java.lang.Float | float, java.lang.Float |
| DoubleType | DOUBLE | double, java.lang.Double | double, java.lang.Double |
| BigIntegerType | NUMERIC | java.math.BigInteger | big_integer, java.math.BigInteger |
| BigDecimalType | NUMERIC | java.math.BigDecimal | big_decimal, java.math.BigDecimal |
| TimestampType | TIMESTAMP | java.sql.Timestamp | timestamp, java.sql.Timestamp |
| TimeType | TIME | java.sql.Time | time, java.sql.Time |
| DateType | DATE | java.sql.Date | date, java.sql.Date |
| CalendarType | TIMESTAMP | java.util.Calendar | calendar, java.util.Calendar |
| CalendarDateType | DATE | java.util.Calendar | calendar_date |
| CalendarTimeType | TIME | java.util.Calendar | calendar_time |
| CurrencyType | VARCHAR | java.util.Currency | currency, java.util.Currency |
| LocaleType | VARCHAR | java.util.Locale | locale, java.util.Locale |
| TimeZoneType | VARCHAR, using the TimeZone ID | java.util.TimeZone | timezone, java.util.TimeZone |
| UrlType | VARCHAR | java.net.URL | url, java.net.URL |
| ClassType | VARCHAR (class FQN) | java.lang.Class | class, java.lang.Class |
| BlobType | BLOB | java.sql.Blob | blob, java.sql.Blob |
| ClobType | CLOB | java.sql.Clob | clob, java.sql.Clob |
| BinaryType | VARBINARY | byte[] | binary, byte[] |
| MaterializedBlobType | BLOB | byte[] | materialized_blob |
| ImageType | LONGVARBINARY | byte[] | image |
| WrapperBinaryType | VARBINARY | java.lang.Byte[] | wrapper-binary, Byte[], java.lang.Byte[] |
| CharArrayType | VARCHAR | char[] | characters, char[] |
| CharacterArrayType | VARCHAR | java.lang.Character[] | wrapper-characters, Character[], java.lang.Character[] |
| UUIDBinaryType | BINARY | java.util.UUID | uuid-binary, java.util.UUID |
| UUIDCharType | CHAR, can also read VARCHAR | java.util.UUID | uuid-char |
| PostgresUUIDType | PostgreSQL UUID, through Types#OTHER, which complies to the PostgreSQL JDBC driver definition | java.util.UUID | pg-uuid |
| SerializableType | VARBINARY | implementors of java.io.Serializable | Unlike the other value types, multiple instances of this type are registered. It is registered once under java.io.Serializable, and registered under the specific java.io.Serializable implementation class names. |
| StringNVarcharType | NVARCHAR | java.lang.String | nstring |
| NTextType | LONGNVARCHAR | java.lang.String | ntext |
| NClobType | NCLOB | java.sql.NClob | nclob, java.sql.NClob |
| MaterializedNClobType | NCLOB | java.lang.String | materialized_nclob |
| PrimitiveCharacterArrayNClobType | NCHAR | char[] | N/A |
| CharacterNCharType | NCHAR | java.lang.Character | ncharacter |
| CharacterArrayNClobType | NCLOB | java.lang.Character[] | N/A |
Java 8 BasicTypes

| Hibernate type (org.hibernate.type package) | JDBC type | Java type | BasicTypeRegistry key(s) |
|---|---|---|---|
| DurationType | BIGINT | java.time.Duration | Duration, java.time.Duration |
| InstantType | TIMESTAMP | java.time.Instant | Instant, java.time.Instant |
| LocalDateTimeType | TIMESTAMP | java.time.LocalDateTime | LocalDateTime, java.time.LocalDateTime |
| LocalDateType | DATE | java.time.LocalDate | LocalDate, java.time.LocalDate |
| LocalTimeType | TIME | java.time.LocalTime | LocalTime, java.time.LocalTime |
| OffsetDateTimeType | TIMESTAMP | java.time.OffsetDateTime | OffsetDateTime, java.time.OffsetDateTime |
| OffsetTimeType | TIME | java.time.OffsetTime | OffsetTime, java.time.OffsetTime |
| ZonedDateTimeType | TIMESTAMP | java.time.ZonedDateTime | ZonedDateTime, java.time.ZonedDateTime |
To use these hibernate-java8 types, just add the hibernate-java8 dependency to your classpath; the Maven coordinates are shown in the Mapping Java 8 Date/Time Values section below.
These mappings are managed by a service inside Hibernate called the org.hibernate.type.BasicTypeRegistry
, which essentially maintains a map of org.hibernate.type.BasicType
(a org.hibernate.type.Type
specialization) instances keyed by a name.
That is the purpose of the "BasicTypeRegistry key(s)" column in the previous tables.
The @Basic
annotation
Strictly speaking, a basic type is denoted with the javax.persistence.Basic
annotation.
Generally speaking, the @Basic
annotation can be ignored, as it is assumed by default.
Both of the following examples are ultimately the same.
@Basic
declared explicitly@Entity(name = "Product")
public class Product {
@Id
@Basic
private Integer id;
@Basic
private String sku;
@Basic
private String name;
@Basic
private String description;
}
@Basic
being implicitly implied@Entity(name = "Product")
public class Product {
@Id
private Integer id;
private String sku;
private String name;
private String description;
}
The JPA specification strictly limits the Java types that can be marked as basic to the following listing:
- Java primitive types (boolean, int, etc.) and their wrapper equivalents
- java.lang.String
- java.math.BigInteger and java.math.BigDecimal
- java.util.Date and java.util.Calendar
- java.sql.Date, java.sql.Time and java.sql.Timestamp
- byte[] and Byte[]
- char[] and Character[]
- enums
- any other type that implements Serializable (JPA's "support" for Serializable types is to directly serialize their state to the database)
If provider portability is a concern, you should stick to just these basic types.
Note that JPA 2.1 did add the notion of a javax.persistence.AttributeConverter to help alleviate some of these concerns; see the JPA 2.1 AttributeConverters section for more on this topic.
The @Basic
annotation defines 2 attributes.
optional - boolean (defaults to true)
- Defines whether this attribute allows nulls. JPA defines this as "a hint", which essentially means that its effect is specifically not required. As long as the type is not primitive, Hibernate takes this to mean that the underlying column should be NULLABLE.
fetch - FetchType (defaults to EAGER)
- Defines whether this attribute should be fetched eagerly or lazily. JPA says that EAGER is a requirement to the provider (Hibernate) that the value should be fetched when the owner is fetched, while LAZY is merely a hint that the value be fetched when the attribute is accessed. Hibernate ignores this setting for basic types unless you are using bytecode enhancement. See the Bytecode Enhancement section for additional information on fetching and on bytecode enhancement.
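As a short illustration (a hedged sketch; the attribute names are only for demonstration), the two attributes can be combined like this:

@Entity(name = "Product")
public class Product {

	@Id
	private Integer id;

	// optional = false tells the provider the attribute never holds null,
	// so the generated column is NOT NULL
	@Basic( optional = false )
	private String sku;

	// LAZY is only a hint for basic attributes; Hibernate honors it
	// only when bytecode enhancement is enabled
	@Basic( fetch = FetchType.LAZY )
	private String description;
}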
The @Column
annotation
JPA defines rules for implicitly determining the name of tables and columns. For a detailed discussion of implicit naming see Naming.
For basic type attributes, the implicit naming rule is that the column name is the same as the attribute name. If that implicit naming rule does not meet your requirements, you can explicitly tell Hibernate (and other providers) the column name to use.
@Entity(name = "Product")
public class Product {
@Id
private Integer id;
private String sku;
private String name;
@Column( name = "NOTES" )
private String description;
}
Here we use @Column
to explicitly map the description
attribute to the NOTES
column, as opposed to the implicit column name description
.
The @Column
annotation defines other mapping information as well. See its Javadocs for details.
BasicTypeRegistry
We said before that a Hibernate type is not a Java type, nor a SQL type, but that it understands both and performs the marshalling between them.
But looking at the basic type mappings from the previous examples,
how did Hibernate know to use its org.hibernate.type.StringType
for mapping for java.lang.String
attributes,
or its org.hibernate.type.IntegerType
for mapping java.lang.Integer
attributes?
The answer lies in a service inside Hibernate called the org.hibernate.type.BasicTypeRegistry
, which essentially maintains a map of org.hibernate.type.BasicType
(a org.hibernate.type.Type
specialization) instances keyed by a name.
We will see later, in the Explicit BasicTypes section, that we can explicitly tell Hibernate which BasicType to use for a particular attribute. But first let’s explore how implicit resolution works and how applications can adjust implicit resolution.
A thorough discussion of the BasicTypeRegistry and all the different ways to contribute types to it is beyond the scope of this documentation; please see the Integrations Guide for more information.
As an example, take a String attribute such as we saw before with Product#sku.
Since there was no explicit type mapping, Hibernate looks to the BasicTypeRegistry
to find the registered mapping for java.lang.String
.
This goes back to the "BasicTypeRegistry key(s)" column we saw in the tables at the start of this chapter.
As a baseline within BasicTypeRegistry
, Hibernate follows the recommended mappings of JDBC for Java types.
JDBC recommends mapping Strings to VARCHAR, which is the exact mapping that StringType
handles.
So that is the baseline mapping within BasicTypeRegistry
for Strings.
Applications can also extend (add new BasicType
registrations) or override (replace an existing BasicType
registration) using one of the
MetadataBuilder#applyBasicType
methods or the MetadataBuilder#applyTypes
method during bootstrap.
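As a hedged sketch (the choice of re-keying the nationalized StringNVarcharType under the plain string keys is purely illustrative), such a bootstrap-time adjustment might look like this:

ServiceRegistry standardRegistry =
		new StandardServiceRegistryBuilder().build();

MetadataSources sources = new MetadataSources( standardRegistry );
MetadataBuilder metadataBuilder = sources.getMetadataBuilder();

// override the baseline String handling so that plain String attributes
// resolve to the nationalized (NVARCHAR) variant instead
metadataBuilder.applyBasicType(
		StringNVarcharType.INSTANCE,
		"string",
		String.class.getName()
);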
For more details, see Custom BasicTypes section.
Explicit BasicTypes
Sometimes you want a particular attribute to be handled differently.
Occasionally Hibernate will implicitly pick a BasicType
that you do not want (and for some reason you do not want to adjust the BasicTypeRegistry
).
In these cases you must explicitly tell Hibernate the BasicType
to use, via the org.hibernate.annotations.Type
annotation.
@org.hibernate.annotations.Type
@Entity(name = "Product")
public class Product {
@Id
private Integer id;
private String sku;
@org.hibernate.annotations.Type( type = "nstring" )
private String name;
@org.hibernate.annotations.Type( type = "materialized_nclob" )
private String description;
}
This tells Hibernate to store the Strings as nationalized data. This is just for illustration purposes; for better ways to indicate nationalized character data see Mapping Nationalized Character Data section.
Additionally, the description is to be handled as a LOB. Again, for better ways to indicate LOBs see Mapping LOBs section.
The org.hibernate.annotations.Type#type
attribute can name any of the following:
-
Fully qualified name of any
org.hibernate.type.Type
implementation -
Any key registered with
BasicTypeRegistry
-
The name of any known type definitions
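The last option refers to type definitions declared with Hibernate's @TypeDef annotation. As a hedged sketch (reusing the BitSetType implemented in the Custom BasicTypes section below), a type definition gives the custom type a short name that @Type can then reference:

// the type definition maps the short name "bitset" to the implementation class
@TypeDef( name = "bitset", typeClass = BitSetType.class )
@Entity(name = "Product")
public class Product {

	@Id
	private Integer id;

	// resolved through the "bitset" type definition declared above
	@Type( type = "bitset" )
	private BitSet bitSet;
}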
Custom BasicTypes
Hibernate makes it relatively easy for developers to create their own basic type mappings.
For example, you might want to persist properties of type java.math.BigInteger
to VARCHAR
columns, or support completely new types.
There are two approaches to developing a custom type:
-
implementing a
BasicType
and registering it -
implement a
UserType
which doesn’t require type registration
As a means of illustrating the different approaches, let’s consider a use case where we need to support a java.util.BitSet
mapping that’s stored as a VARCHAR.
Implementing a BasicType
The first approach is to directly implement the BasicType
interface.
Because the BasicType interface has a lot of methods to implement, if the value is stored in a single database column, it is much more convenient to extend the AbstractStandardBasicType or the AbstractSingleColumnStandardBasicType Hibernate classes.
First, we need to extend the AbstractSingleColumnStandardBasicType
like this:
BasicType
implementationpublic class BitSetType
extends AbstractSingleColumnStandardBasicType<BitSet>
implements DiscriminatorType<BitSet> {
public static final BitSetType INSTANCE = new BitSetType();
public BitSetType() {
super( VarcharTypeDescriptor.INSTANCE, BitSetTypeDescriptor.INSTANCE );
}
@Override
public BitSet stringToObject(String xml) throws Exception {
return fromString( xml );
}
@Override
public String objectToSQLString(BitSet value, Dialect dialect) throws Exception {
return toString( value );
}
@Override
public String getName() {
return "bitset";
}
}
The AbstractSingleColumnStandardBasicType
requires an sqlTypeDescriptor
and a javaTypeDescriptor
.
The sqlTypeDescriptor
is VarcharTypeDescriptor.INSTANCE
because the database column is a VARCHAR.
On the Java side, we need to use a BitSetTypeDescriptor
instance which can be implemented like this:
AbstractTypeDescriptor
implementationpublic class BitSetTypeDescriptor extends AbstractTypeDescriptor<BitSet> {
private static final String DELIMITER = ",";
public static final BitSetTypeDescriptor INSTANCE = new BitSetTypeDescriptor();
public BitSetTypeDescriptor() {
super( BitSet.class );
}
@Override
public String toString(BitSet value) {
StringBuilder builder = new StringBuilder();
for ( long token : value.toLongArray() ) {
if ( builder.length() > 0 ) {
builder.append( DELIMITER );
}
builder.append( Long.toString( token, 2 ) );
}
return builder.toString();
}
@Override
public BitSet fromString(String string) {
if ( string == null || string.isEmpty() ) {
return null;
}
String[] tokens = string.split( DELIMITER );
long[] values = new long[tokens.length];
for ( int i = 0; i < tokens.length; i++ ) {
values[i] = Long.valueOf( tokens[i], 2 );
}
return BitSet.valueOf( values );
}
@SuppressWarnings({"unchecked"})
public <X> X unwrap(BitSet value, Class<X> type, WrapperOptions options) {
if ( value == null ) {
return null;
}
if ( BitSet.class.isAssignableFrom( type ) ) {
return (X) value;
}
if ( String.class.isAssignableFrom( type ) ) {
return (X) toString( value);
}
throw unknownUnwrap( type );
}
public <X> BitSet wrap(X value, WrapperOptions options) {
if ( value == null ) {
return null;
}
if ( String.class.isInstance( value ) ) {
return fromString( (String) value );
}
if ( BitSet.class.isInstance( value ) ) {
return (BitSet) value;
}
throw unknownWrap( value.getClass() );
}
}
The unwrap
method is used when passing a BitSet
as a PreparedStatement
bind parameter, while the wrap
method is used to transform the JDBC column value object (e.g. String
in our case) to the actual mapping object type (e.g. BitSet
in this example).
The BasicType
must be registered, and this can be done at bootstrapping time:
BasicType
implementationconfiguration.registerTypeContributor( (typeContributions, serviceRegistry) -> {
typeContributions.contributeType( BitSetType.INSTANCE );
} );
or using the MetadataBuilder
ServiceRegistry standardRegistry =
new StandardServiceRegistryBuilder().build();
MetadataSources sources = new MetadataSources( standardRegistry );
MetadataBuilder metadataBuilder = sources.getMetadataBuilder();
metadataBuilder.applyBasicType( BitSetType.INSTANCE );
With the new BitSetType
being registered as bitset
, the entity mapping looks like this:
BasicType
mapping@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Type( type = "bitset" )
private BitSet bitSet;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public BitSet getBitSet() {
return bitSet;
}
public void setBitSet(BitSet bitSet) {
this.bitSet = bitSet;
}
}
To validate this new BasicType
implementation, we can test it as follows:
BasicType
BitSet bitSet = BitSet.valueOf( new long[] {1, 2, 3} );
doInHibernate( this::sessionFactory, session -> {
Product product = new Product( );
product.setId( 1 );
product.setBitSet( bitSet );
session.persist( product );
} );
doInHibernate( this::sessionFactory, session -> {
Product product = session.get( Product.class, 1 );
assertEquals(bitSet, product.getBitSet());
} );
When executing this unit test, Hibernate generates the following SQL statements:
BasicType
DEBUG SQL:92 -
insert
into
Product
(bitSet, id)
values
(?, ?)
TRACE BasicBinder:65 - binding parameter [1] as [VARCHAR] - [{0, 65, 128, 129}]
TRACE BasicBinder:65 - binding parameter [2] as [INTEGER] - [1]
DEBUG SQL:92 -
select
bitsettype0_.id as id1_0_0_,
bitsettype0_.bitSet as bitSet2_0_0_
from
Product bitsettype0_
where
bitsettype0_.id=?
TRACE BasicBinder:65 - binding parameter [1] as [INTEGER] - [1]
TRACE BasicExtractor:61 - extracted value ([bitSet2_0_0_] : [VARCHAR]) - [{0, 65, 128, 129}]
As you can see, the BitSetType
takes care of the Java-to-SQL and SQL-to-Java type conversion.
Implementing a UserType
The second approach is to implement the UserType
interface.
UserType
implementationpublic class BitSetUserType implements UserType {
public static final BitSetUserType INSTANCE = new BitSetUserType();
private static final Logger log = Logger.getLogger( BitSetUserType.class );
@Override
public int[] sqlTypes() {
return new int[] {StringType.INSTANCE.sqlType()};
}
@Override
public Class returnedClass() {
return String.class;
}
@Override
public boolean equals(Object x, Object y)
throws HibernateException {
return Objects.equals( x, y );
}
@Override
public int hashCode(Object x)
throws HibernateException {
return Objects.hashCode( x );
}
@Override
public Object nullSafeGet(
ResultSet rs, String[] names, SessionImplementor session, Object owner)
throws HibernateException, SQLException {
String columnName = names[0];
String columnValue = (String) rs.getObject( columnName );
log.debugv("Result set column {0} value is {1}", columnName, columnValue);
return columnValue == null ? null :
BitSetTypeDescriptor.INSTANCE.fromString( columnValue );
}
@Override
public void nullSafeSet(
PreparedStatement st, Object value, int index, SessionImplementor session)
throws HibernateException, SQLException {
if ( value == null ) {
log.debugv("Binding null to parameter {0} ",index);
st.setNull( index, Types.VARCHAR );
}
else {
String stringValue = BitSetTypeDescriptor.INSTANCE.toString( (BitSet) value );
log.debugv("Binding {0} to parameter {1} ", stringValue, index);
st.setString( index, stringValue );
}
}
@Override
public Object deepCopy(Object value)
throws HibernateException {
return value == null ? null :
BitSet.valueOf( BitSet.class.cast( value ).toLongArray() );
}
@Override
public boolean isMutable() {
return true;
}
@Override
public Serializable disassemble(Object value)
throws HibernateException {
return (BitSet) deepCopy( value );
}
@Override
public Object assemble(Serializable cached, Object owner)
throws HibernateException {
return deepCopy( cached );
}
@Override
public Object replace(Object original, Object target, Object owner)
throws HibernateException {
return deepCopy( original );
}
}
The entity mapping looks as follows:
UserType
mapping@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
@Type( type = "bitset" )
private BitSet bitSet;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public BitSet getBitSet() {
return bitSet;
}
public void setBitSet(BitSet bitSet) {
this.bitSet = bitSet;
}
}
In this example, the UserType
is registered under the bitset
name, and this is done like this:
UserType
implementationconfiguration.registerTypeContributor( (typeContributions, serviceRegistry) -> {
typeContributions.contributeType( BitSetUserType.INSTANCE, "bitset");
} );
or using the MetadataBuilder
ServiceRegistry standardRegistry =
new StandardServiceRegistryBuilder().build();
MetadataSources sources = new MetadataSources( standardRegistry );
MetadataBuilder metadataBuilder = sources.getMetadataBuilder();
metadataBuilder.applyBasicType( BitSetUserType.INSTANCE, "bitset" );
Like the BasicType, a UserType can be registered under a simple name. Without registration, the @Type annotation would have to reference the fully qualified class name of the UserType instead.
When running the previous test case against the BitSetUserType
entity mapping, Hibernate executed the following SQL statements:
BasicType
DEBUG SQL:92 -
insert
into
Product
(bitSet, id)
values
(?, ?)
DEBUG BitSetUserType:71 - Binding 1,10,11 to parameter 1
TRACE BasicBinder:65 - binding parameter [2] as [INTEGER] - [1]
DEBUG SQL:92 -
select
bitsetuser0_.id as id1_0_0_,
bitsetuser0_.bitSet as bitSet2_0_0_
from
Product bitsetuser0_
where
bitsetuser0_.id=?
TRACE BasicBinder:65 - binding parameter [1] as [INTEGER] - [1]
DEBUG BitSetUserType:56 - Result set column bitSet2_0_0_ value is 1,10,11
Mapping enums
Hibernate supports the mapping of Java enums as basic value types in a number of different ways.
@Enumerated
The original JPA-compliant way to map enums was via the @Enumerated (and, for map keys, @MapKeyEnumerated) annotations, which work on the principle that the enum values are stored according to one of two strategies indicated by javax.persistence.EnumType:
ORDINAL
- stored according to the enum value's ordinal position within the enum class, as indicated by java.lang.Enum#ordinal
STRING
- stored according to the enum value's name, as indicated by java.lang.Enum#name
Assuming the following enumeration:
PhoneType
enumerationpublic enum PhoneType {
LAND_LINE,
MOBILE;
}
In the ORDINAL example, the phone_type column is defined as a (nullable) INTEGER type and would hold:
NULL - For null values
0 - For the LAND_LINE enum
1 - For the MOBILE enum
@Enumerated(ORDINAL)
example@Entity(name = "Phone")
public static class Phone {
@Id
private Long id;
@Column(name = "phone_number")
private String number;
@Enumerated(EnumType.ORDINAL)
@Column(name = "phone_type")
private PhoneType type;
//Getters and setters are omitted for brevity
}
When persisting this entity, Hibernate generates the following SQL statement:
@Enumerated(ORDINAL)
mappingPhone phone = new Phone( );
phone.setId( 1L );
phone.setNumber( "123-456-78990" );
phone.setType( PhoneType.MOBILE );
entityManager.persist( phone );
INSERT INTO Phone (phone_number, phone_type, id)
VALUES ('123-456-78990', 1, 1)
In the STRING example, the phone_type column is defined as a (nullable) VARCHAR type and would hold:
NULL - For null values
LAND_LINE - For the LAND_LINE enum
MOBILE - For the MOBILE enum
@Enumerated(STRING)
example@Entity(name = "Phone")
public static class Phone {
@Id
private Long id;
@Column(name = "phone_number")
private String number;
@Enumerated(EnumType.STRING)
@Column(name = "phone_type")
private PhoneType type;
//Getters and setters are omitted for brevity
}
Persisting the same entity like in the @Enumerated(ORDINAL)
example, Hibernate generates the following SQL statement:
@Enumerated(STRING)
mappingINSERT INTO Phone (phone_number, phone_type, id)
VALUES ('123-456-78990', 'MOBILE', 1)
AttributeConverter
Let’s consider the following Gender
enum which stores its values using the 'M'
and 'F'
codes.
public enum Gender {
MALE( 'M' ),
FEMALE( 'F' );
private final char code;
Gender(char code) {
this.code = code;
}
public static Gender fromCode(char code) {
if ( code == 'M' || code == 'm' ) {
return MALE;
}
if ( code == 'F' || code == 'f' ) {
return FEMALE;
}
throw new UnsupportedOperationException(
"The code " + code + " is not supported!"
);
}
public char getCode() {
return code;
}
}
You can map enums in a JPA compliant way using a JPA 2.1 AttributeConverter.
AttributeConverter
example@Entity(name = "Person")
public static class Person {
@Id
private Long id;
private String name;
@Convert( converter = GenderConverter.class )
public Gender gender;
//Getters and setters are omitted for brevity
}
@Converter
public static class GenderConverter
implements AttributeConverter<Gender, Character> {
public Character convertToDatabaseColumn( Gender value ) {
if ( value == null ) {
return null;
}
return value.getCode();
}
public Gender convertToEntityAttribute( Character value ) {
if ( value == null ) {
return null;
}
return Gender.fromCode( value );
}
}
Here, the gender column is defined as a CHAR type and would hold:
NULL - For null values
'M' - For the MALE enum
'F' - For the FEMALE enum
For additional details on using AttributeConverters, see JPA 2.1 AttributeConverters section.
JPA explicitly disallows the use of an AttributeConverter with an attribute marked as @Enumerated, so if you use the AttributeConverter approach, be sure the attribute is not also marked as @Enumerated.
Custom type
You can also map enums using a Hibernate custom type mapping.
Let’s again revisit the Gender enum example, this time using a custom Type to store the more standardized 'M'
and 'F'
codes.
@Entity(name = "Person")
public static class Person {
@Id
private Long id;
private String name;
@Type( type = "org.hibernate.userguide.mapping.basic.GenderType" )
public Gender gender;
//Getters and setters are omitted for brevity
}
public class GenderType extends AbstractSingleColumnStandardBasicType<Gender> {
public static final GenderType INSTANCE = new GenderType();
public GenderType() {
super(
CharTypeDescriptor.INSTANCE,
GenderJavaTypeDescriptor.INSTANCE
);
}
public String getName() {
return "gender";
}
@Override
protected boolean registerUnderJavaType() {
return true;
}
}
public class GenderJavaTypeDescriptor extends AbstractTypeDescriptor<Gender> {
public static final GenderJavaTypeDescriptor INSTANCE =
new GenderJavaTypeDescriptor();
protected GenderJavaTypeDescriptor() {
super( Gender.class );
}
public String toString(Gender value) {
return value == null ? null : value.name();
}
public Gender fromString(String string) {
return string == null ? null : Gender.valueOf( string );
}
public <X> X unwrap(Gender value, Class<X> type, WrapperOptions options) {
return CharacterTypeDescriptor.INSTANCE.unwrap(
value == null ? null : value.getCode(),
type,
options
);
}
public <X> Gender wrap(X value, WrapperOptions options) {
return Gender.fromCode(
CharacterTypeDescriptor.INSTANCE.wrap( value, options )
);
}
}
Again, the gender column is defined as a CHAR type and would hold:
NULL - For null values
'M' - For the MALE enum
'F' - For the FEMALE enum
For additional details on using custom types, see Custom BasicTypes section.
Mapping LOBs
Mapping LOBs (database Large Objects) comes in two forms: those using the JDBC locator types and those materializing the LOB data.
JDBC LOB locators exist to allow efficient access to the LOB data. They allow the JDBC driver to stream parts of the LOB data as needed, potentially freeing up memory space. However they can be unnatural to deal with and have certain limitations. For example, a LOB locator is only portably valid during the duration of the transaction in which it was obtained.
The idea of materialized LOBs is to trade off the potential efficiency (not all drivers handle LOB data efficiently) for a more natural programming paradigm using familiar Java types such as String or byte[] for these LOBs.
The materialized form deals with the entire LOB contents in memory, whereas LOB locators (in theory) allow streaming parts of the LOB contents into memory as needed.
The JDBC LOB locator types include:
-
java.sql.Blob
-
java.sql.Clob
-
java.sql.NClob
Mapping materialized forms of these LOB values uses more familiar Java types such as String, char[] or byte[].
The trade-off for this familiarity is usually performance.
For a first look, let’s assume we have a CLOB
column that we would like to map (NCLOB
character LOB
data will be covered in Mapping Nationalized Character Data section).
CREATE TABLE Product (
    id INTEGER NOT NULL ,
    name VARCHAR(255) ,
    warranty CLOB ,
    PRIMARY KEY ( id )
)
Let’s first map this using the @Lob
JPA annotation and the java.sql.Clob
type:
CLOB
mapped to java.sql.Clob
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
private Clob warranty;
//Getters and setters are omitted for brevity
}
To persist such an entity, you have to create a Clob
using plain JDBC:
java.sql.Clob
entityString warranty = "My product warranty";
final Product product = new Product();
product.setId( 1 );
product.setName( "Mobile phone" );
session.doWork( connection -> {
product.setWarranty( ClobProxy.generateProxy( warranty ) );
} );
entityManager.persist( product );
To retrieve the Clob
content, you need to transform the underlying java.io.Reader
:
java.sql.Clob
entityProduct product = entityManager.find( Product.class, productId );
try (Reader reader = product.getWarranty().getCharacterStream()) {
assertEquals( "My product warranty", toString( reader ) );
}
We could also map the CLOB in a materialized form. This way, we can either use a String
or a char[]
.
CLOB
mapped to String
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
private String warranty;
//Getters and setters are omitted for brevity
}
How JDBC deals with LOB data varies from driver to driver, and Hibernate tries to handle that variance on your behalf. However, some drivers are trickier (e.g. PostgreSQL JDBC drivers), and, in such cases, you may have to do some extra steps to get LOBs working. Such discussions are beyond the scope of this guide.
We might even want the materialized data as a char array (for some crazy reason).
char[]
mapping@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
private char[] warranty;
//Getters and setters are omitted for brevity
}
BLOB
data is mapped in a similar fashion.
CREATE TABLE Product (
id INTEGER NOT NULL ,
image blob ,
name VARCHAR(255) ,
PRIMARY KEY ( id )
)
Let’s first map this using the JDBC java.sql.Blob
type.
BLOB
mapped to java.sql.Blob
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
private Blob image;
//Getters and setters are omitted for brevity
}
To persist such an entity, you have to create a Blob
using plain JDBC:
java.sql.Blob
entitybyte[] image = new byte[] {1, 2, 3};
final Product product = new Product();
product.setId( 1 );
product.setName( "Mobile phone" );
session.doWork( connection -> {
product.setImage( BlobProxy.generateProxy( image ) );
} );
entityManager.persist( product );
To retrieve the Blob
content, you need to transform the underlying java.io.InputStream
:
java.sql.Blob
entityProduct product = entityManager.find( Product.class, productId );
try (InputStream inputStream = product.getImage().getBinaryStream()) {
assertArrayEquals(new byte[] {1, 2, 3}, toBytes( inputStream ) );
}
We could also map the BLOB in a materialized form (e.g. byte[]
).
BLOB
mapped to byte[]
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
private byte[] image;
//Getters and setters are omitted for brevity
}
Mapping Nationalized Character Data
JDBC 4 added the ability to explicitly handle nationalized character data. To this end it added specific nationalized character data types.
-
NCHAR
-
NVARCHAR
-
LONGNVARCHAR
-
NCLOB
NVARCHAR
- SQLCREATE TABLE Product (
id INTEGER NOT NULL ,
name VARCHAR(255) ,
warranty NVARCHAR(255) ,
PRIMARY KEY ( id )
)
To map a specific attribute to a nationalized variant data type, Hibernate defines the @Nationalized
annotation.
NVARCHAR
mapping@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Nationalized
private String warranty;
//Getters and setters are omitted for brevity
}
Just like with CLOB
, Hibernate can also deal with NCLOB
SQL data types:
NCLOB
- SQLCREATE TABLE Product (
id INTEGER NOT NULL ,
name VARCHAR(255) ,
warranty nclob ,
PRIMARY KEY ( id )
)
Hibernate can map the NCLOB
to a java.sql.NClob
NCLOB
mapped to java.sql.NClob
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
@Nationalized
// Clob also works, because NClob extends Clob.
// The database type is still NCLOB either way and handled as such.
private NClob warranty;
//Getters and setters are omitted for brevity
}
To persist such an entity, you have to create a NClob
using plain JDBC:
java.sql.NClob
entityString warranty = "My product warranty";
final Product product = new Product();
product.setId( 1 );
product.setName( "Mobile phone" );
session.doWork( connection -> {
product.setWarranty( connection.createNClob() );
product.getWarranty().setString( 1, warranty );
} );
entityManager.persist( product );
To retrieve the NClob
content, you need to transform the underlying java.io.Reader
:
java.sql.NClob
entityProduct product = entityManager.find( Product.class, productId );
try (Reader reader = product.getWarranty().getCharacterStream()) {
assertEquals( "My product warranty", toString( reader ) );
}
We could also map the NCLOB
in a materialized form. This way, we can either use a String
or a char[]
.
NCLOB
mapped to String
@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
@Nationalized
private String warranty;
//Getters and setters are omitted for brevity
}
We might even want the materialized data as a char array.
char[]
mapping@Entity(name = "Product")
public static class Product {
@Id
private Integer id;
private String name;
@Lob
@Nationalized
private char[] warranty;
//Getters and setters are omitted for brevity
}
If your application and database are entirely nationalized, you may instead want to enable nationalized character data as the default.
You can do this via the hibernate.use_nationalized_character_data configuration setting.
Mapping UUID Values
Hibernate also allows you to map UUID values, again in a number of ways.
The default UUID mapping is as binary because it represents more efficient storage.
However many applications prefer the readability of character storage.
To switch the default mapping, simply call MetadataBuilder#applyBasicType( UUIDCharType.INSTANCE, UUID.class.getName() ) during bootstrap, as sketched below.
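A minimal bootstrap sketch (assuming a standardRegistry built as in the earlier MetadataBuilder examples) might look like:

MetadataSources sources = new MetadataSources( standardRegistry );
MetadataBuilder metadataBuilder = sources.getMetadataBuilder();

// re-register the character-based UUID type under the java.util.UUID key,
// making it the default mapping for UUID attributes
metadataBuilder.applyBasicType( UUIDCharType.INSTANCE, UUID.class.getName() );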
UUID as binary
As mentioned, this is the default mapping for UUID attributes.
Maps the UUID to a byte[]
using java.util.UUID#getMostSignificantBits
and java.util.UUID#getLeastSignificantBits
and stores that as BINARY
data.
It was chosen as the default simply because it is generally more efficient from a storage perspective.
UUID as (var)char
Maps the UUID to a String using java.util.UUID#toString
and java.util.UUID#fromString
and stores that as CHAR
or VARCHAR
data.
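As a hedged sketch (attribute names are illustrative), the default binary mapping and the explicit character mapping can be contrasted like this:

@Entity(name = "Product")
public class Product {

	@Id
	private Integer id;

	// no annotation needed: UUID attributes default to the binary mapping
	private UUID defaultUuid;

	// explicitly stored as character data via the uuid-char registry key
	@Type( type = "uuid-char" )
	private UUID textUuid;
}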
PostgreSQL-specific UUID
When using one of the PostgreSQL Dialects, this becomes the default UUID mapping.
Maps the UUID using PostgreSQL’s specific UUID data type.
The PostgreSQL JDBC driver chooses to map its UUID type to the OTHER
code.
Note that this can cause difficulty as the driver chooses to map many different data types to OTHER
.
UUID as identifier
Hibernate supports using UUID values as identifiers, and they can even be generated on the user's behalf. For details, see the discussion of generators in Identifier generators.
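One way to wire this up (a hedged sketch; the Book entity and the choice of the uuid2 generator are illustrative) is:

@Entity(name = "Book")
public class Book {

	@Id
	@GeneratedValue( generator = "uuid2" )
	@GenericGenerator( name = "uuid2", strategy = "uuid2" )
	private UUID id;

	private String title;

	//Getters and setters are omitted for brevity
}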
Mapping Date/Time Values
Hibernate allows various Java Date/Time classes to be mapped as persistent domain model entity properties. The SQL standard defines three Date/Time types:
- DATE
-
Represents a calendar date by storing years, months and days. The JDBC equivalent is
java.sql.Date
- TIME
-
Represents the time of a day and it stores hours, minutes and seconds. The JDBC equivalent is
java.sql.Time
- TIMESTAMP
-
It stores both a DATE and a TIME plus nanoseconds. The JDBC equivalent is
java.sql.Timestamp
To avoid dependencies on the java.sql package, it is common to use the java.util or java.time Date/Time classes instead.
While the java.sql
classes define a direct association to the SQL Date/Time data types,
the java.util
or java.time
properties need to explicitly mark the SQL type correlation with the @Temporal
annotation.
This way, a java.util.Date
or a java.util.Calendar
can be mapped to either an SQL DATE
, TIME
or TIMESTAMP
type.
Considering the following entity:
java.util.Date
mapped as DATE
@Entity(name = "DateEvent")
public static class DateEvent {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@Temporal(TemporalType.DATE)
private Date timestamp;
//Getters and setters are omitted for brevity
}
When persisting such entity:
java.util.Date
mappingDateEvent dateEvent = new DateEvent( new Date() );
entityManager.persist( dateEvent );
Hibernate generates the following INSERT statement:
INSERT INTO DateEvent ( timestamp, id )
VALUES ( '2015-12-29', 1 )
Only the year, month and day fields were saved into the database.
If we change the @Temporal
type to TIME
:
java.util.Date
mapped as TIME
@Column(name = "`timestamp`")
@Temporal(TemporalType.TIME)
private Date timestamp;
Hibernate will issue an INSERT statement containing the hour, minutes and seconds.
INSERT INTO DateEvent ( timestamp, id )
VALUES ( '16:51:58', 1 )
When the @Temporal
type is set to TIMESTAMP
:
java.util.Date
mapped as TIMESTAMP
@Column(name = "`timestamp`")
@Temporal(TemporalType.TIMESTAMP)
private Date timestamp;
Hibernate will include the DATE
, the TIME
and the nanoseconds in the INSERT statement:
INSERT INTO DateEvent ( timestamp, id )
VALUES ( '2015-12-29 16:54:04.544', 1 )
Just like the java.util.Date, the java.util.Calendar also requires the @Temporal annotation so that Hibernate knows which JDBC data type to choose: DATE, TIME or TIMESTAMP.
Mapping Java 8 Date/Time Values
Java 8 came with a new Date/Time API, offering support for instant dates, intervals, local and zoned Date/Time immutable instances, bundled in the java.time
package.
Hibernate added support for the new Date/Time API in a new module, which must be included with the following Maven dependency:
hibernate-java8
Maven dependency<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-java8</artifactId>
<version>${hibernate.version}</version>
</dependency>
The mapping between the standard SQL Date/Time types and the supported Java 8 Date/Time class types looks as follows:
- DATE: java.time.LocalDate
- TIME: java.time.LocalTime, java.time.OffsetTime
- TIMESTAMP: java.time.Instant, java.time.LocalDateTime, java.time.OffsetDateTime and java.time.ZonedDateTime
Because the mapping between the Java 8 Date/Time classes and the SQL types is implicit, there is no need to specify the @Temporal annotation. Setting it on a java.time property throws the following exception: org.hibernate.AnnotationException: @Temporal should only be set on a java.util.Date or java.util.Calendar property
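For example, a java.time attribute can be mapped without any extra metadata (a hedged sketch mirroring the earlier DateEvent entity):

@Entity(name = "DateEvent")
public class DateEvent {

	@Id
	@GeneratedValue
	private Long id;

	// maps to a TIMESTAMP column implicitly; no @Temporal is allowed here
	@Column(name = "`timestamp`")
	private LocalDateTime timestamp;

	//Getters and setters are omitted for brevity
}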
JPA 2.1 AttributeConverters
Although Hibernate has long been offering custom types, as a JPA 2.1 provider, it also supports AttributeConverters.
With a custom AttributeConverter
, the application developer can map a given JDBC type to an entity basic type.
In the following example, the java.util.Period
is going to be mapped to a VARCHAR
database column.
java.util.Period
custom AttributeConverter
@Converter
public class PeriodStringConverter
implements AttributeConverter<Period, String> {
@Override
public String convertToDatabaseColumn(Period attribute) {
return attribute.toString();
}
@Override
public Period convertToEntityAttribute(String dbData) {
return Period.parse( dbData );
}
}
To make use of this custom converter, the @Convert
annotation must decorate the entity attribute.
java.util.Period
AttributeConverter
mapping@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
@Convert(converter = PeriodStringConverter.class)
@Column(columnDefinition = "")
private Period span;
//Getters and setters are omitted for brevity
}
When persisting such entity, Hibernate will do the type conversion based on the AttributeConverter
logic:
AttributeConverter
INSERT INTO Event ( span, id )
VALUES ( 'P1Y2M3D', 1 )
SQL quoted identifiers
You can force Hibernate to quote an identifier in the generated SQL by enclosing the table or column name in backticks in the mapping document. While traditionally, Hibernate used backticks for escaping SQL reserved keywords, JPA uses double quotes instead.
Once the reserved keywords are escaped, Hibernate will use the correct quotation style for the SQL Dialect
.
This is usually double quotes, but SQL Server uses brackets and MySQL uses backticks.
@Entity(name = "Product")
public static class Product {
@Id
private Long id;
@Column(name = "`name`")
private String name;
@Column(name = "`number`")
private String number;
//Getters and setters are omitted for brevity
}
@Entity(name = "Product")
public static class Product {
@Id
private Long id;
@Column(name = "\"name\"")
private String name;
@Column(name = "\"number\"")
private String number;
//Getters and setters are omitted for brevity
}
Because name
and number
are reserved words, the Product
entity mapping uses backticks to quote these column names.
When saving the following Product entity
, Hibernate generates the following SQL insert statement:
Product product = new Product();
product.setId( 1L );
product.setName( "Mobile phone" );
product.setNumber( "123-456-7890" );
entityManager.persist( product );
INSERT INTO Product ("name", "number", id)
VALUES ('Mobile phone', '123-456-7890', 1)
Global quoting
Hibernate can also quote all identifiers (e.g. table, columns) using the following configuration property:
<property
name="hibernate.globally_quoted_identifiers"
value="true"
/>
This way, we don’t need to manually quote any identifier:
@Entity(name = "Product")
public static class Product {
@Id
private Long id;
private String name;
private String number;
//Getters and setters are omitted for brevity
}
When persisting a Product
entity, Hibernate is going to quote all identifiers as in the following example:
INSERT INTO "Product" ("name", "number", "id")
VALUES ('Mobile phone', '123-456-7890', 1)
As you can see, both the table name and all the columns have been quoted.
For more about quoting-related configuration properties, check out the Mapping configurations section as well.
Generated properties
Generated properties are properties that have their values generated by the database.
Typically, Hibernate applications needed to refresh
objects that contain any properties for which the database was generating values.
Marking properties as generated, however, lets the application delegate this responsibility to Hibernate.
When Hibernate issues an SQL INSERT or UPDATE for an entity that has defined generated properties, it immediately issues a select to retrieve the generated values.
Properties marked as generated must additionally be non-insertable and non-updateable.
Only @Version
and @Basic
types can be marked as generated.
never
(the default)-
the given property value is not generated within the database.
insert
-
the given property value is generated on insert, but is not regenerated on subsequent updates. Properties like creationTimestamp fall into this category.
always
-
the property value is generated both on insert and on update.
To mark a property as generated, use the Hibernate-specific @Generated annotation.
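A hedged sketch of a database-generated creation timestamp follows (the entity and column names are illustrative, and the database is assumed to supply the value, e.g. through a DEFAULT clause or a trigger):

@Entity(name = "Person")
public class Person {

	@Id
	private Long id;

	private String name;

	// generated by the database on INSERT; Hibernate re-reads the value
	// immediately after inserting the row
	@Generated( GenerationTime.INSERT )
	@Column( insertable = false, updatable = false )
	private Date createdOn;

	//Getters and setters are omitted for brevity
}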
@ValueGenerationType meta-annotation
Hibernate 4.3 introduced the @ValueGenerationType
meta-annotation, which is a new approach to declaring generated attributes or customizing generators.
@Generated
has been retrofitted to use the @ValueGenerationType
meta-annotation.
But @ValueGenerationType
exposes more features than what @Generated
currently supports, and,
to leverage some of those features, you’d simply wire up a new generator annotation.
As you’ll see in the following examples, the @ValueGenerationType
meta-annotation is used when declaring the custom annotation used to mark the entity properties that need a specific generation strategy.
The actual generation logic must be implemented in a class that implements the AnnotationValueGeneration
interface.
Database-generated values
For example, let’s say we want the timestamps to be generated by calls to the standard ANSI SQL function current_timestamp
(rather than triggers or DEFAULT values):
ValueGenerationType
mapping for database generation@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@FunctionCreationTimestamp
private Date timestamp;
public Event() {}
public Long getId() {
return id;
}
public Date getTimestamp() {
return timestamp;
}
}
@ValueGenerationType(generatedBy = FunctionCreationValueGeneration.class)
@Retention(RetentionPolicy.RUNTIME)
public @interface FunctionCreationTimestamp {}
public static class FunctionCreationValueGeneration
implements AnnotationValueGeneration<FunctionCreationTimestamp> {
@Override
public void initialize(FunctionCreationTimestamp annotation, Class<?> propertyType) {
}
/**
* Generate value on INSERT
* @return when to generate the value
*/
public GenerationTiming getGenerationTiming() {
return GenerationTiming.INSERT;
}
/**
* Returns null because the value is generated by the database.
* @return null
*/
public ValueGenerator<?> getValueGenerator() {
return null;
}
/**
* Returns true because the value is generated by the database.
* @return true
*/
public boolean referenceColumnInSql() {
return true;
}
/**
* Returns the database-generated value
* @return database-generated value
*/
public String getDatabaseGeneratedReferencedColumnValue() {
return "current_timestamp";
}
}
When persisting an Event
entity, Hibernate generates the following SQL statement:
INSERT INTO Event ("timestamp", id)
VALUES (current_timestamp, 1)
As you can see, the current_timestamp
value was used for assigning the timestamp
column value.
In-memory-generated values
If the timestamp value needs to be generated in-memory, the following mapping must be used instead:
ValueGenerationType
mapping for in-memory value generation@Entity(name = "Event")
public static class Event {
@Id
@GeneratedValue
private Long id;
@Column(name = "`timestamp`")
@FunctionCreationTimestamp
private Date timestamp;
public Event() {}
public Long getId() {
return id;
}
public Date getTimestamp() {
return timestamp;
}
}
@ValueGenerationType(generatedBy = FunctionCreationValueGeneration.class)
@Retention(RetentionPolicy.RUNTIME)
public @interface FunctionCreationTimestamp {}
public static class FunctionCreationValueGeneration
implements AnnotationValueGeneration<FunctionCreationTimestamp> {
@Override
public void initialize(FunctionCreationTimestamp annotation, Class<?> propertyType) {
}
/**
* Generate value on INSERT
* @return when to generate the value
*/
public GenerationTiming getGenerationTiming() {
return GenerationTiming.INSERT;
}
/**
* Returns the in-memory generated value
* @return the value generator used to produce the in-memory value
*/
public ValueGenerator<?> getValueGenerator() {
return (session, owner) -> new Date( );
}
/**
* Returns false because the value is generated in-memory, not by the database.
* @return false
*/
public boolean referenceColumnInSql() {
return false;
}
/**
* Returns null because the value is generated in-memory.
* @return null
*/
public String getDatabaseGeneratedReferencedColumnValue() {
return null;
}
}
When persisting an Event
entity, Hibernate generates the following SQL statement:
INSERT INTO Event ("timestamp", id)
VALUES ('Tue Mar 01 10:58:18 EET 2016', 1)
As you can see, the new Date()
object value was used for assigning the timestamp
column value.
Column transformers: read and write expressions
Hibernate allows you to customize the SQL it uses to read and write the values of columns mapped to @Basic
types.
For example, if your database provides a set of data encryption functions, you can invoke them for individual columns like in the following example.
@ColumnTransformer
example@Entity(name = "Employee")
public static class Employee {
@Id
private Long id;
@NaturalId
private String username;
@Column(name = "pswd")
@ColumnTransformer(
read = "decrypt( 'AES', '00', pswd )",
write = "encrypt('AES', '00', ?)"
)
private String password;
private int accessLevel;
@ManyToOne(fetch = FetchType.LAZY)
private Department department;
@ManyToMany(mappedBy = "employees")
private List<Project> projects = new ArrayList<>();
//Getters and setters omitted for brevity
}
You can use the plural form @ColumnTransformers if more than one column needs to define either of these rules.
If a property uses more than one column, you must use the forColumn
attribute to specify which column the expressions are targeting.
@ColumnTransformer
forColumn
attribute usage@Entity(name = "Savings")
public static class Savings {
@Id
private Long id;
@Type(type = "org.hibernate.userguide.mapping.basic.MonetaryAmountUserType")
@Columns(columns = {
@Column(name = "money"),
@Column(name = "currency")
})
@ColumnTransformer(
forColumn = "money",
read = "money / 100",
write = "? * 100"
)
private MonetaryAmount wallet;
//Getters and setters omitted for brevity
}
Hibernate applies the custom expressions automatically whenever the property is referenced in a query. This functionality is similar to a derived-property @Formula with two differences:
-
The property is backed by one or more columns that are exported as part of automatic schema generation.
-
The property is read-write, not read-only.
The write
expression, if specified, must contain exactly one '?' placeholder for the value.
@ColumnTransformer
and a composite typedoInJPA( this::entityManagerFactory, entityManager -> {
Savings savings = new Savings( );
savings.setId( 1L );
savings.setWallet( new MonetaryAmount( BigDecimal.TEN, Currency.getInstance( Locale.US ) ) );
entityManager.persist( savings );
} );
doInJPA( this::entityManagerFactory, entityManager -> {
Savings savings = entityManager.find( Savings.class, 1L );
assertEquals( 10, savings.getWallet().getAmount().intValue());
} );
INSERT INTO Savings (money, currency, id)
VALUES (10 * 100, 'USD', 1)
SELECT
s.id as id1_0_0_,
s.money / 100 as money2_0_0_,
s.currency as currency3_0_0_
FROM
Savings s
WHERE
s.id = 1
@Formula
Sometimes, you want the database to do some computation for you rather than doing it in the JVM, and you might also want to create some kind of virtual column. You can use a SQL fragment (aka formula) instead of mapping a property into a column. This kind of property is read-only (its value is calculated by your formula fragment).
You should be aware that the @Formula annotation takes a native SQL clause, which may affect database portability.
@Formula
mapping usage@Entity(name = "Account")
public static class Account {
@Id
private Long id;
private Double credit;
private Double rate;
@Formula(value = "credit * rate")
private Double interest;
//Getters and setters omitted for brevity
}
When loading the Account
entity, Hibernate is going to calculate the interest
property using the configured @Formula
:
@Formula
mappingdoInJPA( this::entityManagerFactory, entityManager -> {
Account account = new Account( );
account.setId( 1L );
account.setCredit( 5000d );
account.setRate( 1.25 / 100 );
entityManager.persist( account );
} );
doInJPA( this::entityManagerFactory, entityManager -> {
Account account = entityManager.find( Account.class, 1L );
assertEquals( Double.valueOf( 62.5d ), account.getInterest());
} );
INSERT INTO Account (credit, rate, id)
VALUES (5000.0, 0.0125, 1)
SELECT
a.id as id1_0_0_,
a.credit as credit2_0_0_,
a.rate as rate3_0_0_,
a.credit * a.rate as formula0_0_
FROM
Account a
WHERE
a.id = 1
The SQL fragment can be as complex as you want and can even include subselects.
@Where
Sometimes, you want to filter out entities using a custom SQL criteria.
This can be achieved using the @Where
annotation, which can be applied to entities, as you can see in the following mapping.
@Where
mapping usagepublic enum AccountType {
DEBIT,
CREDIT
}
@Entity(name = "Client")
public static class Client {
@Id
private Long id;
private String name;
@OneToMany(mappedBy = "client")
private List<Account> debitAccounts = new ArrayList<>( );
@OneToMany(mappedBy = "client")
private List<Account> creditAccounts = new ArrayList<>( );
//Getters and setters omitted for brevity
}
@Entity(name = "Account")
@Where( clause = "active = true" )
public static class Account {
@Id
private Long id;
@ManyToOne
private Client client;
@Column(name = "account_type")
@Enumerated(EnumType.STRING)
private AccountType type;
private Double amount;
private Double rate;
private boolean active;
//Getters and setters omitted for brevity
}
If the database contains the following entities:
@Where
mappingdoInJPA( this::entityManagerFactory, entityManager -> {
Client client = new Client();
client.setId( 1L );
client.setName( "John Doe" );
entityManager.persist( client );
Account account1 = new Account( );
account1.setId( 1L );
account1.setType( AccountType.CREDIT );
account1.setAmount( 5000d );
account1.setRate( 1.25 / 100 );
account1.setActive( true );
account1.setClient( client );
client.getCreditAccounts().add( account1 );
entityManager.persist( account1 );
Account account2 = new Account( );
account2.setId( 2L );
account2.setType( AccountType.DEBIT );
account2.setAmount( 0d );
account2.setRate( 1.05 / 100 );
account2.setActive( false );
account2.setClient( client );
client.getDebitAccounts().add( account2 );
entityManager.persist( account2 );
Account account3 = new Account( );
account3.setType( AccountType.DEBIT );
account3.setId( 3L );
account3.setAmount( 250d );
account3.setRate( 1.05 / 100 );
account3.setActive( true );
account3.setClient( client );
client.getDebitAccounts().add( account3 );
entityManager.persist( account3 );
} );
INSERT INTO Client (name, id)
VALUES ('John Doe', 1)
INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (true, 5000.0, 1, 0.0125, 'CREDIT', 1)
INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (false, 0.0, 1, 0.0105, 'DEBIT', 2)
INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (true, 250.0, 1, 0.0105, 'DEBIT', 3)
When executing an Account
entity query, Hibernate is going to filter out all records that are not active.
@Where
doInJPA( this::entityManagerFactory, entityManager -> {
List<Account> accounts = entityManager.createQuery(
"select a from Account a", Account.class)
.getResultList();
assertEquals( 2, accounts.size());
} );
SELECT
a.id as id1_0_,
a.active as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE ( a.active = true )
@Filter
The @Filter
annotation is another way to filter out entities or collections using custom SQL criteria.
Unlike the @Where
annotation, @Filter
allows you to parameterize the filter clause at runtime.
@Filter
mapping usagepublic enum AccountType {
DEBIT,
CREDIT
}
@Entity(name = "Client")
public static class Client {
@Id
private Long id;
private String name;
@OneToMany(mappedBy = "client")
@Filter(name="activeAccount", condition="active = :active")
private List<Account> accounts = new ArrayList<>( );
//Getters and setters omitted for brevity
}
@Entity(name = "Account")
@FilterDef(name="activeAccount", parameters=@ParamDef( name="active", type="boolean" ) )
@Filter(name="activeAccount", condition="active = :active")
public static class Account {
@Id
private Long id;
@ManyToOne
private Client client;
@Column(name = "account_type")
@Enumerated(EnumType.STRING)
private AccountType type;
private Double amount;
private Double rate;
private boolean active;
//Getters and setters omitted for brevity
}
If the database contains the following entities:
@Filter
mappingdoInJPA( this::entityManagerFactory, entityManager -> {
Client client = new Client();
client.setId( 1L );
client.setName( "John Doe" );
entityManager.persist( client );
Account account1 = new Account( );
account1.setId( 1L );
account1.setType( AccountType.CREDIT );
account1.setAmount( 5000d );
account1.setRate( 1.25 / 100 );
account1.setActive( true );
account1.setClient( client );
client.getAccounts().add( account1 );
entityManager.persist( account1 );
Account account2 = new Account( );
account2.setId( 2L );
account2.setType( AccountType.DEBIT );
account2.setAmount( 0d );
account2.setRate( 1.05 / 100 );
account2.setActive( false );
account2.setClient( client );
client.getAccounts().add( account2 );
entityManager.persist( account2 );
Account account3 = new Account( );
account3.setType( AccountType.DEBIT );
account3.setId( 3L );
account3.setAmount( 250d );
account3.setRate( 1.05 / 100 );
account3.setActive( true );
account3.setClient( client );
client.getAccounts().add( account3 );
entityManager.persist( account3 );
} );
INSERT INTO Client (name, id)
VALUES ('John Doe', 1)
INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (true, 5000.0, 1, 0.0125, 'CREDIT', 1)
INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (false, 0.0, 1, 0.0105, 'DEBIT', 2)
INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (true, 250.0, 1, 0.0105, 'DEBIT', 3)
By default, without explicitly enabling the filter, Hibernate is going to fetch all Account
entities.
If the filter is enabled and the filter parameter value is provided,
then Hibernate is going to apply the filtering criteria to the associated Account
entities.
@Filter
doInJPA( this::entityManagerFactory, entityManager -> {
List<Account> accounts = entityManager.createQuery(
"select a from Account a", Account.class)
.getResultList();
assertEquals( 3, accounts.size());
} );
doInJPA( this::entityManagerFactory, entityManager -> {
log.infof( "Activate filter [%s]", "activeAccount");
entityManager
.unwrap( Session.class )
.enableFilter( "activeAccount" )
.setParameter( "active", true);
List<Account> accounts = entityManager.createQuery(
"select a from Account a", Account.class)
.getResultList();
assertEquals( 2, accounts.size());
} );
SELECT
a.id as id1_0_,
a.active as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
-- Activate filter [activeAccount]
SELECT
a.id as id1_0_,
a.active as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE
a.active = true
Just like with entities, collections can be filtered as well, but only if the filter is explicitly enabled on the currently running Hibernate Session
.
This way, when fetching the accounts
collections, Hibernate is going to apply the @Filter
clause filtering criteria to the associated collection entries.
@Filter
doInJPA( this::entityManagerFactory, entityManager -> {
Client client = entityManager.find( Client.class, 1L );
assertEquals( 3, client.getAccounts().size() );
} );
doInJPA( this::entityManagerFactory, entityManager -> {
log.infof( "Activate filter [%s]", "activeAccount");
entityManager
.unwrap( Session.class )
.enableFilter( "activeAccount" )
.setParameter( "active", true);
Client client = entityManager.find( Client.class, 1L );
assertEquals( 2, client.getAccounts().size() );
} );
SELECT
c.id as id1_1_0_,
c.name as name2_1_0_
FROM
Client c
WHERE
c.id = 1
SELECT
a.id as id1_0_,
a.active as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE
a.client_id = 1
-- Activate filter [activeAccount]
SELECT
c.id as id1_1_0_,
c.name as name2_1_0_
FROM
Client c
WHERE
c.id = 1
SELECT
a.id as id1_0_,
a.active as active2_0_,
a.amount as amount3_0_,
a.client_id as client_i6_0_,
a.rate as rate4_0_,
a.account_type as account_5_0_
FROM
Account a
WHERE
a.active = true
and a.client_id = 1
The main advantage of @Filter is that the filtering criteria can be customized at runtime, by enabling the filter and binding its parameter values on the current Session.
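For instance, the very same filter definition can be bound with different parameter values from one unit of work to the next. The following sketch (an illustration reusing the entities persisted above, not part of the original example) enables activeAccount with active set to false in order to fetch only the inactive account:
doInJPA( this::entityManagerFactory, entityManager -> {
    entityManager
        .unwrap( Session.class )
        .enableFilter( "activeAccount" )
        .setParameter( "active", false );

    List<Account> inactiveAccounts = entityManager.createQuery(
        "select a from Account a", Account.class )
        .getResultList();

    // Only the single inactive account (id = 2) matches the filter condition
    assertEquals( 1, inactiveAccounts.size() );
} );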
It's not possible to combine the @Filter annotation with second-level collection caching. If caching were allowed for a currently filtered collection, then the second-level cache would store only a subset of the whole collection. Afterward, every other Session would get the filtered collection from the cache, even if the Session-level filters had not been explicitly activated. For this reason, the second-level collection cache is limited to storing whole collections, and not subsets.
@Any mapping
There is one more type of property mapping.
The @Any
mapping defines a polymorphic association to classes from multiple tables.
This type of mapping requires more than one column.
The first column contains the type of the associated entity.
The remaining columns contain the identifier.
It is impossible to specify a foreign key constraint for this kind of association. This is not the usual way of mapping polymorphic associations, and you should use it only in special cases (e.g. audit logs, user session data, etc.).
The @Any
annotation describes the column holding the metadata information.
To link the value of the metadata information and an actual entity type, the @AnyMetaDef and @AnyMetaDefs annotations are used.
The metaType
attribute allows the application to specify a custom type that maps database column values to persistent classes that have identifier properties of the type specified by idType
.
You must specify the mapping from values of the metaType
to class names.
For the next examples, consider the following Property
class hierarchy:
Property class hierarchy
public interface Property<T> {
String getName();
T getValue();
}
@Entity
@Table(name="integer_property")
public class IntegerProperty implements Property<Integer> {
@Id
private Long id;
@Column(name = "`name`")
private String name;
@Column(name = "`value`")
private Integer value;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
@Override
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Integer getValue() {
return value;
}
public void setValue(Integer value) {
this.value = value;
}
}
@Entity
@Table(name="string_property")
public class StringProperty implements Property<String> {
@Id
private Long id;
@Column(name = "`name`")
private String name;
@Column(name = "`value`")
private String value;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
@Override
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
}
A PropertyHolder can reference either kind of property and, because each Property implementation is stored in a separate table, the @Any annotation is required.
@Any mapping usage
@Entity
@Table( name = "property_holder" )
public class PropertyHolder {
@Id
private Long id;
@Any(
metaDef = "PropertyMetaDef",
metaColumn = @Column( name = "property_type" )
)
@JoinColumn( name = "property_id" )
private Property property;
//Getters and setters are omitted for brevity
}
CREATE TABLE property_holder (
id BIGINT NOT NULL,
property_type VARCHAR(255),
property_id BIGINT,
PRIMARY KEY ( id )
)
As you can see, there are two columns used to reference a Property
instance: property_id
and property_type
.
The property_id column is used to match the id column of either the string_property or the integer_property table, while the property_type column is used to determine which of those two tables holds the associated row.
The table resolution is controlled by the metaDef attribute, which references an @AnyMetaDef mapping.
Although the @AnyMetaDef mapping could be placed right next to the @Any annotation, it's good practice to reuse it, so it makes sense to configure it at the class or package level.
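For comparison, the following sketch declares the definition directly on the entity class instead of in package-info.java. The definition name LocalPropertyMetaDef is purely illustrative and not part of the original example:
@Entity
@Table( name = "property_holder" )
@AnyMetaDef( name = "LocalPropertyMetaDef", metaType = "string", idType = "long",
    metaValues = {
        @MetaValue( value = "S", targetEntity = StringProperty.class ),
        @MetaValue( value = "I", targetEntity = IntegerProperty.class )
    }
)
public class PropertyHolder {

    @Id
    private Long id;

    @Any(
        metaDef = "LocalPropertyMetaDef",
        metaColumn = @Column( name = "property_type" )
    )
    @JoinColumn( name = "property_id" )
    private Property property;

    //Getters and setters are omitted for brevity
}
Declaring it at the package level, as shown next, has the advantage that a single definition can be shared by every entity in the package.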
The package-info.java
contains the @AnyMetaDef
mapping:
@Any mapping usage
@AnyMetaDefs(
@AnyMetaDef( name= "PropertyMetaDef", metaType = "string", idType = "long",
metaValues = {
@MetaValue(value = "S", targetEntity = StringProperty.class),
@MetaValue(value = "I", targetEntity = IntegerProperty.class)
}
)
)
package org.hibernate.userguide.mapping.basic.any;
import org.hibernate.annotations.AnyMetaDef;
import org.hibernate.annotations.AnyMetaDefs;
import org.hibernate.annotations.MetaValue;
It is recommended to place the @AnyMetaDef mapping at the package level, as in the package-info.java example above, so that it can be reused by every entity in that package.
To see the @Any annotation in action, consider the following example:
@Any mapping usage
doInHibernate( this::sessionFactory, session -> {
IntegerProperty ageProperty = new IntegerProperty();
ageProperty.setId( 1L );
ageProperty.setName( "age" );
ageProperty.setValue( 23 );
StringProperty nameProperty = new StringProperty();
nameProperty.setId( 1L );
nameProperty.setName( "name" );
nameProperty.setValue( "John Doe" );
session.persist( ageProperty );
session.persist( nameProperty );
PropertyHolder namePropertyHolder = new PropertyHolder();
namePropertyHolder.setId( 1L );
namePropertyHolder.setProperty( nameProperty );
session.persist( namePropertyHolder );
} );
doInHibernate( this::sessionFactory, session -> {
PropertyHolder propertyHolder = session.get( PropertyHolder.class, 1L );
assertEquals("name", propertyHolder.getProperty().getName());
assertEquals("John Doe", propertyHolder.getProperty().getValue());
} );
INSERT INTO integer_property
( "name", "value", id )
VALUES ( 'age', 23, 1 )
INSERT INTO string_property
( "name", "value", id )
VALUES ( 'name', 'John Doe', 1 )
INSERT INTO property_holder
( property_type, property_id, id )
VALUES ( 'S', 1, 1 )
SELECT ph.id AS id1_1_0_,
ph.property_type AS property2_1_0_,
ph.property_id AS property3_1_0_
FROM property_holder ph
WHERE ph.id = 1
SELECT sp.id AS id1_2_0_,
sp."name" AS name2_2_0_,
sp."value" AS value3_2_0_
FROM string_property sp
WHERE sp.id = 1
@ManyToAny mapping
The @Any
mapping is useful to emulate a @ManyToOne
association when there can be multiple target entities.
To emulate a @OneToMany
association, the @ManyToAny
annotation must be used.
In the following example, the PropertyRepository
entity has a collection of Property
entities.
The repository_properties
link table holds the associations between PropertyRepository
and Property
entities.
@ManyToAny mapping usage
@Entity
@Table( name = "property_repository" )
public class PropertyRepository {
@Id
private Long id;
@ManyToAny(
metaDef = "PropertyMetaDef",
metaColumn = @Column( name = "property_type" )
)
@Cascade( { org.hibernate.annotations.CascadeType.ALL })
@JoinTable(name = "repository_properties",
joinColumns = @JoinColumn(name = "repository_id"),
inverseJoinColumns = @JoinColumn(name = "property_id")
)
private List<Property<?>> properties = new ArrayList<>( );
//Getters and setters are omitted for brevity
}
CREATE TABLE property_repository (
id BIGINT NOT NULL,
PRIMARY KEY ( id )
)
CREATE TABLE repository_properties (
repository_id BIGINT NOT NULL,
property_type VARCHAR(255),
property_id BIGINT NOT NULL
)
To see how the @ManyToAny
annotation works, consider the following example:
@ManyToAny mapping usage
doInHibernate( this::sessionFactory, session -> {
IntegerProperty ageProperty = new IntegerProperty();
ageProperty.setId( 1L );
ageProperty.setName( "age" );
ageProperty.setValue( 23 );
StringProperty nameProperty = new StringProperty();
nameProperty.setId( 1L );
nameProperty.setName( "name" );
nameProperty.setValue( "John Doe" );
session.persist( ageProperty );
session.persist( nameProperty );
PropertyRepository propertyRepository = new PropertyRepository();
propertyRepository.setId( 1L );
propertyRepository.getProperties().add( ageProperty );
propertyRepository.getProperties().add( nameProperty );
session.persist( propertyRepository );
} );
doInHibernate( this::sessionFactory, session -> {
PropertyRepository propertyRepository = session.get( PropertyRepository.class, 1L );
assertEquals(2, propertyRepository.getProperties().size());
for(Property property : propertyRepository.getProperties()) {
assertNotNull( property.getValue() );
}
} );
INSERT INTO integer_property
( "name", "value", id )
VALUES ( 'age', 23, 1 )
INSERT INTO string_property
( "name", "value", id )
VALUES ( 'name', 'John Doe', 1 )
INSERT INTO property_repository ( id )
VALUES ( 1 )
INSERT INTO repository_properties
( repository_id , property_type , property_id )
VALUES
( 1 , 'I' , 1 )
INSERT INTO repository_properties
( repository_id , property_type , property_id )
VALUES
( 1 , 'S' , 1 )
SELECT pr.id AS id1_1_0_
FROM property_repository pr
WHERE pr.id = 1
SELECT ip.id AS id1_0_0_ ,
ip."name" AS name2_0_0_ ,
ip."value" AS value3_0_0_
FROM integer_property ip
WHERE ip.id = 1
SELECT sp.id AS id1_3_0_ ,
sp."name" AS name2_3_0_ ,
sp."value" AS value3_3_0_
FROM string_property sp
WHERE sp.id = 1