This guide discusses migration to Hibernate ORM version 6.2. For migration from earlier versions, see any other pertinent migration guides as well.
DDL type changes
OffsetTime mapping changes
`OffsetTime` now depends on `@TimeZoneStorage` and the `hibernate.timezone.default_storage` setting.
Since the default for this setting is now `TimeZoneStorageType.DEFAULT`, the DDL expectations for such columns have changed.
If the target database supports time zone types natively, like H2, Oracle, SQL Server and DB2 z/OS, the type code `SqlTypes.TIME_WITH_TIMEZONE` is now used, which maps to the DDL type `time with time zone`.
Due to this change, schema validation errors could occur on existing databases.
The migration to `time with time zone` requires a migration expression like `cast(old as time with time zone)`, which interprets the previous time as local time and computes the offset for the `time with time zone` based on the current date and the time zone settings of your database session.
If the target database does not support time zone types natively, Hibernate behaves just as before.
To retain backwards compatibility, configure the setting `hibernate.timezone.default_storage` to `NORMALIZE`.
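Alternatively, the storage strategy can be chosen per attribute with `@TimeZoneStorage`. Below is a minimal sketch (the entity and field names are hypothetical) that keeps the Hibernate ORM 5 behavior for a single `OffsetTime` column without touching the global setting:

```java
import java.time.OffsetTime;

import org.hibernate.annotations.TimeZoneStorage;
import org.hibernate.annotations.TimeZoneStorageType;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
class Appointment {
    @Id
    Long id;

    // Normalized to the configured JDBC time zone, as in Hibernate ORM 5;
    // the column stays a plain `time` and no schema migration is needed.
    @TimeZoneStorage(TimeZoneStorageType.NORMALIZE)
    OffsetTime startTime;
}
```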
UUID mapping changes on MariaDB
On MariaDB, the type code `SqlTypes.UUID` now by default refers to the DDL type `uuid`, whereas before it was using `binary(16)`.
Due to this change, schema validation errors could occur on existing databases.
The migration to `uuid` requires a migration expression like `cast(old as uuid)`.
To retain backwards compatibility, configure the setting `hibernate.type.preferred_uuid_jdbc_type` to `BINARY`.
UUID mapping changes on SQL Server
On SQL Server, the type code `SqlTypes.UUID` now by default refers to the DDL type `uniqueidentifier`, whereas before it was using `binary(16)`.
Due to this change, schema validation errors could occur on existing databases.
The migration to `uniqueidentifier` requires a migration expression like `cast(old as uniqueidentifier)`.
To retain backwards compatibility, configure the setting `hibernate.type.preferred_uuid_jdbc_type` to `BINARY`.
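For example, if you bootstrap through `org.hibernate.cfg.Configuration`, the legacy `binary(16)` storage can be restored as in the minimal sketch below (the class name is an assumption; the same property can equally be set in `hibernate.properties` or `persistence.xml`):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class LegacyUuidBootstrap {
    public static SessionFactory build() {
        return new Configuration()
                // revert SqlTypes.UUID to the previous binary(16) mapping
                // on MariaDB and SQL Server
                .setProperty( "hibernate.type.preferred_uuid_jdbc_type", "BINARY" )
                .buildSessionFactory();
    }
}
```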
JSON mapping changes on Oracle
On Oracle 12.1+, the type code `SqlTypes.JSON` now by default refers to the DDL type `blob`, and on 21+ to `json`, whereas before it was using `clob`.
Due to this change, schema validation errors could occur on existing databases.
The migration to `blob` and `json` requires a migration expression like `cast(old as blob)` and `cast(old as json)` respectively.
To get the old behavior, annotate the column with `@Column(columnDefinition = "clob")`.
This change was made because `blob` and `json` are far more efficient and because we don’t expect wide usage of `SqlTypes.JSON` yet.
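As an illustration, here is a minimal sketch (hypothetical entity and field names) of a JSON-mapped attribute that is pinned to the legacy `clob` column:

```java
import org.hibernate.annotations.JdbcTypeCode;
import org.hibernate.type.SqlTypes;

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
class Document {
    @Id
    Long id;

    // Without the columnDefinition this would now map to blob/json on Oracle
    // (json on H2); forcing "clob" preserves the pre-6.2 schema.
    @JdbcTypeCode(SqlTypes.JSON)
    @Column(columnDefinition = "clob")
    String payload;
}
```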
JSON mapping changes on H2
On H2 1.4.200+, the type code `SqlTypes.JSON` now by default refers to the DDL type `json`, whereas before it was using `clob`.
Due to this change, schema validation errors could occur on existing databases.
The migration to `json` requires a migration expression like `cast(old as json)`.
Note that this change in behavior is backwards compatible and you do not need to change your schema, unless you are running into schema validation errors and want to fix them.
To get the old behavior, annotate the column with `@Column(columnDefinition = "clob")`.
This change was made because the native `json` type is more efficient and because we don’t expect wide usage of `SqlTypes.JSON` yet.
Datatype for enums
Hibernate 6.1 changed the implicit SQL datatype for mapping enums from `TINYINT` to `SMALLINT` to account for Java supporting up to 32K enum entries, which would overflow a `TINYINT`. However, almost no one develops enums with that many entries. Starting in 6.2, the choice of implicit SQL datatype for storing enums is sensitive to the number of entries defined on the enum class: enums with more than 128 entries are implicitly stored as `SMALLINT`, otherwise `TINYINT` is used.
NOTE: On MySQL, enums are now stored using the `ENUM` datatype by default.
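For illustration, here is a minimal sketch (hypothetical enum and entity names) of an ordinal-mapped enum and the implicit DDL type it gets in 6.2:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.EnumType;
import jakarta.persistence.Enumerated;
import jakarta.persistence.Id;

// 3 entries, far fewer than 128, so the implicit DDL type is TINYINT;
// an enum with more than 128 entries would get SMALLINT instead.
enum Status { OPEN, IN_PROGRESS, CLOSED }

@Entity
class Ticket {
    @Id
    Long id;

    @Enumerated(EnumType.ORDINAL)
    Status status;
}
```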
Timezone and offset storage
`hibernate.timezone.default_storage` now defaults to `DEFAULT`, meaning:
- if the database/dialect supports it, time zones of date/time values are stored by using the `timestamp with time zone` SQL column type;
- otherwise, time zones of date/time values are not stored, and date/time values are normalized to UTC.
In Hibernate ORM 5, time zones were not stored, but normalized to the time zone set in `hibernate.jdbc.time_zone`, which defaults to the JVM time zone.
This discrepancy might lead to incorrect date/time values being loaded from the database for properties of type `OffsetDateTime` and `ZonedDateTime` if your application was migrated from Hibernate ORM 5 and was setting `hibernate.jdbc.time_zone` to a non-UTC timezone.
To revert to Hibernate ORM 5’s behavior, set the configuration property `hibernate.timezone.default_storage` to `NORMALIZE`.
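A minimal sketch of restoring the ORM 5 behavior for such an application, assuming a JPA-style bootstrap (the persistence unit name "my-pu" and the time zone are placeholders):

```java
import java.util.Map;

import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;

public class Orm5TimeZoneBootstrap {
    public static EntityManagerFactory build() {
        return Persistence.createEntityManagerFactory(
                "my-pu",
                Map.of(
                        // normalize to the JDBC time zone, as Hibernate ORM 5 did
                        "hibernate.timezone.default_storage", "NORMALIZE",
                        // the non-UTC zone the application already relied on
                        "hibernate.jdbc.time_zone", "Europe/Paris"
                )
        );
    }
}
```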
Byte[]/Character[] mapping changes
Hibernate historically allowed mapping `Byte[]` and `Character[]` in a domain model as basic values to the `VARBINARY` and `(N)VARCHAR` SQL types.
Strictly speaking, this is an inaccurate mapping. Because the Java wrapper types (`Byte` and `Character`) are used, null elements are allowed. However, it is not possible to store such domain values as `VARBINARY` and `(N)VARCHAR` SQL types. In fact, attempting to store such values leads to errors on previous versions. The legacy support has an implicit contract that the `Byte[]` and `Character[]` types are handled exactly the same as the `byte[]` and `char[]` variants.
Building on the ability to use structured SQL types (`ARRAY`, `SQLXML`, …) for storing basic values, 6.2 makes it configurable how to handle mappings of this type:
- `DISALLOW` (the default): throw an informative and actionable error.
- `ALLOW`: allow the use of the wrapper arrays, stored as structured SQL types (`ARRAY`, `SQLXML`, …) to maintain proper null element semantics.
- `LEGACY`: allow the use of the wrapper arrays, stored as `VARBINARY` and `VARCHAR`, disallowing null elements.
The main idea here is for applications using these types in the domain model to make a conscious decision about how these values are stored.
Some mappings are considered an implicit opt-in to the legacy behavior, e.g. using `@Lob` or `@Nationalized`.
For those using such mappings, there are a few options:
- Migrate the domain model to use `byte[]` and `char[]` instead.
- Specify `hibernate.type.wrapper_array_handling=legacy` to enable the legacy behavior.
- Specify `@JavaType(ByteArrayJavaType.class)` or `@JavaType(CharacterArrayJavaType.class)` attribute-by-attribute (see the sketch after this list).
- Specify `hibernate.type.wrapper_array_handling=allow`. If the schema is legacy, migrate the database schema to use a structured SQL type, e.g.:
  - Execute `alter table tbl rename column array_col to array_col_old` to have the old format available.
  - Execute `alter table tbl add column array_col DATATYPE array` to add the column as the new mapping expects it.
  - Run the query `select t.primary_key, t.array_col_old from tbl t` to extract the `byte[]` or `String`.
  - For every result, load the Hibernate entity by primary key and set the field value to the transformed `Byte[]` or `Character[]` result.
  - Finally, drop the old column: `alter table tbl drop column array_col_old`.
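Here is the attribute-by-attribute opt-in mentioned above as a minimal sketch (the entity and field names are made up):

```java
import org.hibernate.annotations.JavaType;
import org.hibernate.type.descriptor.java.ByteArrayJavaType;
import org.hibernate.type.descriptor.java.CharacterArrayJavaType;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
class LegacyArrays {
    @Id
    Long id;

    // Stored as VARBINARY, exactly like byte[]; null elements are disallowed.
    @JavaType(ByteArrayJavaType.class)
    Byte[] image;

    // Stored as VARCHAR, exactly like char[]; null elements are disallowed.
    @JavaType(CharacterArrayJavaType.class)
    Character[] text;
}
```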
Check constraints for boolean and enum mappings
Check constraints are now correctly generated for boolean and enum mappings.
UNIQUE constraint for optional one-to-one mappings
Previous versions of Hibernate did not create a UNIQUE constraint on the database for logical[1] one-to-one associations marked as optional. That is not correct from a modeling perspective, as the foreign key should be constrained as unique. Starting in 6.2, those UNIQUE constraints are now created.
Often the association can also be remapped using `@ManyToOne` + `@UniqueConstraint` instead (see the sketch below).
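A minimal sketch of such a remapping (the entities are hypothetical): the association becomes a `@ManyToOne` and the uniqueness of the foreign key is declared explicitly.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.Table;
import jakarta.persistence.UniqueConstraint;

@Entity
@Table(uniqueConstraints = @UniqueConstraint(columnNames = "passport_id"))
class Citizen {
    @Id
    Long id;

    // Logically one-to-one: the unique constraint on passport_id enforces it.
    @ManyToOne(optional = true)
    @JoinColumn(name = "passport_id")
    Passport passport;
}

@Entity
class Passport {
    @Id
    Long id;
}
```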
Column type inference for number(n,0) in native SQL queries on Oracle
Since Hibernate 6.0, columns of type `number` with scale 0 on Oracle were interpreted as `boolean`, `tinyint`, `smallint`, `int`, or `bigint`, depending on the precision.
Now, columns of type `number` with scale 0 are interpreted as `int` or `bigint` depending on the precision.
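For illustration, a minimal sketch (hypothetical table and column names; the exact precision cut-offs are dialect-defined) of the types now reported for untyped native query results:

```java
import org.hibernate.Session;

public class NativeNumberTypes {
    // assume quantity is declared as number(9,0) and total as number(19,0)
    public static void readTotals(Session session) {
        Object[] row = (Object[]) session.createNativeQuery(
                "select quantity, total from order_line where id = 1" )
                .getSingleResult();

        Integer quantity = (Integer) row[0]; // small-precision number(n,0) -> int
        Long total = (Long) row[1];          // large-precision number(n,0) -> bigint
        System.out.println( quantity + " / " + total );
    }
}
```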
Removal of support for legacy database versions
This version introduces the concept of a minimum supported database version for most of the database dialects that Hibernate supports.
This implies that the legacy code for versions that are no longer supported by their vendors has been removed from the hibernate-core module.
It is, however, still available in the hibernate-community-dialects module, just under a different package, namely `org.hibernate.community.dialect` instead of `org.hibernate.dialect`.
Note that this also includes version-specific dialects like `PostgreSQL81Dialect`, `MariaDB102Dialect` etc.
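For example, if your configuration referenced one of those legacy dialects by class name, the property value simply moves to the new package. A minimal sketch (the class name is taken from the example above; verify that the dialect you need is shipped in hibernate-community-dialects):

```java
import org.hibernate.cfg.Configuration;

public class CommunityDialectBootstrap {
    public static Configuration configure() {
        return new Configuration()
                // was org.hibernate.dialect.PostgreSQL81Dialect before 6.2
                .setProperty( "hibernate.dialect",
                        "org.hibernate.community.dialect.PostgreSQL81Dialect" );
    }
}
```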
The minimum supported dialect versions are as follows:
| Dialect | Minimum supported version |
|---|---|
| MySQL | 5.7 |
| SQL Server | 2008 (10.0) |
| DB2 | 10.5 |
| DB2i | 7.1 |
| DB2z | 12.1 |
| MariaDB | 10.3 |
| H2 | 1.4.197 |
| Derby | 10.14.2 |
| Sybase | 16.0 |
| CockroachDB | 21.1 |
| PostgreSQL | 10.0 |
| Oracle | 11.2 |
| HSQLDB | 2.6.1 |
Changes to CDI handling
When CDI is available and configured, Hibernate can use the CDI `BeanManager` to resolve various bean references. JPA explicitly defines support for this for both attribute converters and entity listeners.
Hibernate also has the ability to resolve some of its extension points using the CDI `BeanManager`. Version 6.2 adds a new boolean `hibernate.cdi.extensions` setting to control this:
- `true`: use the CDI `BeanManager` to resolve these extensions.
- `false` (the default): do not use the CDI `BeanManager` to resolve these extensions.
The previous behavior was to always load the extensions from CDI if it was available. However, this can sometimes lead to timing issues with the `BeanManager` not being ready for use when we need those extension beans. Starting with 6.2, these extensions will only be resolved from the CDI `BeanManager` if `hibernate.cdi.extensions` is set to `true`.
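A minimal sketch of opting back in, assuming an SE-style JPA bootstrap where the CDI `BeanManager` is handed over explicitly (the persistence unit name "my-pu" and the `beanManager` argument are assumptions of this example):

```java
import java.util.Map;

import jakarta.enterprise.inject.spi.BeanManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.Persistence;

public class CdiExtensionsBootstrap {
    public static EntityManagerFactory build(BeanManager beanManager) {
        return Persistence.createEntityManagerFactory(
                "my-pu",
                Map.of(
                        // hand Hibernate the BeanManager to use
                        "jakarta.persistence.bean.manager", beanManager,
                        // 6.2 default is false; restore the pre-6.2 behavior
                        "hibernate.cdi.extensions", "true"
                )
        );
    }
}
```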
Change enhancement defaults and deprecation
The `enableLazyInitialization` and `enableDirtyTracking` enhancement tooling options in the Ant task, Maven plugin and Gradle plugin, as well as the respective `hibernate.enhancer.enableLazyInitialization` and `hibernate.enhancer.enableDirtyTracking` configuration settings, switched their default values to `true`, and the settings are now deprecated for removal without replacement. See HHH-15641 for details.
The global property `hibernate.bytecode.use_reflection_optimizer` switched its default value to `true` and the setting is now deprecated for removal without replacement. See HHH-15631 for details.
API / SPI / Internal distinction
Dating back to Hibernate 5.x, we have been cleaning up packages to make the distinction between contracts which are considered an API, SPI and internal. We’ve done some more work on that in 6.2 as well.
org.hibernate.cfg package
The `org.hibernate.cfg` package has historically been especially egregious in mixing APIs and internals. The only true API contracts in this package are `org.hibernate.cfg.AvailableSettings` and `org.hibernate.cfg.Configuration`, which have been left in place.
Additionally, while it is considered an internal detail, `org.hibernate.cfg.Environment` has also been left in place, as many applications have historically used it rather than `org.hibernate.cfg.AvailableSettings`.
A number of contracts are considered deprecated and have been left in place. The rest have been moved under the `org.hibernate.boot` package, where they more properly belong.
org.hibernate.loader package
Most of the `org.hibernate.loader` package is really an SPI centered around `org.hibernate.loader.ast`, which supports loading entities and collections by various types of keys: primary key, unique key, foreign key and natural key. `org.hibernate.loader.ast` was already well-defined in terms of the SPI / internal split.
Changes in integration contracts (SPIs)
SPI is a category of interfaces that we strive to maintain with more stability than internal APIs, but which might change from minor to minor upgrades as the project needs a bit of flexibility.
These are not considered public API so should not affect end-user (application developer’s) code but such changes might break integration with other libraries which integrate with Hibernate ORM.
During the development of Hibernate ORM 6.2 the following SPIs have seen some modifications:
EntityPersister#lock
Changed from `EntityPersister#lock(Object, Object, Object, LockMode, SharedSessionContractImplementor)` to `EntityPersister#lock(Object, Object, Object, LockMode, EventSource)`.
This should be trivial to fix, as `EventSource` and `SharedSessionContractImplementor` are both contracts of `SessionImpl`; to help the transition we recommend using the methods `isEventSource` and `asEventSource`, available on the `SharedSessionContractImplementor` contract.
N.B. the method `asEventSource` will throw an exception for a non-compatible type; but because of previous restrictions, all invocations of `lock` actually had to be compatible: this is now made clearer with the signature change.
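A minimal sketch of adapting integrator code to the new `EventSource` parameter (the helper class and its arguments are assumptions of this example, not Hibernate API):

```java
import org.hibernate.LockMode;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.event.spi.EventSource;
import org.hibernate.persister.entity.EntityPersister;

final class LockAdapter {
    static void lock(EntityPersister persister, Object id, Object version,
                     Object entity, LockMode lockMode,
                     SharedSessionContractImplementor session) {
        // asEventSource() throws for incompatible session types, but every
        // session that could legally reach lock() is an EventSource.
        EventSource eventSource = session.asEventSource();
        persister.lock( id, version, entity, lockMode, eventSource );
    }
}
```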
EntityPersister#multiLoad
The same change was applied to `multiLoad(Object[] ids, SharedSessionContractImplementor session, MultiIdLoadOptions loadOptions)`, now migrated to `multiLoad(Object[] ids, EventSource session, MultiIdLoadOptions loadOptions)`.
The same conversion can be safely applied.
Executable#afterDeserialize
As in the previous two cases, the parameter now accepts `EventSource` instead of `SharedSessionContractImplementor`.
The same conversion can be safely applied.
JdbcType#getJdbcRecommendedJavaTypeMapping()
The return type of `JdbcType#getJdbcRecommendedJavaTypeMapping()` was changed from `BasicJavaType` to `JavaType`.
Even though this is a source-compatible change, it breaks binary backwards compatibility. We decided that it is fine to do this, though, as this is a new minor version.
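A minimal sketch of recompiling integrator code against the new return type; the precision/scale/`TypeConfiguration` arguments here simply stand in for whatever your code already passes:

```java
import org.hibernate.type.descriptor.java.JavaType;
import org.hibernate.type.descriptor.jdbc.JdbcType;
import org.hibernate.type.spi.TypeConfiguration;

final class JdbcTypeUsage {
    static JavaType<?> recommendedJavaType(JdbcType jdbcType, Integer precision,
                                           Integer scale, TypeConfiguration typeConfiguration) {
        // Before 6.2 the result could be assigned to BasicJavaType<?>; the
        // method now declares JavaType, so recompile against the wider type.
        return jdbcType.getJdbcRecommendedJavaTypeMapping( precision, scale, typeConfiguration );
    }
}
```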
Query Path comparison
As of 6.2, comparisons of paths are type checked early. This means that a comparison predicate in HQL or JPA Criteria might fail to construct if the types of the left and right hand side are not compatible.
In general, two types T1 and T2 are considered compatible if:
- T1 == T2
- T1 instanceof T2 or T2 instanceof T1
- T1 is temporal and T2 is temporal
- T1 or T2 is unknown
- T1 can be widened/coerced to T2, or the other way around
Widening/coercion usually refers to e.g. widening an integer to a long, but can also mean that a string constant can be interpreted as an enum when comparing against an enum attribute.
Note that a comparison of a temporal attribute against a string literal worked before:
`from MyEntity e where e.temporalAttribute > '2020-01-01'`
but has to be changed to the proper temporal literal now:
`from MyEntity e where e.temporalAttribute > date 2020-01-01`
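A minimal sketch of running the corrected HQL from Java (`MyEntity` and `temporalAttribute` come from the example above and are assumed to be a mapped entity and attribute):

```java
import java.util.List;

import org.hibernate.Session;

public class TemporalLiteralQuery {
    public static List<MyEntity> after2020(Session session) {
        // the date literal replaces the previously accepted string literal
        return session.createQuery(
                "from MyEntity e where e.temporalAttribute > date 2020-01-01",
                MyEntity.class )
                .getResultList();
    }
}
```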
Batch Fetching and LockMode
When the requested LockMode is greater than READ, Hibernate does not execute batch fetching, so existing uninitialized proxies will not be initialized. This is because the lock mode differs from the lock mode of the proxies already in the batch-fetch queue.
E.g. given
`MyEntity proxy = session.getReference( MyEntity.class, 1 );`
`MyEntity myEntity = session.find( MyEntity.class, 2, LockModeType.PESSIMISTIC_WRITE );`
only the entity with id equal to 2 will be loaded, but the proxy will not be initialized.
Integrating Static Metamodel Generation
The integration of static metamodel generation in a project has changed; the recommended way to do this now is by harnessing the annotation processor classpath. This is true for both Gradle and Maven.
Native query with joins
A native query that uses a result set mapping, explicitly or implicitly by specifying an entity class as the result type to `createNativeQuery`, requires unique select item aliases.
If the native query contains a join to a table with same-named columns, a query that e.g. does `select * from ..` will lead to an error.
If the desire is to select only the columns of the result type entity, prefix the * with a table alias, e.g. `select p.* from …`
E.g.
@Entity
class Person {
@Id
private Long id;
@OneToMany(mappedBy = "person")
private Set<Dog> dogs = new HashSet<>( 0 );
}
@Entity
class Dog {
@Id
private Long id;
}
Queries like
session.createNativeQuery(
"SELECT * FROM person p LEFT JOIN dog d on d.person_id = p.id", Person.class )
.getResultList();
have to be changed to
session.createNativeQuery(
"SELECT p.* FROM person p LEFT JOIN dog d on d.person_id = p.id", Person.class )
.getResultList();