JBoss.org Community Documentation

Chapter 12. JPA Connector

12.1. Simple Model

This connector stores a graph of any structure or size in a relational database, using a JPA provider on top of a JDBC driver. Currently this connector relies upon some Hibernate-specific capabilities. The schema of the database is dictated by this connector and is optimized for storing a graph structure. (In other words, this connector does not expose as a graph the data in an existing database with an arbitrary schema.)

The JpaSource class provides a number of JavaBean properties that control its behavior:

autoGenerateSchema

Sets the Hibernate setting that dictates what Hibernate does with the database schema upon first connection. Valid values are as follows (note that the value is not checked):

  • "create" - Create the database schema objects when the EntityManagerFactory is created (actually when Hibernate's SessionFactory is created by the entity manager factory). If a file named "import.sql" exists in the root of the class path (e.g., '/import.sql'), Hibernate will read and execute the SQL statements in this file after it has created the database objects. Note that Hibernate first deletes all tables, constraints, and any other database objects that are going to be created in the process of building the schema.

  • "create-drop" - Same as "create", except that the schema will be dropped after the EntityManagerFactory is closed.

  • "update" - Attempt to update the database structure to the current mapping (but does not read and invoke the SQL statements from "import.sql"). Use with caution.

  • "validate" - Validates the existing schema with the current entities configuration, but does not make any changes to the schema (and does not read and invoke the SQL statements from "import.sql"). This is the default value because it is the least intrusive and safest option, since it will verify the database's schema matches what the connector expects.

  • "disable" - Does nothing and assumes that the database is already properly configured. This should be the setting used in production, as it is a best-practice that DB administrators explicitly configure/upgrade production database schemas (using scripts).

cacheConcurrencyStrategy

Optional property that specifies the cache concurrency strategy to use. When Hibernate, ModeShape's default JPA provider, is used, this value should be one of "read-only", "read-write" (the default), "nonstrict-read-write", or "transactional".

cacheProviderClassName

Optional property that specifies the class name of the cache provider. The default value of an empty string (or null) indicates that no caching should occur. Valid values for this property are JPA implementation-dependent. When using Hibernate (ModeShape's default JPA provider), this value is used to set the "hibernate.cache.provider_class" property.

cacheTimeToLiveInMilliseconds

Optional property that defines the maximum time in milliseconds that any information returned by this connector is allowed to be cached before being considered invalid. The default value is "600000" milliseconds, or 10 minutes.

compressData

An advanced optional boolean property that dictates whether large binary and string values should be stored in a compressed form. This is enabled by default. Setting this value only affects how new records are stored; records can always be read regardless of the value of this setting. The default value is "true".

creatingWorkspaceAllowed

Optional property that defines whether clients can create additional workspaces. The default value is "true".

dialect

The dialect of the database, which must match one of the Hibernate dialect names, and must correspond to the type of driver being used. If not provided, the dialect will be auto-discovered by Hibernate. Because Hibernate does a good job of auto-determining the dialect, it is recommended that you set this only if auto-discovery fails for your database.

However, it is recommended that MySQL users always set this value, as Hibernate's auto-discovery of the dialect does not work for many MySQL installations. For example, the most common MySQL installation is MySQL 5.x with InnoDB, which requires a value of "org.hibernate.dialect.MySQLInnoDBDialect" for the "dialect" property.

dataSourceJndiName

The JNDI name of the JDBC DataSource instance that should be used. If not specified, the other driver properties must be set.

driverClassloaderName

Optional property that defines the name of the ModeShape class loader or classpath that should be used to load the JDBC driver class. This is not required if the DataSource is found in JNDI, or if the driver is on the application's classpath.

driverClassName

The name of the JDBC driver class. This is not required if the DataSource is found in JNDI, but is required otherwise.

garbageCollectionEnabled

Advanced optional property that specifies whether unused BINARY and large STRING values should be removed from the database. The default value is "true", meaning that garbage collection is enabled and results in a periodic running of a query; this setting is appropriate for most applications. However, some applications store files and rarely (if ever) update or change that content, and in these cases this property may be set to "false".

isolationLevel

Optional property that, if used, denotes which of the java.sql.Connection#TRANSACTION_* constants should be used to control the transaction isolation level. Valid values are: "TRANSACTION_READ_COMMITTED", "TRANSACTION_READ_UNCOMMITTED", "TRANSACTION_REPEATABLE_READ", "TRANSACTION_SERIALIZABLE", and "TRANSACTION_NONE". When this property is not used, the default isolation level is set to whichever isolation level was previously set on the connection. Note that not all JDBC drivers support all isolation levels.
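For reference, these property values correspond one-to-one to the constants defined on java.sql.Connection. A small helper that performs the mapping might look like the following (the helper class itself is illustrative, not part of the connector):

```java
import java.sql.Connection;

public class IsolationLevels {
    /** Maps the connector's isolationLevel property values to the JDBC constants. */
    public static int toJdbcLevel(String name) {
        switch (name) {
            case "TRANSACTION_READ_UNCOMMITTED": return Connection.TRANSACTION_READ_UNCOMMITTED;
            case "TRANSACTION_READ_COMMITTED":   return Connection.TRANSACTION_READ_COMMITTED;
            case "TRANSACTION_REPEATABLE_READ":  return Connection.TRANSACTION_REPEATABLE_READ;
            case "TRANSACTION_SERIALIZABLE":     return Connection.TRANSACTION_SERIALIZABLE;
            case "TRANSACTION_NONE":             return Connection.TRANSACTION_NONE;
            default: throw new IllegalArgumentException("Unknown isolation level: " + name);
        }
    }
}
```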

largeValueSizeInBytes

An advanced optional property that controls the size of property values at which they are considered to be "large values". Depending upon the model, large property values may be stored in a centralized area and keyed by a secure hash of the value. This is a space and performance optimization that stores each unique large value only once. The default value is "1024" bytes, or 1 kilobyte.
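The "store each unique large value only once, keyed by a secure hash" idea can be sketched in plain Java as follows. The class and method names here are hypothetical, chosen only to illustrate the deduplication concept; the real connector persists these rows in its own database tables:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

public class LargeValueStore {
    private final Map<String, byte[]> largeValues = new HashMap<>(); // hash key -> value
    private final int threshold; // analogous to largeValueSizeInBytes

    public LargeValueStore(int threshold) {
        this.threshold = threshold;
    }

    /** Returns the value itself if small, or the hash key it was filed under if large. */
    public String store(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (bytes.length < threshold) {
            return value; // small values stay inline with the node's other properties
        }
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            String key = new BigInteger(1, sha1.digest(bytes)).toString(16);
            largeValues.putIfAbsent(key, bytes); // identical values share one record
            return key;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public int uniqueLargeValues() {
        return largeValues.size();
    }
}
```

Storing the same 2KB value twice files it under the same hash key, so only one copy occupies space in the store.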

maximumConnectionsInPool

Optional property that specifies the maximum number of connections that may be in the connection pool. The default is "5".

maximumConnectionIdleTimeInSeconds

Optional property that specifies the maximum number of seconds that a connection should remain in the pool before being closed. The default is "600" seconds (or 10 minutes).

maximumSizeOfStatementCache

Optional property that specifies the maximum number of statements that should be cached. Statement caching can be disabled by setting to "0". The default is "100".

minimumConnectionsInPool

Optional property that specifies the minimum number of connections that will be kept in the connection pool. The default is "0".

model

An advanced property that dictates the type of storage schema that is used. Currently, the only supported value is "Simple", which is also the default value.

name

Required property that specifies the name of the repository source, which is used by the RepositoryService when obtaining a RepositoryConnection by name.

nameOfDefaultWorkspace

Optional property that is initialized to an empty string and which defines the name for the workspace that will be used by default if none is specified.

numberOfConnectionsToAcquireAsNeeded

Optional property that defines the number of connections that should be added to the pool when there are not enough to be used. The default is "1".

password

The password that should be used when creating JDBC connections using the JDBC driver class. This is not required or used if the DataSource is found in JNDI.

predefinedWorkspaceNames

Optional property that, if used, defines names of the workspaces that are predefined and need not be created before being used. This can be coupled with a "false" value for the "creatingWorkspaceAllowed" property to allow the use of only predefined workspaces.
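For example, in the XML configuration style shown later in this chapter, a source that permits only two predefined workspaces might be declared as follows (the workspace names are illustrative, and this assumes the multi-valued property is written as a comma-separated list):

```xml
<mode:source jcr:name="JPA Store"
            mode:classname="org.modeshape.connector.store.jpa.JpaSource"
            mode:dataSourceJndiName="java:/MyDataSource"
            mode:predefinedWorkspaceNames="workspace1,workspace2"
            mode:creatingWorkspaceAllowed="false"/>
```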

retryLimit

Optional property that, if used, defines the number of times that any single operation on a RepositoryConnection to this source should be retried following a communication failure. The default value is '0'.

rootNodeUuid

Optional property that, if used, specifies the UUID that should be used for the root node of each workspace. If no value is specified, a default UUID is used.

schemaName

Optional property that, if set, specifies the name of the schema in which this repository source will read and write data. If no schema name is specified, then data will be read from the default schema associated with the database connection.

updatesAllowed

Determines whether the content in the database can be updated ("true"), or if the content may only be read ("false"). The default value is "true".

url

The URL that should be used when creating JDBC connections using the JDBC driver class. This is not required or used if the DataSource is found in JNDI.

username

The username that should be used when creating JDBC connections using the JDBC driver class. This is not required or used if the DataSource is found in JNDI.

One way to configure the JPA connector is to create a JcrConfiguration instance with a repository source that uses the JpaSource class. For example:



JcrConfiguration config = ...
config.repositorySource("JPA Store")
      .usingClass(JpaSource.class)
      .setDescription("The database store for our content")
      .setProperty("dataSourceJndiName", "java:/MyDataSource")
      .setProperty("defaultWorkspaceName", "My Default Workspace")
      .setProperty("autoGenerateSchema", "validate");
 

Of course, setting other more advanced properties would entail calling setProperty(...) for each. Since almost all of the properties have acceptable default values, however, we don't need to set very many of them.

Another way to configure the JPA connector is to create a JcrConfiguration instance and load an XML configuration file that contains a repository source that uses the JpaSource class. For example, a file named configRepository.xml can be created with these contents:



<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns:mode="http://www.modeshape.org/1.0" xmlns:jcr="http://www.jcp.org/jcr/1.0">
    <!-- 
    Define the sources for the content.  These sources are directly accessible using the 
    ModeShape-specific Graph API.  In fact, this is how the ModeShape JCR implementation works.  You 
    can think of these as being similar to JDBC DataSource objects, except that they expose 
    graph content via the Graph API instead of records via SQL or JDBC. 
    -->
    <mode:sources jcr:primaryType="nt:unstructured">
        <!-- 
        The 'JPA Store' repository is a JPA source with a single default workspace (though 
        others could be created, too).
        -->
        <mode:source jcr:name="JPA Store" 
                    mode:classname="org.modeshape.connector.store.jpa.JpaSource"
                    mode:description="The database store for our content"
                    mode:dataSourceJndiName="java:/MyDataSource"
                    mode:defaultWorkspaceName="default"
                    mode:autoGenerateSchema="validate"/>    
    </mode:sources>
    
    <!-- MIME type detectors and JCR repositories would be defined below --> 
</configuration>
 

The configuration can then be loaded from Java like this:



JcrConfiguration config = new JcrConfiguration().loadFrom("/configRepository.xml");
 

ModeShape users who prefer not to give DDL privileges to the ModeShape database user for this connector can use the ModeShape JPA DDL generation tool to create the proper DDL files for their database dialect. This tool is packaged as a zip in utils/modeshape-jpa-ddl-gen/target/distribution when the Maven assembly profile -Passembly is run. Unzip the contents and run the ddl-gen script with the following syntax:

ddl-gen.sh(.bat) -dialect <dialect name> -model <model_name> [-out <path to output directory>]

The dialect and model parameters should match the value of the dialect and model properties specified for the JPA connector.

Running this executable will create two files in the output directory (or the current directory if no output directory was specified): create.modeshape-jpa-connector.ddl and drop.modeshape-jpa-connector.ddl. The former contains the DDL to create or replace the tables, foreign keys, indices, and sequences needed by the JPA connector and the latter contains the DDL to drop any tables, foreign keys, indices, and sequences needed by the JPA connector.

Note

It is strongly recommended that production users of ModeShape utilize this tool to generate the DDL for production and test databases. After this tool is used, the autoGenerateSchema property on the JpaSource should be set to "disable". This will prevent the schema from being dropped and recreated (or needlessly re-validated) each time that a ModeShape instance starts.

This database schema model stores node properties as opaque records in the same row as transparent values like the node's namespace, local name, and same-name-sibling index. Large property values are stored separately.

The set of tables used in this model includes:

This database model contains two tables that are used in an efficient mechanism to find all of the nodes in the subgraph below a certain node. This process starts by creating a record for the subgraph query, and then proceeds by executing a join to find all the children of the top-level node, and inserting them into the database (in a working area associated with the subgraph query). Then, another join finds all the children of those children and inserts them into the same working area. This continues until the maximum depth has been reached, or until there are no more children (whichever comes first). All of the nodes in the subgraph are then represented by records in the working area, and can be used to quickly and efficiently work with the subgraph nodes. When finished, the mechanism deletes the records in the working area associated with the subgraph query.

This subgraph query mechanism is extremely efficient, performing one join/insert statement per level of the subgraph, and is completely independent of the number of nodes in the subgraph. For example, consider a subgraph of node A, where A has 10 children, and each child contains 10 children, and each grandchild contains 10 children. This subgraph has a total of 1111 nodes (1 root + 10 children + 10*10 grandchildren + 10*10*10 great-grandchildren). Finding the nodes in this subgraph would normally require 1 query per node (in other words, 1111 queries). But with this subgraph query mechanism, all of the nodes in the subgraph can be found with 1 insert plus 4 additional join/inserts.
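The level-by-level mechanism can be sketched in plain Java, using an in-memory parent-to-children map in place of the database working area (the class and method names are illustrative, not part of the connector's API):

```java
import java.util.*;

public class SubgraphQuerySketch {

    /**
     * Simulates one "join/insert" pass per level: each pass finds the children of
     * the nodes added in the previous pass and adds them to the working area.
     */
    public static int nodesInSubgraph(Map<String, List<String>> children, String root, int maxDepth) {
        Set<String> workingArea = new LinkedHashSet<>();
        workingArea.add(root); // the initial insert for the subgraph query
        List<String> frontier = Collections.singletonList(root);
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            List<String> next = new ArrayList<>();
            for (String node : frontier) {
                next.addAll(children.getOrDefault(node, Collections.emptyList()));
            }
            workingArea.addAll(next); // one join/insert per level, regardless of node count
            frontier = next;
        }
        return workingArea.size();
    }

    /** Builds the example tree: every node down to the given depth has 'branching' children. */
    public static Map<String, List<String>> exampleTree(String root, int branching, int depth) {
        Map<String, List<String>> tree = new HashMap<>();
        addChildren(tree, root, branching, depth);
        return tree;
    }

    private static void addChildren(Map<String, List<String>> tree, String node, int branching, int depth) {
        if (depth == 0) return;
        List<String> kids = new ArrayList<>();
        for (int i = 0; i < branching; i++) kids.add(node + "/" + i);
        tree.put(node, kids);
        for (String kid : kids) addChildren(tree, kid, branching, depth - 1);
    }
}
```

For the example above, nodesInSubgraph(exampleTree("A", 10, 3), "A", 3) reaches all 1111 nodes with one pass per level, rather than issuing one query per node.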

This mechanism has the added benefit that the set of nodes in the subgraph are kept in a working area in the database, meaning they don't have to be pulled into memory.

In the Simple model, subgraph queries are used to efficiently process a number of different requests, including ReadBranchRequest and DeleteBranchRequest. Processing each of these kinds of requests requires knowledge of the subgraph, and in fact all but the ReadBranchRequest need to know the complete subgraph.

Warning

Most DBMS systems have built-in sizes for LOB columns (although many allow DB admins to control the size), and thus do not require any special consideration. However, Apache Derby and IBM DB2 require explicit sizes on LOB columns. Currently, the ModeShape database schema has two such columns: the MODE_SIMPLE_NODE.DATA and MODE_LARGE_VALUES.DATA columns. The sizes of these columns are sufficiently large (1MB and 1GB, respectively), but attempts to store larger values than these sizes will fail.

Therefore, when using IBM DB2 and Apache Derby, determine the appropriate size of these columns for your environment. For production systems, ModeShape recommends using the DDL generation utility (provided with ModeShape, see above) to generate the DDL for your particular DBMS, and it is easy to adjust that file to specify alternative sizes for the two columns. Alternatively, database administrators can alter the two tables by increasing the size of these columns.

Other databases do not seem to be affected by this issue.