JBoss.org Community Documentation

Reference Guide / eXo JCR

Java Content Repository and Extension services


1. eXoJCR
1.1. Introduction to eXoJCR
1.1.1. Data model
1.2. Why use JCR?
1.2.1. What is JCR?
1.2.2. Why use JCR?
1.2.3. What does eXo do?
1.2.4. Further Reading
1.3. eXo JCR Implementation
1.3.1. Related Documents
1.3.2. How it works
1.3.3. Workspace Data Model
1.4. Advantages of eXo JCR
1.4.1. Advantages for application developers
1.4.2. Advantages for managers
1.5. Compatibility Levels
1.5.1. Level 1
1.5.2. Level 2
1.5.3. Optional features
1.6. Using JCR
1.6.1. Obtaining a Repository object
1.6.2. JCR Session common considerations
1.6.3. JCR Application practices
1.7. JCR Service Extensions
1.7.1. Concept
1.7.2. Implementation
1.7.3. Configuration
1.7.4. Related Pages
1.8. eXo JCR Application Model
1.9. NodeType Registration
1.9.1. Interfaces and methods
1.9.2. Node type registration
1.9.3. Changing existing node type
1.9.4. Removing node type
1.9.5. Practical How to
1.10. Registry Service
1.10.1. Concept
1.10.2. The API
1.10.3. Configuration
1.11. Namespace altering
1.11.1. Adding new namespace
1.11.2. Changing existing namespace
1.11.3. Removing existing namespace
1.12. Node Types and Namespaces
1.12.1. Node Types definition
1.12.2. Namespaces definition
1.13. eXo JCR configuration
1.13.1. Related documents
1.13.2. Portal and Standalone configuration
1.13.3. JCR Configuration
1.13.4. Repository service configuration (JCR repositories configuration)
1.13.5. Repository configuration
1.13.6. Workspace configuration
1.13.7. Value Storage plugin configuration (for data container):
1.13.8. Initializer configuration (optional)
1.13.9. Cache configuration
1.13.10. Query Handler configuration
1.13.11. Lock Manager configuration
1.13.12. Help application to prohibit the use of closed sessions
1.13.13. Help application to allow the use of closed datasources
1.13.14. Getting the effective configuration at Runtime of all the repositories
1.13.15. Configuration of workspaces using system properties
1.14. Multi-language support in eXo JCR RDB backend
1.14.1. Oracle
1.14.2. DB2
1.14.3. MySQL
1.14.4. PostgreSQL/PostgrePlus
1.15. How to host several JCR instances on the same database instance?
1.15.1. LockManager configuration
1.15.2. HibernateService configuration
1.16. Search Configuration
1.16.1. XML Configuration
1.16.2. Configuration parameters
1.16.3. Global Search Index
1.16.4. Indexing Adjustments
1.17. JCR Configuration persister
1.17.1. Idea
1.17.2. Usage
1.18. JDBC Data Container Config
1.18.1. General recommendations for database configuration
1.18.2. Isolated-database Configuration
1.18.3. Multi-database Configuration
1.18.4. Single-database configuration
1.18.5. Simple and Complex queries
1.18.6. Force Query Hints
1.18.7. Notes for Microsoft Windows users
1.19. External Value Storages
1.19.1. Tree File Value Storage
1.19.2. Simple File Value Storage
1.19.3. Content Addressable Value storage (CAS) support
1.19.4. Disabling value storage
1.20. Workspace Data Container
1.20.1. Database's dialects
1.21. REST Services on Groovy
1.21.1. Usage
1.22. Configuring JBoss AS with eXo JCR in cluster
1.22.1. Launching Cluster
1.22.2. Requirements
1.23. Infinispan configuration
1.23.1. Infinispan configuration for indexer, lock manager and data container
1.23.2. JGroups configuration
1.23.3. Shipped Infinispan configuration templates
1.24. LockManager configuration
1.24.1. CacheableLockManagerImpl
1.25. QueryHandler configuration
1.25.1. Indexing in clustered environment
1.25.2. Configuration
1.25.3. Asynchronous reindexing
1.25.4. Advanced tuning
1.26. JBossTransactionsService
1.26.1. Configuration
1.27. TransactionManagerLookup
1.28. RepositoryCreationService
1.28.1. Dependencies
1.28.2. How it works
1.28.3. Configuration
1.28.4. RepositoryCreationService Interface
1.28.5. Conclusions and restrictions
1.29. JCR Query Usecases
1.29.1. Query Lifecycle
1.29.2. Query result settings
1.29.3. Type Constraints
1.29.4. Property Constraints
1.29.5. Path Constraint
1.29.6. Ordering specifying
1.29.7. Section 1.31, “Fulltext Search And Affecting Settings”
1.29.8. Indexing rules and additional features
1.29.9. Query Examples
1.29.10. Tips and tricks
1.30. Searching Repository Content
1.30.1. Bi-directional RangeIterator (since 1.9)
1.30.2. Fuzzy Searches (since 1.0)
1.30.3. SynonymSearch (since 1.9)
1.30.4. High-lighting (Since 1.9)
1.30.5. SpellChecker
1.30.6. Similarity (Since 1.12)
1.31. Fulltext Search And Affecting Settings
1.31.1. Property content indexing
1.31.2. Lucene Analyzers
1.31.3. How are different properties indexed?
1.31.4. Fulltext search query examples
1.31.5. Different analyzers in action
1.32. JCR API Extensions
1.32.1. API and usage
1.32.2. Configuration
1.32.3. Implementation notices
1.33. WebDAV
1.33.1. Configuration
1.33.2. Screenshots
1.33.3. Comparison table of WebDav and JCR commands
1.33.4. Restrictions
1.33.5. Same name sibling
1.34. FTP
1.34.1. Configuration Parameters
1.35. eXo JCR Backup Service
1.35.1. Concept
1.35.2. How it works
1.35.3. Configuration
1.35.4. RDBMS backup
1.35.5. Usage
1.35.6. Restore existing workspace or repository
1.35.7. Restore a workspace or a repository using original configuration
1.35.8. Backup set portability
1.35.9. DB type migration
1.36. HTTPBackupAgent and backup client
1.36.1. HTTPBackupAgent
1.36.2. Backup Client
1.36.3. Backup Client Usage
1.36.4. Full example about creating backup and restoring it for workspace 'backup'
1.36.5. Full example about creating backup and restoring it for repository 'repository'
1.37. How to backup the data of your JCR using an external backup tool in 3 steps?
1.37.1. Step 1: Suspend the Repository
1.37.2. Step 2: Backup the data
1.37.3. Step 3: Resume the Repository
1.38. eXo JCR statistics
1.38.1. Statistics on the Database Access Layer
1.38.2. Statistics on the JCR API accesses
1.38.3. Statistics Manager
1.39. Checking and repairing repository integrity and consistency
1.39.1. Recommendations on how to fix corrupted JCR manually
1.40. Quota Manager
1.40.1. Quota Manager configuration
1.40.2. Quota manager interface overview
1.41. JTA
1.42. The JCA Resource Adapter
1.42.1. The SessionFactory
1.42.2. Configuration
1.42.3. Deployment
1.43. Access Control
1.43.1. Standard Action Permissions
1.43.2. eXo Access Control
1.44. Access Control Extension
1.44.1. Prerequisites
1.44.2. Access Context Action
1.44.3. The Invocation Context
1.44.4. Custom Extended Access Manager
1.44.5. Example of a custom Access Manager
1.45. Link Producer Service
1.46. Binary Values Processing
1.46.1. Configuration
1.46.2. Usage
1.46.3. Value implementations
1.47. JCR Resources:
1.48. JCR Workspace Data Container (architecture contract)
1.48.1. Concepts
1.48.2. Requirements
1.48.3. Value storages API
1.49. How to implement Workspace Data Container
1.49.1. Notes on Value storage usage:
1.50. DBCleanService
1.50.1. Methods of DBCleanService
1.50.2. Need to clean only single workspace
1.50.3. Need to clean the whole repository
1.51. JCR Performance Tuning Guide
1.51.1. JCR Performance and Scalability
1.51.2. Performance Tuning Guide
2. eXoKernel
2.1. ExoContainer info
2.1.1. Container hierarchy
2.2. Service Configuration for Beginners
2.2.1. Requirements
2.2.2. Services
2.2.3. Configuration File
2.2.4. Execution Modes
2.2.5. Containers
2.2.6. Configuration Retrieval
2.2.7. Service instantiation
2.2.8. Miscellaneous
2.2.9. Further Reading
2.3. Service Configuration in Detail
2.3.1. Requirements
2.3.2. Sample Service
2.3.3. Parameters
2.3.4. External Plugin
2.3.5. Import
2.3.6. System properties
2.3.7. Understanding the prefixes supported by the configuration manager
2.4. Container Configuration
2.4.1. Kernel configuration namespace
2.4.2. Understanding how configuration files are loaded
2.4.3. eXo Container hot reloading
2.4.4. System property configuration
2.4.5. Variable Syntaxes
2.4.6. Runtime configuration profiles
2.4.7. Component request life cycle
2.4.8. Thread Context Holder
2.5. Inversion Of Control
2.5.1. How
2.5.2. Injection
2.5.3. Side effects
2.6. Services Wiring
2.6.1. Portal Instance
2.6.2. Introduction to the XML schema of the configuration.xml file
2.6.3. Configuration retrieval and log of this retrieval
2.7. Component Plugin Priority
2.8. Understanding the ListenerService
2.8.1. What is the ListenerService ?
2.8.2. How does it work?
2.8.3. How to configure a listener?
2.8.4. Concrete Example
2.9. Initial Context Binder
2.9.1. API
2.10. Job Scheduler Service
2.10.1. Where is Job Scheduler Service used in eXo Products?
2.10.2. How does Job Scheduler work?
2.10.3. Reference
2.11. eXo Cache
2.11.1. Basic concepts
2.11.2. Advanced concepts
2.11.3. eXo Cache extension
2.11.4. eXo Cache based on Infinispan
2.11.5. eXo Cache based on Spymemcached
2.12. TransactionService
2.12.1. Existing TransactionService implementations
2.13. The data source provider
2.13.1. Configuration
2.14. JNDI naming
2.14.1. Prerequisites
2.14.2. How it works
2.14.3. Configuration examples
2.14.4. Recommendations for Application Developers
2.15. Logs configuration
2.15.1. Logs configuration initializer
2.15.2. Configuration examples
2.15.3. Tips and Troubleshooting
2.16. Manageability
2.16.1. Managed framework API
2.16.2. JMX Management View
2.16.3. Example
2.17. RPC Service
2.17.1. Configuration
2.17.2. The SingleMethodCallCommand
2.18. Extensibility
2.19. Dependency Injection (JSR 330)
2.19.1. Specificities and Limitations
2.19.2. Configuration
2.19.3. Scope Management
2.20. Container Integration
2.20.1. Google Guice
2.20.2. Spring
2.20.3. Weld
2.21. Auto Registration
2.22. Multi-threaded Kernel
2.23. HikariCP connection pool
3. eXoCore
3.1. Database Creator
3.1.1. API
3.1.2. Configuration examples
3.1.3. Examples of DDL script
3.2. Security Service
3.2.1. Framework
3.2.2. Usage
3.3. Organization Service
3.3.1. Organizational Model
3.3.2. Custom Organization Service implementation instructions
3.4. Organization Service Initializer
3.5. Organization Listener
3.5.1. Writing your own listeners
3.5.2. Registering your listeners
3.6. Update ConversationState when user's Membership changed
3.7. DB Schema creator service
3.8. Database Configuration for Hibernate
3.8.1. Generic configuration
3.8.2. Example DB configuration
3.8.3. Caching configuration
3.8.4. Registering custom annotated classes and Hibernate XML files into the service
3.8.5. Disable/Enable an User
3.9. LDAP Configuration
3.9.1. Quickstart
3.9.2. Configuration
3.9.3. Advanced topics
3.10. JCR organization service Configuration
3.10.1. Quickstart
3.10.2. Configuration
3.10.3. Migration
3.10.4. Disable/Enable an user
3.11. Organization Service TCK tests configuration
3.11.1. Maven pom.xml file configuration
3.11.2. Standalone container and Organization Service configuration
3.11.3. Optional Tests
3.12. Tika Document Reader Service
3.12.1. Architecture
3.12.2. Configuration
3.12.3. Old-style DocumentReaders and Tika Parsers
3.12.4. TikaDocumentReader features and notes
3.13. Digest Authentication
3.13.1. Server configuration
3.13.2. OrganizationService implementation requirements
4. eXoWS
4.1. Introduction to the Representational State Transfer (REST)
4.2. Overwrite default providers
4.2.1. Motivation
4.2.2. Usage
4.2.3. Example
4.3. RestServicesList Service
4.3.1. Usage
4.4. Groovy Scripts as REST Services
4.4.1. Loading script and save it in JCR
4.4.2. Instantiation
4.4.3. Deploying newly created Class as RESTful service
4.4.4. Script Lifecycle Management
4.4.5. Getting node UUID example
4.4.6. Groovy script restrictions
4.5. Framework for cross-domain AJAX
4.5.1. Motivation
4.5.2. Scheme (how it works)
4.5.3. A Working Sequence:
4.5.4. How to use it
4.6. JSONP as alternative to the Framework for cross-domain AJAX
5. Frequently Asked Questions
5.1. JCR FAQ
5.1.1. Kernel
5.1.2. JCR
6. eXo JCR with GateIn
6.1. How to extend my GateIn instance?
6.1.1. Motivations
6.1.2. Prerequisites
6.1.3. FAQ
6.1.4. Recommendations
6.2. How to use AS Managed DataSource under JBoss AS
6.2.1. Declaring the datasources in the AS
6.2.2. Do not let eXo bind datasources explicitly

eXo provides a JCR implementation called eXo JCR.

This part will show you how to configure and use eXo JCR in GateIn and in standalone mode.

Do you know how the data of your website is stored? The images are probably in a file system, the metadata is in some dedicated files (maybe in XML), and the text documents and PDFs are stored in different folders, with their metadata kept in yet another place (a database?) and in a proprietary structure. How do you manage updates to this data, and how do you manage the access rights? What if your boss asks you to manage different versions of each document? The larger your website is, the more you need a Content Management System (CMS) that tackles all these issues.

These CMS solutions are sold by different vendors, and each vendor provides its own API for interfacing with its proprietary content repository. Developers have to deal with this and must learn the vendor-specific API. If in the future you wish to switch to a different vendor, everything will be different: a new implementation, a new interface, and so on.

JCR provides a single Java interface for interacting with both text and binary data, and for dealing with any kind and amount of metadata your documents might have. JCR supplies methods for storing, updating, deleting and retrieving your data, regardless of whether the data is stored in an RDBMS, in a file system or as an XML document - you simply don't need to care. The JCR API also defines classes and methods for searching, versioning, access control, locking and observation.

Furthermore, export and import functionality is specified, so that a switch to a different vendor is always possible.

eXo fully complies with the JCR standard JSR 170; therefore, with eXo JCR you can use a vendor-independent API, which means you can switch to a different vendor at any time. Using the standard lowers your lifecycle costs and reduces your long-term risk.

Of course, eXo does not only offer JCR; it also offers a complete solution for ECM (Enterprise Content Management) and WCM (Web Content Management).

The eXo Repository Service is a standard eXo service and a registered IoC component, i.e. it can be deployed in an eXo Container (see Service configuration for details). The relationships between components are shown in the picture below:

  • eXo Container: a subclass of org.exoplatform.container.ExoContainer (usually org.exoplatform.container.StandaloneContainer or org.exoplatform.container.PortalContainer) that holds a reference to the Repository Service.

  • Repository Service: contains information about repositories. eXo JCR is able to manage many Repositories.

  • Repository: Implementation of javax.jcr.Repository. It holds references to one or more Workspace(s).

  • Workspace: Container of a single rooted tree of Items. (Note that here it is not exactly the same as javax.jcr.Workspace as it is not a per Session object).
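
The containment just described (the Repository Service managing repositories, each repository holding workspaces) is mirrored directly in the repository configuration file. A minimal illustrative fragment might look as follows; the repository and workspace names here are assumptions, and the real parameters are covered in the eXo JCR configuration section:

```xml
<repository-service default-repository="repository">
  <repositories>
    <repository name="repository" system-workspace="production" default-workspace="production">
      <workspaces>
        <workspace name="production">
          <!-- data container, cache, query handler and lock manager settings go here -->
        </workspace>
        <workspace name="backup">
          <!-- a second workspace: another isolated item tree in the same repository -->
        </workspace>
      </workspaces>
    </repository>
  </repositories>
</repository-service>
```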

A usual JCR application use case includes two initial steps:

  • Obtaining a Repository object, either by getting the Repository Service from the current eXo Container (the eXo "native" way) or via a JNDI lookup if the eXo repository is bound to the naming context (see Service configuration for details).

  • Creating a javax.jcr.Session object by calling Repository.login(..).

The following diagram explains which components of the eXo JCR implementation are used in a data flow to perform the operations specified in the JCR API.

The Workspace Data Model can be split into 4 levels by data isolation and value from the JCR model point of view.

The Java Content Repository specification JSR-170 has been split into two compliance levels as well as a set of optional features.

Level 1 defines a read-only repository.

Level 2 defines methods for writing content and bidirectional interaction with the repository.

eXo JCR supports JSR-170 level 1 and level 2 and all optional features. The recent JSR-283 is not yet supported.

Level 1 includes read-only functionality for very simple repositories. It is useful for porting an existing data repository and converting it step by step into a more advanced form. JCR uses the well-known Session abstraction to access repository data (similar to the sessions we have in operating systems, the web, etc).

The features of level 1:

One-shot logout for all opened sessions: use org.exoplatform.services.jcr.ext.common.SessionProvider, which is responsible for caching/obtaining your JCR Sessions and for closing all opened sessions at once.

public class SessionProvider implements SessionLifecycleListener {

  /**
   * Creates a SessionProvider for a certain identity.
   * @param cred the user's credentials
   */
  public SessionProvider(Credentials cred)

  /**
   * Gets the session from the internal cache or creates and caches a new one.
   */
  public Session getSession(String workspaceName, ManageableRepository repository)
    throws LoginException, NoSuchWorkspaceException, RepositoryException

  /**
   * Calls logout() on all cached sessions.
   */
  public void close()

  /**
   * Helper for creating a system session provider.
   * @return system session provider
   */
  public static SessionProvider createSystemProvider()

  /**
   * Helper for creating an anonymous session provider.
   * @return anonymous session provider
   */
  public static SessionProvider createAnonimProvider()

  /**
   * Helper for creating a session provider from a list of AccessControlEntry.
   * @return session provider
   */
  SessionProvider createProvider(List<AccessControlEntry> accessList)

  /**
   * Removes the closed session from the cache.
   */
  void onCloseSession(ExtendedSession session)

  /**
   * Gets the current repository used.
   */
  ManageableRepository getCurrentRepository()

  /**
   * Gets the current workspace used.
   */
  String getCurrentWorkspace()

  /**
   * Sets the current repository to use.
   */
  void setCurrentRepository(ManageableRepository currentRepository)

  /**
   * Sets the current workspace to use.
   */
  void setCurrentWorkspace(String currentWorkspace)

}

The SessionProvider is a per-request or per-user object, depending on your policy. Create it in your application before performing JCR operations, use it to obtain the Sessions, and close it at the end of the application session (request). See the following example:

// (1) obtain the current user's javax.jcr.Credentials, for example from the AuthenticationService
Credentials cred = ....

// (2) create a SessionProvider for the current user
SessionProvider sessionProvider = new SessionProvider(cred);

// NOTE: to create an Anonymous or System session provider, use the corresponding static SessionProvider.create...() method instead

// (3) obtain an appropriate Repository, as described in the "Obtaining a Repository object" section, for example:
ManageableRepository repository = (ManageableRepository) ctx.lookup("repositoryName");

// (4) get a session on the appropriate workspace
Session session = sessionProvider.getSession("workspaceName", repository);

.........
// your JCR code
.........

// (5) close the session provider, logging out all sessions it created
sessionProvider.close();

As shown above, creating the SessionProvider involves multiple steps, and you may not want to repeat them each time you need a JCR session. To avoid all this plumbing code, we provide the SessionProviderService, whose goal is to help you get a SessionProvider object.

The org.exoplatform.services.jcr.ext.app.SessionProviderService interface is defined as follows:

public interface SessionProviderService {
  void setSessionProvider(Object key, SessionProvider sessionProvider);
  SessionProvider getSessionProvider(Object key);
  void removeSessionProvider(Object key);
}

Using this service is pretty straightforward: the main contract of an implementing component is getting a SessionProvider by key. eXo provides two implementations:


For any implementation, your code should follow this sequence:

  • Call SessionProviderService.setSessionProvider(Object key, SessionProvider sessionProvider) at the beginning of a business request (for the stateless policy) or of the application session (for the stateful policy).

  • Call SessionProviderService.getSessionProvider(Object key) to obtain a SessionProvider object.

  • Call SessionProviderService.removeSessionProvider(Object key) at the end of a business request (for the stateless policy) or of the application session (for the stateful policy).
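
The three-step contract above can be sketched as a tiny, self-contained implementation. Note that FakeSessionProvider is a hypothetical stand-in for org.exoplatform.services.jcr.ext.common.SessionProvider (which is not available outside the eXo platform), and the map-backed service below only illustrates the key-based contract; it is not the eXo implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for org.exoplatform...SessionProvider:
// close() represents logging out all cached JCR sessions at once.
class FakeSessionProvider {
  boolean closed;
  void close() { closed = true; }
}

// Minimal sketch of the SessionProviderService contract, keyed per
// request or per user.
class MapSessionProviderService {
  private final Map<Object, FakeSessionProvider> providers = new ConcurrentHashMap<>();

  // step 1: register a provider at the beginning of the business request
  public void setSessionProvider(Object key, FakeSessionProvider provider) {
    providers.put(key, provider);
  }

  // step 2: obtain the provider wherever a JCR session is needed
  public FakeSessionProvider getSessionProvider(Object key) {
    return providers.get(key);
  }

  // step 3: remove it at the end of the request, closing all its sessions
  public void removeSessionProvider(Object key) {
    FakeSessionProvider provider = providers.remove(key);
    if (provider != null) {
      provider.close();
    }
  }
}
```

A real key would typically identify the current request or user, so that removeSessionProvider at the end of the request releases every session that request opened.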

eXo JCR supports observation (JSR-170 8.3), which enables applications to register interest in events that describe changes to a workspace, and then monitor and respond to those events. The standard observation feature allows dispatching events when a persistent change to the workspace is made.

eXo JCR also offers a proprietary Extension Action, which dispatches and fires an event upon each transient, session-level change performed by a client. In other words, the event is triggered when a client's program invokes an updating method on a session or a workspace (such as Session.addNode(), Session.setProperty(), Workspace.move(), etc.).

By default, when an action fails, the related exception is simply logged. If you would like to change this default exception handling, you can implement the AdvancedAction interface. When the JCR detects that your action is of type AdvancedAction, it calls the onError method instead of simply logging the exception. A default implementation of onError is available in the abstract class AbstractAdvancedAction. It reverts all pending changes of the current JCR session for any kind of event corresponding to a write operation. Then, if the provided exception is an instance of AdvancedActionException, it rethrows it; otherwise it simply logs it. An AdvancedActionException is thrown in case the changes could not be reverted.
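
The dispatch behaviour described above can be illustrated with simplified stand-in interfaces. The real types are org.exoplatform.services.command.action.Action and the AdvancedAction extension; the signatures below are illustrative assumptions, not the eXo API:

```java
// Simplified stand-ins: a failing plain Action is just logged, while a
// failing AdvancedAction gets its onError() callback invoked instead.
interface Action {
  void execute(Object context) throws Exception;
}

interface AdvancedAction extends Action {
  void onError(Exception e, Object context);
}

class ActionLauncher {
  String lastLog; // stands in for the logger used by the default behaviour

  void launch(Action action, Object context) {
    try {
      action.execute(context);
    } catch (Exception e) {
      if (action instanceof AdvancedAction) {
        ((AdvancedAction) action).onError(e, context); // custom handling
      } else {
        lastLog = "action failed: " + e.getMessage();   // default: just log
      }
    }
  }
}
```

An onError implementation would typically revert the session's pending changes, as AbstractAdvancedAction does.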

One important recommendation applies to any extension action implementation. Each action adds its own execution time to that of the standard JCR methods (Session.addNode(), Session.setProperty(), Workspace.move(), etc.). As a consequence, it is necessary to minimize the execution time of the Action.execute(Context) body.

As a rule, you can run custom logic in a dedicated thread inside the Action.execute(Context) body. However, if your application logic requires the action to add items to a created/updated item, and you save these changes immediately after the JCR API method call returns, this dedicated-thread approach is not applicable.
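
One way to follow this advice is to hand the heavy part of the action body to a shared executor, so that Action.execute(Context) returns immediately. This is a generic sketch: the helper class, its name, and the Runnable standing in for your custom logic are all illustrative, not part of the eXo API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Keeps the JCR method fast: an Action.execute(Context) body would call
// runAsync(...) and return at once, while the custom logic runs on a
// background thread. Daemon threads ensure the pool never blocks JVM
// shutdown.
class AsyncActionHelper {
  private static final ExecutorService POOL = Executors.newFixedThreadPool(2, runnable -> {
    Thread thread = new Thread(runnable, "jcr-action-worker");
    thread.setDaemon(true);
    return thread;
  });

  // Submit the heavy work and return immediately.
  static Future<?> runAsync(Runnable customLogic) {
    return POOL.submit(customLogic);
  }
}
```

Remember the caveat above: if the action must modify the saved item before the save completes, this asynchronous approach does not apply.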

Add a SessionActionCatalog service and an appropriate AddActionsPlugin configuration (see the example below) to your eXo Container configuration. As usual, the plugin can be configured in-component-place (the case for a Standalone Container) or externally (the usual case for Root/Portal Container configuration).

Each Action entry is exposed as an org.exoplatform.services.jcr.impl.ext.action.ActionConfiguration element of the actions collection of org.exoplatform.services.jcr.impl.ext.action.AddActionsPlugin$ActionsConfig (see the example below). The mandatory field actionClassName is the fully qualified name of an org.exoplatform.services.command.action.Action implementation - the command to be launched when the current event matches the criteria. All other fields are criteria. The criteria are ANDed together; in other words, for a particular item to be listened to, it must meet ALL of the criteria:

  • workspace: a comma-delimited (ORed) list of workspaces.

  • eventTypes: a comma-delimited (ORed) list of event names (see below) to be listened to. This is the only mandatory field; the others are optional, and if they are missing they are interpreted as ANY.

  • path: a comma-delimited (ORed) list of item absolute paths (or within their subtrees if isDeep is true, which is the default value).

  • nodeTypes: a comma-delimited (ORed) list of node types. Since version 1.6.1, JCR supports the functionality of both nodeType and parentNodeType. This parameter has different semantics, depending on the type of the current item and the operation performed. If the current item is a property, it means the parent node type. If the current item is a node, the semantics depend on the event type:

    • add node event: the node type of the newly added node.

    • add mixin event: the newly added mixin node type of the current node.

    • remove mixin event: the removed mixin type of the current node.

    • other events: the already assigned node type(s) of the current node (can be both primary and mixin).

The list of supported Event names: addNode, addProperty, changeProperty, removeProperty, removeNode, addMixin, removeMixin, lock, unlock, checkin, checkout, read, moveNode.

<component>
   <type>org.exoplatform.services.jcr.impl.ext.action.SessionActionCatalog</type>
   <component-plugins>
      <component-plugin>
         <name>addActions</name>
         <set-method>addPlugin</set-method>
         <type>org.exoplatform.services.jcr.impl.ext.action.AddActionsPlugin</type>
         <description>add actions plugin</description>
         <init-params>
            <object-param>
               <name>actions</name>
               <object type="org.exoplatform.services.jcr.impl.ext.action.AddActionsPlugin$ActionsConfig">
               <field  name="actions">
                  <collection type="java.util.ArrayList">
                     <value>
                        <object type="org.exoplatform.services.jcr.impl.ext.action.ActionConfiguration">
                          <field  name="eventTypes"><string>addNode,removeNode</string></field>
                          <field  name="path"><string>/test,/exo:test</string></field>       
                          <field  name="isDeep"><boolean>true</boolean></field>       
                          <field  name="nodeTypes"><string>nt:file,nt:folder,mix:lockable</string></field>       
                          <!-- field  name="workspace"><string>backup</string></field -->
                          <field  name="actionClassName"><string>org.exoplatform.services.jcr.ext.DummyAction</string></field>       
                        </object>
                     </value>
                  </collection>
               </field>
            </object>
          </object-param>
        </init-params>
      </component-plugin>
    </component-plugins>
</component>
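The matching semantics described above (each field is a comma-delimited OR-list, all configured fields are ANDed, a missing field matches anything) can be sketched in plain Java. This is only an illustration of the rules, not the actual SessionActionCatalog matching code, and it leaves out path/isDeep handling:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative matcher: each configured field is a comma-delimited OR-list,
// and all configured fields must match (AND). A null field means ANY.
public class CriteriaMatcher {

    static boolean fieldMatches(String configured, String actual) {
        if (configured == null) return true; // missing criterion matches anything
        List<String> alternatives = Arrays.asList(configured.split(","));
        return alternatives.contains(actual);
    }

    public static boolean matches(String eventTypes, String workspaces, String nodeTypes,
                                  String event, String workspace, String nodeType) {
        return fieldMatches(eventTypes, event)
            && fieldMatches(workspaces, workspace)
            && fieldMatches(nodeTypes, nodeType);
    }

    public static void main(String[] args) {
        // addNode in workspace "production" on an nt:file node: all criteria met
        System.out.println(matches("addNode,removeNode", null, "nt:file,nt:folder",
                                   "addNode", "production", "nt:file")); // true
        // workspace criterion "backup" does not match "production": ANDing fails
        System.out.println(matches("addNode,removeNode", "backup", null,
                                   "addNode", "production", "nt:unstructured")); // false
    }
}
```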

The eXo JCR implementation supports two ways of registering node types: programmatically, through the ExtendedNodeTypeManager API, and declaratively, through XML configuration files loaded at start-up.

The ExtendedNodeTypeManager (from JCR 1.11) interface provides the following methods related to registering node types:

public static final int IGNORE_IF_EXISTS  = 0;

public static final int FAIL_IF_EXISTS    = 2;

public static final int REPLACE_IF_EXISTS = 4;

 /**
  * Return NodeType for a given InternalQName.
  *
  * @param qname nodetype name
  * @return NodeType
  * @throws NoSuchNodeTypeException if no nodetype found with the name
  * @throws RepositoryException Repository error
  */
NodeType findNodeType(InternalQName qname) throws NoSuchNodeTypeException, RepositoryException;

/**
 * Registers node type using value object.
 * 
 * @param nodeTypeValue
 * @param alreadyExistsBehaviour
 * @throws RepositoryException
 */
NodeType registerNodeType(NodeTypeValue nodeTypeValue, int alreadyExistsBehaviour) throws RepositoryException;

/**
 * Registers all node types using XML binding value objects from xml stream.
 * 
 * @param xml an InputStream
 * @param alreadyExistsBehaviour an int
 * @throws RepositoryException
 */
NodeTypeIterator registerNodeTypes(InputStream xml, int alreadyExistsBehaviour, String contentType)
   throws RepositoryException;

/**
 * Gives the {@link NodeTypeDataManager}
 * 
 * @throws RepositoryException if another error occurs.
 */
NodeTypeDataManager getNodeTypesHolder() throws RepositoryException;

/**
 * Return <code>NodeTypeValue</code> for a given nodetype name. Used for
 * nodetype update. Value can be edited and registered via
 * <code>registerNodeType(NodeTypeValue nodeTypeValue, int alreadyExistsBehaviour)</code>
 * .
 * 
 * @param ntName nodetype name
 * @return NodeTypeValue
 * @throws NoSuchNodeTypeException if no nodetype found with the name
 * @throws RepositoryException Repository error
 */
NodeTypeValue getNodeTypeValue(String ntName) throws NoSuchNodeTypeException, RepositoryException;

/**
 * Registers or updates the specified <code>Collection</code> of
 * <code>NodeTypeValue</code> objects. This method is used to register or
 * update a set of node types with mutual dependencies. Returns an iterator
 * over the resulting <code>NodeType</code> objects. <p/> The effect of the
 * method is "all or nothing"; if an error occurs, no node types are
 * registered or updated. <p/> Throws an
 * <code>InvalidNodeTypeDefinitionException</code> if a
 * <code>NodeTypeDefinition</code> within the <code>Collection</code> is
 * invalid or if the <code>Collection</code> contains an object of a type
 * other than <code>NodeTypeDefinition</code> . <p/> Throws a
 * <code>NodeTypeExistsException</code> if <code>allowUpdate</code> is
 * <code>false</code> and a <code>NodeTypeDefinition</code> within the
 * <code>Collection</code> specifies a node type name that is already
 * registered. <p/> Throws an
 * <code>UnsupportedRepositoryOperationException</code> if this implementation
 * does not support node type registration.
 * 
 * @param values a collection of <code>NodeTypeValue</code>s
 * @param alreadyExistsBehaviour an int
 * @return the registered node types.
 * @throws InvalidNodeTypeDefinitionException if a
 *           <code>NodeTypeDefinition</code> within the
 *           <code>Collection</code> is invalid or if the
 *           <code>Collection</code> contains an object of a type other than
 *           <code>NodeTypeDefinition</code>.
 * @throws NodeTypeExistsException if <code>allowUpdate</code> is
 *           <code>false</code> and a <code>NodeTypeDefinition</code> within
 *           the <code>Collection</code> specifies a node type name that is
 *           already registered.
 * @throws UnsupportedRepositoryOperationException if this implementation does
 *           not support node type registration.
 * @throws RepositoryException if another error occurs.
 */
public NodeTypeIterator registerNodeTypes(List<NodeTypeValue> values, int alreadyExistsBehaviour)
   throws UnsupportedRepositoryOperationException, RepositoryException;

/**
 * Unregisters the specified node type.
 * 
 * @param name a <code>String</code>.
 * @throws UnsupportedRepositoryOperationException if this implementation does
 *           not support node type registration.
 * @throws NoSuchNodeTypeException if no registered node type exists with the
 *           specified name.
 * @throws RepositoryException if another error occurs.
 */
public void unregisterNodeType(String name) throws UnsupportedRepositoryOperationException, NoSuchNodeTypeException,
   RepositoryException;

/**
 * Unregisters the specified set of node types.<p/> Used to unregister a set
 * of node types with mutual dependencies.
 * 
 * @param names a <code>String</code> array
 * @throws UnsupportedRepositoryOperationException if this implementation does
 *           not support node type registration.
 * @throws NoSuchNodeTypeException if one of the names listed is not a
 *           registered node type.
 * @throws RepositoryException if another error occurs.
 */
public void unregisterNodeTypes(String[] names) throws UnsupportedRepositoryOperationException,
   NoSuchNodeTypeException, RepositoryException;

The NodeTypeValue interface represents a simple container structure used to define node types which are then registered through the ExtendedNodeTypeManager.registerNodeType method. The implementation of this interface does not contain any validation logic.

/**
 * @return Returns the declaredSupertypeNames.
 */
public List<String> getDeclaredSupertypeNames();

/**
 * @param declaredSupertypeNames
 *          The declaredSupertypeNames to set.
 */
public void setDeclaredSupertypeNames(List<String> declaredSupertypeNames);

/**
 * @return Returns the mixin.
 */
public boolean isMixin();

/**
 * @param mixin
 *          The mixin to set.
 */
public void setMixin(boolean mixin);

/**
 * @return Returns the name.
 */
public String getName();

/**
 * @param name
 *          The name to set.
 */
public void setName(String name);

/**
 * @return Returns the orderableChild.
 */
public boolean isOrderableChild();

/**
 * @param orderableChild
 *          The orderableChild to set.
 */
public void setOrderableChild(boolean orderableChild);

/**
 * @return Returns the primaryItemName.
 */
public String getPrimaryItemName();

/**
 * @param primaryItemName
 *          The primaryItemName to set.
 */
public void setPrimaryItemName(String primaryItemName);

/**
 * @return Returns the declaredChildNodeDefinitionValues.
 */
public List<NodeDefinitionValue> getDeclaredChildNodeDefinitionValues();

/**
 * @param declaredChildNodeDefinitionValues
 *          The declaredChildNodeDefinitionValues to set.
 */
public void setDeclaredChildNodeDefinitionValues(List<NodeDefinitionValue> declaredChildNodeDefinitionValues);

/**
 * @return Returns the declaredPropertyDefinitionValues.
 */
public List<PropertyDefinitionValue> getDeclaredPropertyDefinitionValues();

/**
 * @param declaredPropertyDefinitionValues
 *          The declaredPropertyDefinitionValues to set.
 */
public void setDeclaredPropertyDefinitionValues(List<PropertyDefinitionValue> declaredPropertyDefinitionValues);

The Registry Service is one of the key parts of the infrastructure built around eXo JCR. Each JCR-based service, application, etc. may have its own configuration, settings and other data that have to be stored persistently and used by the appropriate service or application (we call such a party a "Consumer").

The service acts as a centralized collector (Registry) for such data. Naturally, the registry storage is JCR based, i.e. stored in some JCR workspace (one per repository) as an item tree under the /exo:registry node.

Despite the fact that the structure of the tree is well defined (see the scheme below), for better flexibility it is not recommended that other services manipulate the data directly through the JCR API. The Registry Service therefore acts as a mediator between a Consumer and its settings.

The proposed structure of the Registry Service storage is divided into 3 logical groups: services, applications and users:

 exo:registry/          <-- registry "root" (exo:registry)
   exo:services/        <-- service data storage (exo:registryGroup)
     service1/
       Consumer data    (exo:registryEntry)
     ...
   exo:applications/    <-- application data storage (exo:registryGroup)
     app1/
       Consumer data    (exo:registryEntry)
     ...
   exo:users/           <-- user personal data storage (exo:registryGroup)
     user1/
       Consumer data    (exo:registryEntry)
     ...

Each upper-level eXo service may store its configuration in the eXo Registry: at first it is read from the XML configuration (in a jar, etc.) and afterwards from the Registry. In the configuration file, you can add the force-xml-configuration parameter to a component to skip reading the initialization parameters from the RegistryService and use the file instead:

<value-param>
  <name>force-xml-configuration</name>
  <value>true</value>
</value-param>

The main functionality of the Registry Service is pretty simple and straightforward; it is described by the Registry abstract class as follows:

public abstract class Registry
{

   /**
    * Returns Registry node object which wraps Node of "exo:registry" type (the whole registry tree)
    */
   public abstract RegistryNode getRegistry(SessionProvider sessionProvider) throws RepositoryConfigurationException,
      RepositoryException;

   /**
    * Returns an existing RegistryEntry which wraps a Node of "exo:registryEntry" type
    */
   public abstract RegistryEntry getEntry(SessionProvider sessionProvider, String entryPath)
      throws PathNotFoundException, RepositoryException;

   /**
    * Creates an entry in the group. If the group does not exist, it will be silently
    * created as well
    */
   public abstract void createEntry(SessionProvider sessionProvider, String groupPath, RegistryEntry entry)
      throws RepositoryException;

   /**
    * updates an entry in the group
    */
   public abstract void recreateEntry(SessionProvider sessionProvider, String groupPath, RegistryEntry entry)
      throws RepositoryException;

   /**
    * removes entry located on entryPath (concatenation of group path / entry name)
    */
   public abstract void removeEntry(SessionProvider sessionProvider, String entryPath) throws RepositoryException;

}

As you can see, it looks like a simple CRUD interface for the RegistryEntry object, which wraps the registry data of some Consumer as a Registry Entry. The Registry Service itself knows nothing about the wrapped data; it is the Consumer's responsibility to manage and use its data in its own way.

To create an Entry, the Consumer should either serialize its data to some XML structure and create a RegistryEntry from these data at once, or populate them in a RegistryEntry object (using the RegistryEntry(String entryName) constructor and then obtaining and filling a DOM document).

Example of using the RegistryService:

    RegistryService regService = (RegistryService) container
        .getComponentInstanceOfType(RegistryService.class);

    RegistryEntry registryEntry = regService.getEntry(sessionProvider,
            RegistryService.EXO_SERVICES + "/my-service");

    Document doc = registryEntry.getDocument();

    String mySetting = doc.getElementsByTagName("tagname").item(index).getTextContent();
     .....
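The Consumer-side work of serializing settings into a DOM document and reading them back uses only the standard javax.xml and org.w3c.dom APIs. The following self-contained sketch shows that round trip; the my-service and my-setting element names are invented for the example:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Round trip of a Consumer's settings through a DOM document, using only
// the standard JDK XML APIs. Element names are hypothetical.
public class RegistrySettingsSketch {

    static String roundTrip() throws Exception {
        // Build the DOM document that would back a RegistryEntry
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("my-service");
        doc.appendChild(root);
        Element setting = doc.createElement("my-setting");
        setting.setTextContent("42");
        root.appendChild(setting);

        // Read the setting back, as the Consumer would after getEntry(...)
        return doc.getElementsByTagName("my-setting").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints "42"
    }
}
```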

Support of node types and namespaces is required by the JSR-170 specification. Beyond the methods required by the specification, eXo JCR has its own API extension for the Node type registration as well as the ability to declaratively define node types in the Repository at the start-up time.

The node type registration extension is declared in the org.exoplatform.services.jcr.core.nodetype.ExtendedNodeTypeManager interface.

Your custom service can register necessary predefined node types at start-up time. The node definitions should be placed in a special XML file (see the DTD below) and declared in the service's configuration file thanks to the eXo component plugin mechanism, described as follows:

<external-component-plugins>
  <target-component>org.exoplatform.services.jcr.RepositoryService</target-component>
      <component-plugin>
        <name>add.nodeType</name>
        <set-method>addPlugin</set-method>
        <type>org.exoplatform.services.jcr.impl.AddNodeTypePlugin</type>
        <init-params>
          <values-param>
            <name>autoCreatedInNewRepository</name>
            <description>Node types configuration file</description>
            <value>jar:/conf/test/nodetypes-tck.xml</value>
            <value>jar:/conf/test/nodetypes-impl.xml</value>
          </values-param>
          <values-param>
            <name>repo1</name>
            <description>Node types configuration file for repository with name repo1</description>
            <value>jar:/conf/test/nodetypes-test.xml</value>
          </values-param>
          <values-param>
            <name>repo2</name>
            <description>Node types configuration file for repository with name repo2</description>
            <value>jar:/conf/test/nodetypes-test2.xml</value>
          </values-param>
        </init-params>
      </component-plugin>
</external-component-plugins>

There are two types of registration. The first registers node types in all created repositories; it is configured in the values-param named autoCreatedInNewRepository. The second registers node types in a specified repository; it is configured in a values-param bearing the name of that repository.

Node type definition file format:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE nodeTypes [
   <!ELEMENT nodeTypes (nodeType)*>
      <!ELEMENT nodeType (supertypes?|propertyDefinitions?|childNodeDefinitions?)>

      <!ATTLIST nodeType
         name CDATA #REQUIRED
         isMixin (true|false) #REQUIRED
         hasOrderableChildNodes (true|false)
         primaryItemName CDATA
      >
      <!ELEMENT supertypes (supertype*)>
      <!ELEMENT supertype (CDATA)>
   
      <!ELEMENT propertyDefinitions (propertyDefinition*)>

      <!ELEMENT propertyDefinition (valueConstraints?|defaultValues?)>
      <!ATTLIST propertyDefinition
         name CDATA #REQUIRED
         requiredType (String|Date|Path|Name|Reference|Binary|Double|Long|Boolean|undefined) #REQUIRED
         autoCreated (true|false) #REQUIRED
         mandatory (true|false) #REQUIRED
         onParentVersion (COPY|VERSION|INITIALIZE|COMPUTE|IGNORE|ABORT) #REQUIRED
         protected (true|false) #REQUIRED
         multiple  (true|false) #REQUIRED
      >    
    <!-- For example if you need to set ValueConstraints [], 
      you have to add an empty element <valueConstraints/>. 
      The same order is for other properties like defaultValues, requiredPrimaryTypes etc.
      -->  
      <!ELEMENT valueConstraints (valueConstraint*)>
      <!ELEMENT valueConstraint (CDATA)>
      <!ELEMENT defaultValues (defaultValue*)>
      <!ELEMENT defaultValue (CDATA)>

      <!ELEMENT childNodeDefinitions (childNodeDefinition*)>

      <!ELEMENT childNodeDefinition (requiredPrimaryTypes)>
      <!ATTLIST childNodeDefinition
         name CDATA #REQUIRED
         defaultPrimaryType  CDATA #REQUIRED
         autoCreated (true|false) #REQUIRED
         mandatory (true|false) #REQUIRED
         onParentVersion (COPY|VERSION|INITIALIZE|COMPUTE|IGNORE|ABORT) #REQUIRED
         protected (true|false) #REQUIRED
         sameNameSiblings (true|false) #REQUIRED
      >
      <!ELEMENT requiredPrimaryTypes (requiredPrimaryType+)>
      <!ELEMENT requiredPrimaryType (CDATA)>  
]>
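As an illustration, a minimal node type definition file conforming to this DTD could look like the following; the exo:myType node type and its property are invented for the example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<nodeTypes>
  <nodeType name="exo:myType" isMixin="false" hasOrderableChildNodes="false" primaryItemName="">
    <supertypes>
      <supertype>nt:base</supertype>
    </supertypes>
    <propertyDefinitions>
      <propertyDefinition name="exo:myProperty" requiredType="String" autoCreated="false"
                          mandatory="false" onParentVersion="COPY" protected="false" multiple="false">
        <!-- empty element: no value constraints (ValueConstraints []) -->
        <valueConstraints/>
      </propertyDefinition>
    </propertyDefinitions>
  </nodeType>
</nodeTypes>
```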

This section provides you the knowledge about eXo JCR configuration in details, including the basic and advanced configuration.

Like other eXo services, eXo JCR can be configured and used in the portal or embedded mode (as a service embedded in GateIn) and in standalone mode.

In embedded mode, JCR services are registered in the Portal container; the second option is to use a Standalone container. The main difference between these container types is that the former is intended for a Portal (web) environment, while the latter can be used standalone (see the comprehensive page Service Configuration for Beginners for more details).

The following setup procedure is used to obtain a Standalone configuration (see more in Container configuration):

  • Configuration that is set explicitly using StandaloneContainer.addConfigurationURL(String url) or StandaloneContainer.addConfigurationPath(String path) before getInstance()

  • Configuration from the $base:directory/exo-configuration.xml or $base:directory/conf/exo-configuration.xml file, where $base:directory is either the AS's home directory in the case of a J2EE AS environment, or just the current directory in the case of a standalone application.

  • /conf/exo-configuration.xml in the current classloader (e.g. war, ear archive)

  • Configuration from $service_jar_file/conf/portal/configuration.xml. WARNING: Don't rely on one concrete jar's configuration if more than one jar contains a conf/portal/configuration.xml file; in that case, which configuration is chosen is unpredictable.
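The lookup order above is a simple first-match-wins chain. The following self-contained sketch illustrates that resolution logic only; the file names are stand-ins and this is not the actual StandaloneContainer code:

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Illustrative first-match-wins resolution, mirroring the lookup order above:
// explicitly added configuration wins over files in the base directory
// (classpath resources, the later steps, are not modeled here).
public class ConfigLookupSketch {

    static Optional<File> resolve(List<File> candidates) {
        return candidates.stream().filter(File::exists).findFirst();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a configuration added via addConfigurationPath(...)
        File explicit = File.createTempFile("exo-configuration", ".xml");
        List<File> candidates = Arrays.asList(
                explicit,                                // 1. explicitly added
                new File("exo-configuration.xml"),       // 2. $base:directory
                new File("conf/exo-configuration.xml")); // 3. $base:directory/conf
        System.out.println(resolve(candidates).get().equals(explicit)); // prints "true"
        explicit.delete();
    }
}
```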

JCR service configuration looks like:

<component>
  <key>org.exoplatform.services.jcr.RepositoryService</key>
  <type>org.exoplatform.services.jcr.impl.RepositoryServiceImpl</type>
</component>
<component>
  <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key>
  <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type>
  <init-params>
    <value-param>
      <name>conf-path</name>
      <description>JCR repositories configuration file</description>
      <value>jar:/conf/standalone/exo-jcr-config.xml</value>
    </value-param>
    <value-param>
      <name>max-backup-files</name>
      <value>5</value>
    </value-param>
    <properties-param>
      <name>working-conf</name>
      <description>working-conf</description>
      <property name="source-name" value="jdbcjcr" />
      <property name="dialect" value="hsqldb" />
      <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" />
    </properties-param>
  </init-params>
</component>

conf-path : a path to a RepositoryService JCR Configuration.

max-backup-files : the maximum number of backup files to keep. The number of stored backups cannot exceed this value; once the limit is reached, a new backup file replaces the oldest one.

working-conf : optional; the JCR configuration persister configuration. If working-conf is absent, the persister is disabled.

time-out: time after which an unused global lock will be removed.

persister: a class for storing lock information for future use, for example, to remove locks after a JCR restart.

path: the lock folder. Each workspace has its own one.

Note

Also see lock-remover-max-threads repository configuration parameter.
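Put together, a lock-manager workspace entry using these parameters could look as follows; the FileSystemLockPersister class name and the path value are illustrative and should be checked against your eXo JCR version:

```xml
<lock-manager>
  <time-out>15m</time-out>
  <persister class="org.exoplatform.services.jcr.impl.core.lock.FileSystemLockPersister">
    <properties>
      <property name="path" value="target/temp/lock/ws" />
    </properties>
  </persister>
</lock-manager>
```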

<!ELEMENT repository-service (repositories)>
<!ATTLIST repository-service default-repository NMTOKEN #REQUIRED>
<!ELEMENT repositories (repository)>
<!ELEMENT repository (security-domain,access-control,session-max-age,authentication-policy,workspaces)>
<!ATTLIST repository
  default-workspace NMTOKEN #REQUIRED
  name NMTOKEN #REQUIRED
  system-workspace NMTOKEN #REQUIRED
>
<!ELEMENT security-domain (#PCDATA)>
<!ELEMENT access-control (#PCDATA)>
<!ELEMENT session-max-age (#PCDATA)>
<!ELEMENT authentication-policy (#PCDATA)>
<!ELEMENT workspaces (workspace+)>
<!ELEMENT workspace (container,initializer,cache,query-handler)>
<!ATTLIST workspace name NMTOKEN #REQUIRED>
<!ELEMENT container (properties,value-storages)>
<!ATTLIST container class NMTOKEN #REQUIRED>
<!ELEMENT value-storages (value-storage+)>
<!ELEMENT value-storage (properties,filters)>
<!ATTLIST value-storage class NMTOKEN #REQUIRED>
<!ELEMENT filters (filter+)>
<!ELEMENT filter EMPTY>
<!ATTLIST filter property-type NMTOKEN #REQUIRED>
<!ELEMENT initializer (properties)>
<!ATTLIST initializer class NMTOKEN #REQUIRED>
<!ELEMENT cache (properties)>
<!ATTLIST cache 
  enabled NMTOKEN #REQUIRED
  class NMTOKEN #REQUIRED
>
<!ELEMENT query-handler (properties)>
<!ATTLIST query-handler class NMTOKEN #REQUIRED>
<!ELEMENT access-manager (properties)>
<!ATTLIST access-manager class NMTOKEN #REQUIRED>
<!ELEMENT lock-manager (time-out,persister)>
<!ELEMENT time-out (#PCDATA)>
<!ELEMENT persister (properties)>
<!ELEMENT properties (property+)>
<!ELEMENT property EMPTY>
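As an illustration of this DTD, a repository configuration skeleton could look like the following; all names and class values are examples only, not a ready-to-use configuration, and the cache and query-handler entries are elided:

```xml
<repository-service default-repository="repository">
  <repositories>
    <repository name="repository" system-workspace="production" default-workspace="production">
      <security-domain>exo-domain</security-domain>
      <access-control>optional</access-control>
      <session-max-age>1h</session-max-age>
      <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
      <workspaces>
        <workspace name="production">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
            </properties>
          </container>
          ...
        </workspace>
      </workspaces>
    </repository>
  </repositories>
</repository-service>
```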

You can configure the values of properties defined in the file repository-configuration.xml using system properties. This is quite helpful when you want to change the default configuration of all the workspaces: disabling RDBMS indexing for every workspace, for example, is very error prone without it. The overriding applies to all components that are configured through properties, such as container, value-storage, workspace-initializer, cache, query-handler, lock-manager, access-manager and persister.

To turn on this feature you need to define a component called SystemParametersPersistenceConfigurator. A simple example:

  <component>
    <key>org.exoplatform.services.jcr.config.SystemParametersPersistenceConfigurator</key>
    <type>org.exoplatform.services.jcr.config.SystemParametersPersistenceConfigurator</type>
    <init-params>
      <value-param>
        <name>file-path</name>
        <value>target/temp</value>
      </value-param>
      <values-param>
        <name>unmodifiable</name>
        <value>cache.test-parameter-I</value>
      </values-param>
      <values-param>
        <name>before-initialize</name>
        <value>value-storage.enabled</value>
      </values-param>
    </init-params>
  </component>

To make the configuration process easier, you can define three parameters here.

Parameters in the list have the following format: {component-name}.{parameter-name}. This takes effect for every workspace component called {component-name}.

Please take into account that if this component is not defined in the configuration, the mechanism for overriding the workspace configuration with system properties is disabled. In other words, if you don't configure SystemParametersPersistenceConfigurator, the system properties are ignored.
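The overriding principle itself is simple: a value supplied as a system property, when present, wins over the value from the configuration file. A generic sketch of that precedence follows; the property key reuses the {component-name}.{parameter-name} format from above, but the lookup code is purely illustrative, not the actual eXo implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative precedence: a system property, if set, overrides the
// value parsed from the XML configuration file.
public class ConfigOverrideSketch {

    static String effectiveValue(Map<String, String> xmlConfig, String key) {
        String override = System.getProperty(key);
        return override != null ? override : xmlConfig.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> xmlConfig = new HashMap<>();
        xmlConfig.put("cache.test-parameter-I", "from-xml");

        // No system property set yet: the XML value applies
        System.out.println(effectiveValue(xmlConfig, "cache.test-parameter-I")); // from-xml

        // Once the system property is set, it wins
        System.setProperty("cache.test-parameter-I", "from-system-property");
        System.out.println(effectiveValue(xmlConfig, "cache.test-parameter-I")); // from-system-property
    }
}
```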

Whenever a relational database is used to store the multilingual text data of eXo Java Content Repository, the configuration must be adapted to support UTF-8 encoding. Below is a short HOWTO for several supported RDBMS, with examples.

The configuration file you have to modify: .../webapps/portal/WEB-INF/conf/jcr/repository-configuration.xml

In order to run a multilanguage JCR on an Oracle backend, Unicode encoding for the character set should be applied to the database. Other Oracle globalization parameters have no impact; the only property to modify is NLS_CHARACTERSET.

We have tested NLS_CHARACTERSET = AL32UTF8 and it works well for many European and Asian languages.

Example of database configuration (used for JCR testing):

NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CHARACTERSET         AL32UTF8
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-RR
NLS_DATE_LANGUAGE        AMERICAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY        $
NLS_COMP                 BINARY
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE
NLS_NCHAR_CHARACTERSET   AL16UTF16

Create database with Unicode encoding and use Oracle dialect for the Workspace Container:

<workspace name="collaboration">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="dialect" value="oracle" />
              <property name="multi-db" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="target/temp/swap/ws" />
            </properties>
          .....

DB2 Universal Database (DB2 UDB) supports UTF-8 and UTF-16/UCS-2. When a Unicode database is created, CHAR, VARCHAR and LONG VARCHAR data are stored in UTF-8 form. This is enough for JCR multilingual support.

Example of UTF-8 database creation:

DB2 CREATE DATABASE dbname USING CODESET UTF-8 TERRITORY US

Create database with UTF-8 encoding and use db2 dialect for Workspace Container on DB2 v.9 and higher:

<workspace name="collaboration">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="dialect" value="db2" />
              <property name="multi-db" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="target/temp/swap/ws" />
            </properties>
          .....

Note

For DB2 v.8.x support change the property "dialect" to db2v8.

The JCR MySQL backend requires the special dialect MySQL-UTF8 for internationalization support. However, the database default charset should be latin1, to use the limited index space effectively (1000 bytes for the MyISAM engine, 767 for InnoDB). If the database default charset is multibyte, a JCR database initialization error is thrown, concerning an index creation failure. In other words, JCR can work with any single-byte default database charset, with UTF-8 supported by the MySQL server; but we have tested it only with the latin1 database default charset.

Repository configuration, workspace container entry example:

<workspace name="collaboration">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="dialect" value="mysql-utf8" />
              <property name="multi-db" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="target/temp/swap/ws" />
            </properties>
          .....

You will also need to indicate the charset name, either at server level using the server parameter --character-set-server or at datasource configuration level by adding a new property as below:

          <property name="connectionProperties" value="useUnicode=yes;characterEncoding=utf8;characterSetResults=UTF-8;" />
    

On a PostgreSQL/PostgrePlus backend, multilingual support can be enabled in different ways:

  • Using the locale features of the operating system to provide locale-specific collation order, number formatting, translated messages, and other aspects. UTF-8 is widely used on Linux distributions by default, so it can be useful in such case.

  • Providing a number of different character sets defined in the PostgreSQL/PostgrePlus server, including multiple-byte character sets, to support storing text in any language, and providing character set translation between client and server. We recommend using the UTF-8 database charset; it allows any-to-any conversion and makes this issue transparent to the JCR.

Create database with UTF-8 encoding and use a PgSQL dialect for Workspace Container:

<workspace name="collaboration">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="dialect" value="pgsql" />
              <property name="multi-db" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="target/temp/swap/ws" />
            </properties>
          .....

Frequently, a single database instance must be shared by several applications. But some of our customers have also asked for a way to host several JCR instances in the same database instance. To fulfill this need, we reviewed our queries and scoped them to the current schema; it is now possible to have one JCR instance per DB schema instead of one per DB instance. To benefit from the work done for this feature, you will need to apply the configuration changes described below.

To enable this feature you need to replace org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore with org.exoplatform.services.jcr.infinispan.JdbcStringBasedCacheStore in Infinispan configuration file.

Here is an example of this very part of the configuration:

<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">

    <global>
      <evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
        <properties>
          <property name="threadNamePrefix" value="EvictionThread"/>
        </properties>
      </evictionScheduledExecutor>

      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>
    </global>

    <default>
      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>
      <transaction transactionManagerLookupClass="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <eviction strategy="NONE"/>

      <loaders passivation="false" shared="true" preload="true">
        <store class="org.exoplatform.services.jcr.infinispan.JdbcStringBasedCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
          <properties>
             <property name="stringsTableNamePrefix" value="${infinispan-cl-cache.jdbc.table.name}"/>
             <property name="idColumnName" value="${infinispan-cl-cache.jdbc.id.column}"/>
             <property name="dataColumnName" value="${infinispan-cl-cache.jdbc.data.column}"/>
             <property name="timestampColumnName" value="${infinispan-cl-cache.jdbc.timestamp.column}"/>
             <property name="idColumnType" value="${infinispan-cl-cache.jdbc.id.type}"/>
             <property name="dataColumnType" value="${infinispan-cl-cache.jdbc.data.type}"/>
             <property name="timestampColumnType" value="${infinispan-cl-cache.jdbc.timestamp.type}"/>
             <property name="dropTableOnExit" value="${infinispan-cl-cache.jdbc.table.drop}"/>
             <property name="createTableOnStart" value="${infinispan-cl-cache.jdbc.table.create}"/>
             <property name="connectionFactoryClass" value="${infinispan-cl-cache.jdbc.connectionFactory}"/>
             <property name="datasourceJndiLocation" value="${infinispan-cl-cache.jdbc.datasource}"/>
          </properties>
          <async enabled="false"/>
        </store>
      </loaders>
   </default>

</infinispan>

You can also obtain an example file from GitHub.

Search is an important function in eXo JCR, so it is essential to know how to configure the eXo JCR search tool.

Table 1.2. 

Parameter | Default | Description | Since
index-dir | none | The location of the index directory. This parameter is mandatory. Up to 1.9, this parameter was called "indexDir". | 1.0
use-compoundfile | true | Advises Lucene to use compound files for the index files. | 1.9
min-merge-docs | 100 | Minimum number of nodes in an index until segments are merged. | 1.9
volatile-idle-time | 3 | Idle time in seconds until the volatile index part is moved to a persistent index, even though minMergeDocs is not reached. | 1.9
max-merge-docs | Integer.MAX_VALUE | Maximum number of nodes in segments that will be merged. The default value changed in JCR 1.9 to Integer.MAX_VALUE. | 1.9
merge-factor | 10 | Determines how often segment indices are merged. | 1.9
max-field-length | 10000 | The maximum number of words per property that are fulltext indexed. | 1.9
cache-size | 1000 | Size of the document number cache. This cache maps UUIDs to Lucene document numbers. | 1.9
force-consistencycheck | false | Runs a consistency check on every startup. If false, a consistency check is only performed when the search index detects a prior forced shutdown. | 1.9
auto-repair | true | Errors detected by a consistency check are automatically repaired. If false, errors are only written to the log. | 1.9
query-class | QueryImpl | Class name that implements the javax.jcr.query.Query interface. This class must also extend org.exoplatform.services.jcr.impl.core.query.AbstractQueryImpl. | 1.9
document-order | true | If true and the query does not contain an 'order by' clause, result nodes will be in document order. For better performance when queries return a lot of nodes, set to 'false'. | 1.9
result-fetch-size | Integer.MAX_VALUE | The number of results to fetch when a query is executed. Default value: Integer.MAX_VALUE (-> all). | 1.9
excerptprovider-class | DefaultXMLExcerpt | The name of the class that implements org.exoplatform.services.jcr.impl.core.query.lucene.ExcerptProvider and should be used for the rep:excerpt() function in a query. | 1.9
support-highlighting | false | If set to true, additional information is stored in the index to support highlighting using the rep:excerpt() function. | 1.9
synonymprovider-class | none | The name of a class that implements org.exoplatform.services.jcr.impl.core.query.lucene.SynonymProvider. The default value is null (-> not set). | 1.9
synonymprovider-config-path | none | The path to the synonym provider configuration file. This path is interpreted relative to the path parameter. If there is a path element inside the SearchIndex element, the path is interpreted relative to the root path of that path. Whether this parameter is mandatory depends on the synonym provider implementation. The default value is null (-> not set). | 1.9
indexing-configuration-path | none | The path to the indexing configuration file. | 1.9
indexing-configuration-class | IndexingConfigurationImpl | The name of the class that implements org.exoplatform.services.jcr.impl.core.query.lucene.IndexingConfiguration. | 1.9
force-consistencycheck | false | If set to true, a consistency check is performed, depending on the parameter forceConsistencyCheck. If set to false, no consistency check is performed on startup, even if a redo log had been applied. | 1.9
spellchecker-class | none | The name of a class that implements org.exoplatform.services.jcr.impl.core.query.lucene.SpellChecker. | 1.9
spellchecker-more-popular | true | If set to true, the spellchecker returns only suggestions that are as frequent or more frequent than the checked word. If set to false, the spellchecker returns null if the checked word exists in the dictionary, or the closest suggestion otherwise. | 1.10
spellchecker-min-distance | 0.55f | Minimum distance between the checked word and a proposed suggestion. | 1.10
errorlog-size | 50 (Kb) | The default size of the error log file in Kb. | 1.9
upgrade-index | false | Allows JCR to convert an existing index into the new format. It is also possible to set this property via a system property, for example: -Dupgrade-index=true. Indexes created before JCR 1.12 will not run with JCR 1.12, so you have to run an automatic migration: start JCR with -Dupgrade-index=true. The old index format is then converted into the new index format, and the new format is used from then on. On the next start, you don't need this option anymore. The old index is replaced and a back conversion is not possible, so take a backup of the index beforehand. (Only for migrations from JCR 1.9 and later.) | 1.12
analyzer | org.apache.lucene.analysis.standard.StandardAnalyzer | Class name of a Lucene analyzer to use for fulltext indexing of text. | 1.12

Note

The maximum number of clauses permitted per BooleanQuery can be changed via the system property org.apache.lucene.maxClauseCount. The default value of this parameter is Integer.MAX_VALUE.
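As a minimal sketch, the system property can also be set programmatically before the search index starts (the value 8192 below is only an illustration):

```java
// Sketch: raising the Lucene BooleanQuery clause limit via the system
// property mentioned above; it must be set before the search index starts.
public class MaxClauseConfig {
    static String raiseLimit(String maxClauses) {
        System.setProperty("org.apache.lucene.maxClauseCount", maxClauses);
        return System.getProperty("org.apache.lucene.maxClauseCount");
    }

    public static void main(String[] args) {
        System.out.println(raiseLimit("8192")); // prints 8192
    }
}
```

In practice, the property is usually passed on the JVM command line instead, e.g. -Dorg.apache.lucene.maxClauseCount=8192.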

By default, eXo JCR uses the Lucene StandardAnalyzer to index content. This analyzer applies some standard filters in the method that analyzes the content:

public TokenStream tokenStream(String fieldName, Reader reader) {
    StandardTokenizer tokenStream = new StandardTokenizer(reader, replaceInvalidAcronym);
    tokenStream.setMaxTokenLength(maxTokenLength);
    TokenStream result = new StandardFilter(tokenStream);
    result = new LowerCaseFilter(result);
    result = new StopFilter(result, stopSet);
    return result;
}

For specific cases, you may wish to use additional filters like ISOLatin1AccentFilter, which replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalents.

In order to use a different filter, you have to create a new analyzer and a new search index that uses the analyzer, then package them in a jar which is deployed with your application.
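As a sketch, such a custom analyzer could then be wired in through the analyzer parameter listed in Table 1.2 (the class name org.example.search.AccentInsensitiveAnalyzer and the index path below are illustrative assumptions, not real eXo artifacts):

```xml
<!-- illustrative sketch: plugging a custom analyzer into the search index;
     the analyzer class name and index path are hypothetical examples -->
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
  <properties>
    <property name="index-dir" value="target/index"/>
    <property name="analyzer" value="org.example.search.AccentInsensitiveAnalyzer"/>
  </properties>
</query-handler>
```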

You may also add a condition to the index rule and have multiple rules with the same nodeType. The first index rule that matches applies and all remaining ones are ignored:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <index-rule nodeType="nt:unstructured"
              boost="2.0"
              condition="@priority = 'high'">
    <property>Text</property>
  </index-rule>
  <index-rule nodeType="nt:unstructured">
    <property>Text</property>
  </index-rule>
</configuration>

In the above example, the first rule only applies if the nt:unstructured node has a priority property with the value 'high'. The condition syntax supports only the equals operator and a string literal.

You may also refer to properties in the condition that are not on the current node:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <index-rule nodeType="nt:unstructured"
              boost="2.0"
              condition="ancestor::*/@priority = 'high'">
    <property>Text</property>
  </index-rule>
  <index-rule nodeType="nt:unstructured"
              boost="0.5"
              condition="parent::foo/@priority = 'low'">
    <property>Text</property>
  </index-rule>
  <index-rule nodeType="nt:unstructured"
              boost="1.5"
              condition="bar/@priority = 'medium'">
    <property>Text</property>
  </index-rule>
  <index-rule nodeType="nt:unstructured">
    <property>Text</property>
  </index-rule>
</configuration>

The indexing configuration also allows you to specify the node type in the condition. Please note, however, that the type match must be exact: it does not consider subtypes of the specified node type.

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <index-rule nodeType="nt:unstructured"
              boost="2.0"
              condition="element(*, nt:unstructured)/@priority = 'high'">
    <property>Text</property>
  </index-rule>
</configuration>

Sometimes it is useful to include the contents of descendant nodes in a single node, to make it easier to search on content that is scattered across multiple nodes.

JCR allows you to define indexed aggregates, based on relative path patterns and primary node types.

The following example creates an indexed aggregate on nt:file that includes the content of the jcr:content node:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:jcr="http://www.jcp.org/jcr/1.0"
               xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <aggregate primaryType="nt:file">
    <include>jcr:content</include>
  </aggregate>
</configuration>

You can also restrict the included nodes to a certain type:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:jcr="http://www.jcp.org/jcr/1.0"
               xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <aggregate primaryType="nt:file">
    <include primaryType="nt:resource">jcr:content</include>
  </aggregate>
</configuration>

You may also use the * to match all child nodes:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:jcr="http://www.jcp.org/jcr/1.0"
               xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <aggregate primaryType="nt:file">
    <include primaryType="nt:resource">*</include>
  </aggregate>
</configuration>

If you wish to include nodes up to a certain depth below the current node, you can add multiple include elements. E.g. the nt:file node may contain a complete XML document under jcr:content:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.0.dtd">
<configuration xmlns:jcr="http://www.jcp.org/jcr/1.0"
               xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
  <aggregate primaryType="nt:file">
    <include>*</include>
    <include>*/*</include>
    <include>*/*/*</include>
  </aggregate>
</configuration>

When using analyzers, you may encounter unexpected behavior when searching within a property compared to searching within the node scope. The reason is that the node scope always uses the global analyzer.

Let's suppose that the property "mytext" contains the text "testing my analyzers" and that you haven't configured any analyzers for the property "mytext" (and have not changed the default analyzer in SearchIndex).

If your query is for example:

xpath = "//*[jcr:contains(mytext,'analyzer')]"
        

With the default analyzer, this XPath query does not return a hit on the node with the property above.

Also a search on the node scope

xpath = "//*[jcr:contains(.,'analyzer')]"

won't give a hit either. Note that you can only set specific analyzers on a node property; node scope indexing/analyzing is always done with the analyzer defined globally in the SearchIndex element.

Now, if you change the analyzer used to index the "mytext" property above to

<analyzer class="org.apache.lucene.analysis.de.GermanAnalyzer">
     <property>mytext</property>
</analyzer>

and you do the same search again, then for

xpath = "//*[jcr:contains(mytext,'analyzer')]"

you would get a hit because of the word stemming (analyzers - analyzer).

The other search,

xpath = "//*[jcr:contains(.,'analyzer')]"
        

still would not give a result, since the node scope is indexed with the global analyzer, which in this case does not take into account any word stemming.

In conclusion, be aware that when using analyzers for specific properties, you might find a hit in a property for some search text, yet not find a hit with the same search text in the node scope of the property.

eXo JCR supports some advanced features which are not specified in JSR 170.

eXo JCR allows using a persister to store the configuration. In this section, you will learn how to use and configure the eXo JCR persister.

On startup, the RepositoryServiceConfiguration component checks if a configuration persister was configured. If so, it uses the provided ConfigurationPersister implementation class to instantiate the persister object.

Configuration with persister:

<component>
    <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key>
    <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type>
    <init-params>
      <value-param>
        <name>conf-path</name>
        <description>JCR configuration file</description>
        <value>/conf/standalone/exo-jcr-config.xml</value>
      </value-param>
      <properties-param>
        <name>working-conf</name>
        <description>working-conf</description>
        <property name="source-name" value="jdbcjcr" />
        <property name="dialect" value="mysql" />
        <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" />
      </properties-param>
    </init-params>
  </component>
  

Where:

source-name: the name of the data source in which the configuration is stored;
dialect: the database dialect;
persister-class-name: the name of the class implementing the ConfigurationPersister interface.

ConfigurationPersister interface:

public interface ConfigurationPersister
{

  /**
   * Initializes the persister.
   * Called by RepositoryServiceConfiguration on init.
   * @param params - persister configuration properties
   */
  void init(PropertiesParam params) throws RepositoryConfigurationException;

  /**
   * Reads the config data.
   * @return - config data stream
   */
  InputStream read() throws RepositoryConfigurationException;

  /**
   * Creates the table and writes the data.
   * @param confData - config data stream
   */
  void write(InputStream confData) throws RepositoryConfigurationException;

  /**
   * Tells if the config exists.
   * @return - true if a stored configuration exists
   */
  boolean hasConfig() throws RepositoryConfigurationException;
}

The JCR Core implementation contains a persister which stores the repository configuration in a relational database using JDBC calls: org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister.

The implementation will create and use the table JCR_CONFIG in the provided database.

However, developers can implement their own persister for particular use cases.

The eXo JCR persistent data container can work in two configuration modes: multi-database (each workspace is persisted in its own database) and single-database (all workspaces are persisted in one database).

The data container uses the JDBC driver to communicate with the actual database software, i.e. any JDBC-enabled data storage can be used with the eXo JCR implementation.

Currently, the data container is tested with the database configurations listed in the tables below.

Each database software supports the ANSI SQL standard but also has its own specifics, so each database has its own configuration in eXo JCR, set via the database dialect parameter. If you need a more detailed database configuration, you can edit the metadata SQL script files.

You can obtain the SQL scripts from the jar file exo.jcr.component.core-XXX.XXX.jar (conf/storage/). They can also be found on GitHub here.

The next two tables show the correspondence between scripts and databases.

Table 1.3. Single-database
MySQL DB jcr-sjdbc.mysql.sql
MySQL DB with utf-8 jcr-sjdbc.mysql-utf8.sql
MySQL DB with MyISAM* jcr-sjdbc.mysql-myisam.sql
MySQL DB with MyISAM and utf-8* jcr-sjdbc.mysql-myisam-utf8.sql
MySQL DB with NDB engine jcr-sjdbc.mysql-ndb.sql
MySQL DB with NDB engine and utf-8 jcr-sjdbc.mysql-ndb-utf8.sql
PostgreSQL and Postgre Plus jcr-sjdbc.pqsql.sql
Oracle DB jcr-sjdbc.ora.sql
DB2 jcr-sjdbc.db2.sql
MS SQL Server jcr-sjdbc.mssql.sql
Sybase jcr-sjdbc.sybase.sql
HSQLDB jcr-sjdbc.sql
H2 jcr-sjdbc.h2.sql
Table 1.4. Multi-database
MySQL DB jcr-mjdbc.mysql.sql
MySQL DB with utf-8 jcr-mjdbc.mysql-utf8.sql
MySQL DB with MyISAM* jcr-mjdbc.mysql-myisam.sql
MySQL DB with MyISAM and utf-8* jcr-mjdbc.mysql-myisam-utf8.sql
MySQL DB with NDB engine jcr-mjdbc.mysql-ndb.sql
MySQL DB with NDB engine and utf-8 jcr-mjdbc.mysql-ndb-utf8.sql
PostgreSQL and Postgre Plus jcr-mjdbc.pqsql.sql
Oracle DB jcr-mjdbc.ora.sql
DB2 jcr-mjdbc.db2.sql
MS SQL Server jcr-mjdbc.mssql.sql
Sybase jcr-mjdbc.sybase.sql
HSQLDB jcr-mjdbc.sql
H2 jcr-mjdbc.h2.sql

In case a non-ANSI node name is used, it's necessary to use a database with multilanguage support. Some JDBC drivers need additional parameters to establish a Unicode-friendly connection. E.g. under MySQL, it's necessary to add an additional parameter for the JDBC driver at the end of the JDBC URL. For instance: jdbc:mysql://exoua.dnsalias.net/portal?characterEncoding=utf8

There are preconfigured configuration files for HSQLDB. Look for these files in the /conf/portal and /conf/standalone folders of the jar file exo.jcr.component.core-XXX.XXX.jar or in the source distribution of the eXo JCR implementation.

By default, the configuration files are located in service jars: /conf/portal/configuration.xml (eXo services including JCR Repository Service) and exo-jcr-config.xml (repositories configuration). In the GateIn product, JCR is configured in the portal web application: portal/WEB-INF/conf/jcr/jcr-configuration.xml (JCR Repository Service and related services) and repository-configuration.xml (repositories configuration).

Read more about Repository configuration.

  • Statistics are collected automatically starting from DB2 version 9; however, you need to launch statistics collection manually during the very first start, otherwise it could take very long. Run the 'RUNSTATS' command

    RUNSTATS ON TABLE <scheme>.<table> WITH DISTRIBUTION AND INDEXES ALL

    for JCR_SITEM (or JCR_MITEM) and JCR_SVALUE (or JCR_MVALUE) tables.

  • Oracle DB automatically collects statistics to optimize query performance, but you can manually run the 'ANALYZE' command to start collecting statistics immediately, which may improve performance. For example:

    ANALYZE TABLE JCR_SITEM COMPUTE STATISTICS
    ANALYZE TABLE JCR_SVALUE COMPUTE STATISTICS
    ANALYZE TABLE JCR_SREF COMPUTE STATISTICS
    ANALYZE INDEX JCR_PK_SITEM COMPUTE STATISTICS
    ANALYZE INDEX JCR_IDX_SITEM_PARENT_FK COMPUTE STATISTICS
    ANALYZE INDEX JCR_IDX_SITEM_PARENT COMPUTE STATISTICS
    ANALYZE INDEX JCR_IDX_SITEM_PARENT_NAME COMPUTE STATISTICS
    ANALYZE INDEX JCR_IDX_SITEM_PARENT_ID COMPUTE STATISTICS
    ANALYZE INDEX JCR_PK_SVALUE COMPUTE STATISTICS
    ANALYZE INDEX JCR_IDX_SVALUE_PROPERTY COMPUTE STATISTICS
    ANALYZE INDEX JCR_PK_SREF COMPUTE STATISTICS
    ANALYZE INDEX JCR_IDX_SREF_PROPERTY COMPUTE STATISTICS
    ANALYZE INDEX JCR_PK_SCONTAINER COMPUTE STATISTICS

The isolated-database configuration allows you to configure a single database for the repository but separate database tables for each workspace. The first step is to configure the data container in the org.exoplatform.services.naming.InitialContextInitializer service. It's the JNDI context initializer which registers (binds) naming resources (DataSources) for data containers.

For example:

 <external-component-plugins>
    <target-component>org.exoplatform.services.naming.InitialContextInitializer</target-component>
    <component-plugin>
      <name>bind.datasource</name>
      <set-method>addPlugin</set-method>
      <type>org.exoplatform.services.naming.BindReferencePlugin</type>
      <init-params>
        <value-param>
          <name>bind-name</name>
          <value>jdbcjcr</value>
        </value-param>
        <value-param>
          <name>class-name</name>
          <value>javax.sql.DataSource</value>
        </value-param>
        <value-param>
          <name>factory</name>
          <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
        </value-param>
          <properties-param>
            <name>ref-addresses</name>
            <description>ref-addresses</description>
            <property name="driverClassName" value="org.postgresql.Driver"/>
            <property name="url" value="jdbc:postgresql://exoua.dnsalias.net/portal"/>
            <property name="username" value="exoadmin"/>
            <property name="password" value="exo12321"/>
          </properties-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>

We configure the database connection parameters: the JDBC driver class (driverClassName), the database URL (url) and the credentials (username and password).

When the data container configuration is done, we can configure the repository service. Each workspace will be configured for the same data container.

For example:

<workspaces>
   <workspace name="ws">
      <!-- for system storage -->
      <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
         <properties>
            <property name="source-name" value="jdbcjcr" />
            <property name="db-structure-type" value="isolated" />
            ...
         </properties>
         ...
      </container>
      ...
   </workspace>

   <workspace name="ws1">
      <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
         <properties>
            <property name="source-name" value="jdbcjcr" />
            <property name="db-structure-type" value="isolated" />
            ...
         </properties>
         ...
      </container>
      ...
   </workspace>
</workspaces>

In this way, we have configured two workspaces which will be persisted in different database tables.

Note

Starting from v.1.9, repository configuration parameters support human-readable value formats (e.g. 200K for 200 kilobytes, 30m for 30 minutes, etc.).
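As an illustration of the convention described in the note above (this is a sketch, not eXo's actual parser), such human-readable size values can be interpreted like this:

```java
// Illustrative sketch (not eXo's actual parser) of interpreting
// human-readable size values such as "200K" -> 204800 bytes.
public class ReadableSize {
    static long parseBytes(String value) {
        char unit = Character.toUpperCase(value.charAt(value.length() - 1));
        if (Character.isDigit(unit)) {
            return Long.parseLong(value); // plain number, no unit suffix
        }
        long number = Long.parseLong(value.substring(0, value.length() - 1));
        switch (unit) {
            case 'K': return number * 1024L;
            case 'M': return number * 1024L * 1024L;
            case 'G': return number * 1024L * 1024L * 1024L;
            default: throw new IllegalArgumentException("Unknown unit: " + unit);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseBytes("200K")); // 204800
    }
}
```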

You need to configure each workspace in a repository. You may put each one on a different remote server if you need to.

First of all, configure the data containers in the org.exoplatform.services.naming.InitialContextInitializer service. It's the JNDI context initializer which registers (binds) naming resources (DataSources) for data containers.

For example:

<component>
   <key>org.exoplatform.services.naming.InitialContextInitializer</key>
   <type>org.exoplatform.services.naming.InitialContextInitializer</type>
   <component-plugins>
      <component-plugin>
         <name>bind.datasource</name>
         <set-method>addPlugin</set-method>
         <type>org.exoplatform.services.naming.BindReferencePlugin</type>
         <init-params>
            <value-param>
               <name>bind-name</name>
               <value>jdbcjcr</value>
            </value-param>
            <value-param>
               <name>class-name</name>
               <value>javax.sql.DataSource</value>
            </value-param>
            <value-param>
               <name>factory</name>
               <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
            </value-param>
            <properties-param>
               <name>ref-addresses</name>
               <description>ref-addresses</description>
               <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
               <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/>
               <property name="username" value="sa"/>
               <property name="password" value=""/>
            </properties-param>
         </init-params>
      </component-plugin>
      <component-plugin>
         <name>bind.datasource</name>
         <set-method>addPlugin</set-method>
         <type>org.exoplatform.services.naming.BindReferencePlugin</type>
         <init-params>
            <value-param>
               <name>bind-name</name>
               <value>jdbcjcr1</value>
            </value-param>
            <value-param>
               <name>class-name</name>
               <value>javax.sql.DataSource</value>
            </value-param>
            <value-param>
               <name>factory</name>
               <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
            </value-param>
            <properties-param>
               <name>ref-addresses</name>
               <description>ref-addresses</description>
               <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
               <property name="url" value="jdbc:mysql://exoua.dnsalias.net/jcr"/>
               <property name="username" value="exoadmin"/>
               <property name="password" value="exo12321"/>
               <property name="maxActive" value="50"/>
               <property name="maxIdle" value="5"/>
               <property name="initialSize" value="5"/>
            </properties-param>
         </init-params>
      </component-plugin>
   </component-plugins>
</component>
                    

When the data container configuration is done, we can configure the repository service. Each workspace will be configured for its own data container.

For example:

<workspaces>
   <workspace name="ws">
      <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
         <properties>
            <property name="source-name" value="jdbcjcr"/>
            <property name="db-structure-type" value="multi"/>
            ...
         </properties>
      </container>
      ...
   </workspace>

   <workspace name="ws1">
      <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
         <properties>
            <property name="source-name" value="jdbcjcr1"/>
            <property name="db-structure-type" value="multi"/>
            ...
         </properties>
      </container>
      ...
   </workspace>
</workspaces>                                     

In this way, we have configured two workspaces which will be persisted in two different databases (ws in HSQLDB, ws1 in MySQL).

It's simpler to configure a single-database data container: we only have to configure one naming resource.

For example:

<external-component-plugins>
    <target-component>org.exoplatform.services.naming.InitialContextInitializer</target-component>
    <component-plugin>
        <name>bind.datasource</name>
        <set-method>addPlugin</set-method>
        <type>org.exoplatform.services.naming.BindReferencePlugin</type>
        <init-params>
          <value-param>
            <name>bind-name</name>
            <value>jdbcjcr</value>
          </value-param>
          <value-param>
            <name>class-name</name>
            <value>javax.sql.DataSource</value>
          </value-param>
          <value-param>
            <name>factory</name>
            <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
          </value-param>
          <properties-param>
            <name>ref-addresses</name>
            <description>ref-addresses</description>
            <property name="driverClassName" value="org.postgresql.Driver"/>
            <property name="url" value="jdbc:postgresql://exoua.dnsalias.net/portal"/>
            <property name="username" value="exoadmin"/>
            <property name="password" value="exo12321"/>
            <property name="maxActive" value="50"/>
            <property name="maxIdle" value="5"/>
            <property name="initialSize" value="5"/>
          </properties-param>
        </init-params>
    </component-plugin>
  </external-component-plugins>
  

Then configure the repository workspaces in the repositories configuration to use this single database.

For example:

<workspaces>
  <workspace name="ws">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
      <properties>
        <property name="source-name" value="jdbcjcr"/>
        <property name="db-structure-type" value="single" />
        ...
      </properties>
    </container>
    ...
  </workspace>

  <workspace name="ws1">
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
      <properties>
        <property name="source-name" value="jdbcjcr"/>
        <property name="db-structure-type" value="single" />
        ...
      </properties>
    </container>
    ...
  </workspace>
</workspaces>

In this way, we have configured two workspaces which will be persisted in one database (PostgreSQL).

The current configuration of eXo JCR uses the Apache DBCP connection pool (org.apache.commons.dbcp.BasicDataSourceFactory). If you set a large value for the maxActive parameter in configuration.xml, the pool (i.e. the JDBC driver) may use many TCP/IP ports on the client machine. As a result, the data container can throw exceptions like "Address already in use". To solve this problem, configure the client machine's networking software to use shorter timeouts for opened TCP/IP ports.

Microsoft Windows has the MaxUserPort and TcpTimedWaitDelay registry keys under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. By default these keys are unset; set them to values like these:

  • "TcpTimedWaitDelay"=dword:0000001e, sets TIME_WAIT parameter to 30 seconds, default is 240.

  • "MaxUserPort"=dword:00001b58, sets the maximum of open ports to 7000 or higher, default is 5000.

A sample registry file is below:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:00001b58
"TcpTimedWaitDelay"=dword:0000001e

By default, JCR Values are stored in the Workspace Data container along with the JCR structure (i.e. Nodes and Properties). eXo JCR offers the additional option of storing JCR Values separately from the Workspace Data container, which can be extremely helpful for keeping Binary Large Objects (BLOBs), for example.

Value storage configuration is a part of Repository configuration, find more details there.

Tree-based storage is recommended for most cases. If you run an application on Amazon EC2, the S3 option may be interesting for your architecture. Simple 'flat' storage is fast at creating/deleting values and may be a good compromise for small storages.

Tree File Value Storage holds Values in a tree-like FileSystem structure. The path property points to the root directory where the files are stored.

This is the recommended type of external storage; it can contain a large amount of files, limited only by disk/volume free space.

A disadvantage is slower Value deletion, due to the removal of unused tree nodes.

<value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
  <properties>
    <property name="path" value="data/values"/>
  </properties>
  <filters>
    <filter property-type="Binary" min-value-size="1M"/>
  </filters>
</value-storage>

Where:

id: The value storage unique identifier, used for linking with properties stored in workspace container.
path: A location where value files will be stored.

Each file value storage can have filter(s) for incoming values. A filter can match values by property type (property-type), property name (property-name), ancestor path (ancestor-path) and/or size of the stored value (min-value-size, in bytes). In the code sample, we use a filter with property-type and min-value-size only, i.e. a storage for binary values larger than 1MB. It's recommended to store properties with large values in a file value storage only.

Another example shows a value storage with a different location for large files (a filter with a min-value-size of 20M). A value storage uses ORed logic during filter selection: the first filter in the list is asked first and, if it does not match, the next one is called, etc. Here, a value larger than 20MB matches the first filter's min-value-size and will be stored under the path "data/20Mvalues"; all others go to "data/values".

<value-storages>
  <value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
    <properties>
      <property name="path" value="data/20Mvalues"/>
    </properties>
    <filters>
      <filter property-type="Binary" min-value-size="20M"/>
    </filters>
  </value-storage>
  <value-storage id="Storage #2" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
    <properties>
      <property name="path" value="data/values"/>
    </properties>
    <filters>
      <filter property-type="Binary" min-value-size="1M"/>
    </filters>
  </value-storage>
</value-storages>
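The first-match filter selection described above can be sketched in plain Java. The class, record and method names here are illustrative only, not the actual eXo JCR plugin API:

```java
import java.util.List;

// Illustrative sketch of first-match filter selection across value storages.
public class FilterSelectionDemo {

    // A simplified filter matching by property type and minimum value size.
    record Filter(String propertyType, long minValueSize) {
        boolean matches(String type, long size) {
            return propertyType.equals(type) && size >= minValueSize;
        }
    }

    // A simplified value storage: a path plus its filter.
    record Storage(String path, Filter filter) {}

    // Return the path of the first storage whose filter matches, or null
    // if no filter matches (the value then stays in the workspace container).
    static String select(List<Storage> storages, String type, long size) {
        for (Storage s : storages) {
            if (s.filter().matches(type, size)) {
                return s.path();
            }
        }
        return null;
    }
}
```

With the two storages from the sample above, a 25MB binary value lands in "data/20Mvalues", a 5MB one in "data/values", and values under 1MB stay in the workspace container.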

eXo JCR supports a content-addressable storage (CAS) feature for storing values.

Content-addressable value storage stores unique content only once. Different properties (values) with the same content are stored as one data file shared between those values; in other words, the value content is shared across values in the storage and kept in a single physical file.

This decreases storage size for applications that handle potentially identical content.

If a property value changes, it is stored in an additional file; alternatively, if the new content already exists, the file is shared with the other values pointing to that content.

The storage calculates the value's content address each time the property changes; as a result, CAS write operations are much more expensive than in non-CAS storages.

Content address calculation is based on java.security.MessageDigest hash computation and has been tested with the MD5 and SHA1 algorithms.
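As a sketch of the idea, a content address can be computed as a hex-encoded MessageDigest hash of the value's bytes. The class and method names below are hypothetical; the real computation is internal to the CAS storage:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch of content-address computation for a CAS value storage.
public class ContentAddressDemo {

    // Compute a hex content address of the value's bytes with the given
    // digest algorithm (MD5 and SHA1 are the tested options).
    static String address(byte[] content, String algo) {
        try {
            MessageDigest md = MessageDigest.getInstance(algo);
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(content)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalArgumentException("Unsupported digest-algo: " + algo, e);
        }
    }

    public static void main(String[] args) {
        byte[] a = "same content".getBytes(StandardCharsets.UTF_8);
        byte[] b = "same content".getBytes(StandardCharsets.UTF_8);
        // Identical content yields an identical address, so the storage can
        // keep a single shared file for both values.
        System.out.println(address(a, "MD5").equals(address(b, "MD5")));
    }
}
```

Because equal content always hashes to the same address, two properties holding the same bytes resolve to one physical file, which is exactly the deduplication effect described above.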

CAS support can be enabled for Tree and Simple File Value Storage types.

To enable CAS support, configure it in the JCR repositories configuration as for other value storages.

<workspaces>
        <workspace name="ws">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr"/>
              <property name="dialect" value="oracle"/>
              <property name="multi-db" value="false"/>
              <property name="max-buffer-size" value="200k"/>
              <property name="swap-directory" value="target/temp/swap/ws"/>
            </properties>
            <value-storages>
<!-- CAS-enabled value storage is configured here -->
              <value-storage id="ws" class="org.exoplatform.services.jcr.impl.storage.value.fs.CASableTreeFileValueStorage">
                <properties>
                  <property name="path" value="target/temp/values/ws"/>
                  <property name="digest-algo" value="MD5"/>
                  <property name="vcas-type" value="org.exoplatform.services.jcr.impl.storage.value.cas.JDBCValueContentAddressStorageImpl"/>
                  <property name="jdbc-source-name" value="jdbcjcr"/>
                  <property name="jdbc-dialect" value="oracle"/>
                </properties>
                <filters>
                  <filter property-type="Binary"/>
                </filters>
              </value-storage>
            </value-storages>
          </container>
        </workspace>
</workspaces>

Properties:

digest-algo: Digest hash algorithm (MD5 and SHA1 were tested);
vcas-type: Value CAS internal data type; a JDBC-backed implementation is currently provided: org.exoplatform.services.jcr.impl.storage.value.cas.JDBCValueContentAddressStorageImpl;
jdbc-source-name: JDBCValueContentAddressStorageImpl-specific parameter, the data source used to save CAS metadata. It is simplest to use the same one as in the workspace container;
jdbc-dialect: JDBCValueContentAddressStorageImpl-specific parameter, the database dialect. It is simplest to use the same one as in the workspace container.

Each JCR workspace has its own persistent storage to hold the workspace's item data. An eXo Content Repository can be configured to use one or more workspaces, which are logical units of the repository content. The physical data storage mechanism is configured using the mandatory element container. The container type is described in the class attribute, which must be the fully qualified name of an org.exoplatform.services.jcr.storage.WorkspaceDataContainer subclass, like:

<container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
  <properties>
    <property name="source-name" value="jdbcjcr1"/>
    <property name="dialect" value="hsqldb"/>
    <property name="multi-db" value="true"/>
    <property name="max-buffer-size" value="200K"/>
    <property name="swap-directory" value="target/temp/swap/ws"/>
    <property name="lazy-node-iterator-page-size" value="50"/>
    <property name="acl-bloomfilter-false-positive-probability" value="0.1d"/>
    <property name="acl-bloomfilter-elements-number" value="1000000"/>
    <property name="check-sns-new-connection" value="false"/>
    <property name="batch-size" value="1000"/>
  </properties>
</container>

Workspace Data Container specific parameters:

eXo JCR has a production-ready, RDB (JDBC) based Workspace Data Container.

JDBC Workspace Data Container specific parameters:

  • source-name: JDBC data source name, registered in JNDI by InitialContextInitializer (sourceName prior to v1.9). This property is mandatory.

  • dialect: Database dialect, one of "hsqldb", "h2", "mysql", "mysql-myisam", "mysql-utf8", "mysql-myisam-utf8", "pgsql", "pgsql-scs", "oracle", "oracle-oci", "mssql", "sybase", "derby", "db2", "db2v8". The default value is "auto".

  • multi-db: Enables the multi-database container if "true"; otherwise ("false") a single-database container is configured. Be aware that this property is deprecated; it is advised to use db-structure-type instead.

  • db-structure-type: Can be set to isolated, multi or single to select the corresponding data container configuration. This property is mandatory.

  • db-tablename-suffix: If db-structure-type is set to isolated, the tables used by the repository service are named as follows:

    • JCR_I${db-tablename-suffix} - for items

    • JCR_V${db-tablename-suffix} - for values

    • JCR_R${db-tablename-suffix} - for references

      db-tablename-suffix defaults to the workspace name, but can be set via configuration to any suitable value.

  • batch-size: The batch size. The default value is -1 (disabled).

  • use-sequence-for-order-number: Indicates whether or not a sequence must be used to manage the order number. The expected value is a boolean or "auto" (the default), in which case use-sequence is set automatically according to your database type.

    • It is enabled for H2, HSQLDB, PGSQL and ORACLE.

    • It is disabled for MSSQL, MYSQL, DB2 and SYBASE.
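The resolution of the "auto" value can be pictured as in the following sketch, based on the dialect lists above. The class name and the normalization of dialect variants (such as "oracle-oci" or "pgsql-scs") to their base name are assumptions; the real logic is internal to the workspace data container:

```java
import java.util.Set;

// Illustrative resolution of use-sequence-for-order-number.
public class SequenceAutoResolver {

    // Databases for which "auto" enables the sequence, per the lists above.
    private static final Set<String> SEQUENCE_CAPABLE =
            Set.of("h2", "hsqldb", "pgsql", "oracle");

    static boolean useSequence(String setting, String dialect) {
        if ("auto".equalsIgnoreCase(setting)) {
            // Normalize variants such as "oracle-oci" or "pgsql-scs"
            // to their base dialect name (an assumption of this sketch).
            String base = dialect.toLowerCase().split("-")[0];
            return SEQUENCE_CAPABLE.contains(base);
        }
        // An explicit boolean value simply wins.
        return Boolean.parseBoolean(setting);
    }
}
```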

A Workspace Data Container MAY support external storage for javax.jcr.Value (which can be the case for BLOB values, for example) using the optional element value-storages. The data container will read or write a value through the underlying value storage plugin if the filter criteria (see below) match the current property.

<value-storages>
  <value-storage id="Storage #1" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
    <properties>
      <property name="path" value="data/values"/>
    </properties>
    <filters>
     <filter property-type="Binary" min-value-size="1M"/><!-- values larger than 1MB -->
    </filters>
.........
</value-storages>

Where value-storage is a subclass of org.exoplatform.services.jcr.storage.value.ValueStoragePlugin and properties are optional plugin-specific parameters.

filters : Each file value storage can have the filter(s) for incoming values. If there are several filter criteria, they all have to match (AND-Condition).

A filter can match values by property type (property-type), property name (property-name), ancestor path (ancestor-path) and/or the size of values stored (min-value-size, e.g. 1M, 4.2G, 100 (bytes)).

In the code sample above, we use a filter with property-type and min-value-size only. That means the storage holds only binary values whose size is greater than 1MB.

It's recommended to store properties with large values in a file value storage only.
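The size notation accepted by min-value-size ("100" = bytes, "200K", "1M", "4.2G") can be parsed roughly as in this sketch. The class name is hypothetical, and whether the multipliers are 1000- or 1024-based is an assumption (1024 is used here):

```java
// Illustrative parser for the min-value-size filter notation.
public class SizeFilterParser {

    static long parseSize(String value) {
        String s = value.trim().toUpperCase();
        char unit = s.charAt(s.length() - 1);
        long multiplier;
        switch (unit) {
            case 'K': multiplier = 1024L; break;
            case 'M': multiplier = 1024L * 1024; break;
            case 'G': multiplier = 1024L * 1024 * 1024; break;
            default:  return Long.parseLong(s); // plain number of bytes
        }
        // Fractional sizes such as "4.2G" are allowed by the notation.
        double number = Double.parseDouble(s.substring(0, s.length() - 1));
        return (long) (number * multiplier);
    }
}
```

For example, "1M" parses to 1048576 bytes under the 1024-based assumption, so a 2MB binary property would match a min-value-size="1M" filter while a 100KB one would not.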

Starting from version 1.9, the JCR Service supports creating REST services with Groovy scripts.

The feature is based on the RESTful framework and uses the ResourceContainer concept.

Scripts should extend ResourceContainer and should be stored in JCR as a node of type exo:groovyResourceContainer.

For a detailed step-by-step REST service implementation, see Create REST service step by step.

The following component configuration enables the Groovy services loader:

<component>
  <type>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader</type>
  <init-params>
    <object-param>
      <name>observation.config</name>
      <object type="org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader$ObservationListenerConfiguration">
        <field name="repository">
          <string>repository</string>
        </field>
        <field name="workspaces">
          <collection type="java.util.ArrayList">
            <value>
              <string>collaboration</string>
            </value>
          </collection>
        </field>
      </object>
    </object-param>
  </init-params>
</component>

To deploy eXo JCR to JBoss, do the following steps:

  1. Download the latest version of eXo JCR .ear file distribution.

  2. Copy <jcr.ear> into <%jboss_home%/server/default/deploy>

  3. Put exo-configuration.xml at the root: <%jboss_home%/exo-configuration.xml>

  4. Configure JAAS by inserting the XML fragment shown below into <%jboss_home%/server/default/conf/login-config.xml>:

    <application-policy name="exo-domain">
       <authentication>
          <login-module code="org.exoplatform.services.security.j2ee.JbossLoginModule" flag="required"></login-module>
       </authentication>
    </application-policy>
  5. Ensure that you use the JBossTS Transaction Service and the Infinispan Transaction Manager. Your exo-configuration.xml must contain the following parts:

    <component>
       <key>org.infinispan.transaction.lookup.TransactionManagerLookup</key>
       <type>org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup</type>
    </component>
       
    <component>
      <key>org.exoplatform.services.transaction.TransactionService</key>
      <type>org.exoplatform.services.transaction.infinispan.JBossTransactionsService</type>
      <init-params>
        <value-param>
          <name>timeout</name>
          <value>3000</value>
        </value-param>
      </init-params>   
    </component>
  6. Start server:

    • bin/run.sh for Unix

    • bin/run.bat for Windows

  7. Try accessing http://localhost:8080/browser with root/exo as login/password. If you have done everything right, you will get access to the repository browser.

  • To manually configure repository, create a new configuration file (e.g., exo-jcr-configuration.xml). For details, see JCR Configuration. Your configuration must look like:

    <repository-service default-repository="repository1">
       <repositories>
          <repository name="repository1" system-workspace="ws1" default-workspace="ws1">
             <security-domain>exo-domain</security-domain>
             <access-control>optional</access-control>
             <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
             <workspaces>
                <workspace name="ws1">
                   <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
                      <properties>
                         <property name="source-name" value="jdbcjcr" />
                         <property name="dialect" value="oracle" />
                         <property name="multi-db" value="false" />
                         <property name="update-storage" value="false" />
                         <property name="max-buffer-size" value="200k" />
                         <property name="swap-directory" value="../temp/swap/production" />
                      </properties>
                      <value-storages>
                         see "Value storage configuration" part.
                      </value-storages>
                   </container>
                   <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
                      <properties>
                         <property name="root-nodetype" value="nt:unstructured" />
                      </properties>
                   </initializer>
                   <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.infinispan.ISPNCacheWorkspaceStorageCache">
                         see  "Cache configuration" part.
                   </cache>
                   <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
                      see  "Indexer configuration" part.
                   </query-handler>
                   <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
                      see  "Lock Manager configuration" part.
                   </lock-manager>
                </workspace>
                <workspace name="ws2">
                            ...
                </workspace>
                <workspace name="wsN">
                            ...
                </workspace>
             </workspaces>
          </repository>
       </repositories>
    </repository-service> 
  • Then, update RepositoryServiceConfiguration configuration in exo-configuration.xml to use this file:

    <component>
       <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key>
       <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type>
       <init-params>
          <value-param>
             <name>conf-path</name>
             <description>JCR configuration file</description>
             <value>exo-jcr-configuration.xml</value>
          </value-param>
       </init-params>
    </component>

The configuration of every workspace in the repository must contain the following parts:

This section shows you how to use and configure Infinispan in a clustered environment. You will also learn how to use the template-based configuration offered by eXo JCR for Infinispan instances.

eXo JCR implementation is shipped with ready-to-use Infinispan configuration templates for JCR's components. They are located in the application package inside the folder /conf/portal/cluster.

Data container template is "infinispan-data.xml":

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">

    <global>
      <evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
        <properties>
          <property name="threadNamePrefix" value="EvictionThread"/>
        </properties>
      </evictionScheduledExecutor>

      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>

      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" clusterName="${infinispan-cluster-name}" distributedSyncTimeout="20000">
        <properties>
          <property name="configurationFile" value="${jgroups-configuration}"/>
        </properties>
      </transport>
    </global>

    <default>
      <clustering mode="replication">
        <stateTransfer timeout="20000" fetchInMemoryState="false" />
        <sync replTimeout="20000"/>
      </clustering>

      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>
      <transaction transactionManagerLookupClass="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <eviction strategy="LIRS" threadPolicy="DEFAULT" maxEntries="1000000"/>
      <expiration wakeUpInterval="5000"/>
   </default>
</infinispan>

The lock manager template is "infinispan-lock.xml":

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">

    <global>
      <evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
        <properties>
          <property name="threadNamePrefix" value="EvictionThread"/>
        </properties>
      </evictionScheduledExecutor>

      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>

      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" clusterName="${infinispan-cluster-name}" distributedSyncTimeout="20000">
        <properties>
          <property name="configurationFile" value="${jgroups-configuration}"/>
        </properties>
      </transport>
    </global>

    <default>
      <clustering mode="replication">
        <stateTransfer timeout="20000" fetchInMemoryState="false" />
        <sync replTimeout="20000"/>
      </clustering>

      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>
      <transaction transactionManagerLookupClass="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <eviction strategy="NONE"/>

      <loaders passivation="false" shared="true" preload="true">
        <store class="org.exoplatform.services.jcr.infinispan.JdbcStringBasedCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
          <properties>
             <property name="stringsTableNamePrefix" value="${infinispan-cl-cache.jdbc.table.name}"/>
             <property name="idColumnName" value="${infinispan-cl-cache.jdbc.id.column}"/>
             <property name="dataColumnName" value="${infinispan-cl-cache.jdbc.data.column}"/>
             <property name="timestampColumnName" value="${infinispan-cl-cache.jdbc.timestamp.column}"/>
             <property name="idColumnType" value="${infinispan-cl-cache.jdbc.id.type}"/>
             <property name="dataColumnType" value="${infinispan-cl-cache.jdbc.data.type}"/>
             <property name="timestampColumnType" value="${infinispan-cl-cache.jdbc.timestamp.type}"/>
             <property name="dropTableOnExit" value="${infinispan-cl-cache.jdbc.table.drop}"/>
             <property name="createTableOnStart" value="${infinispan-cl-cache.jdbc.table.create}"/>
             <property name="connectionFactoryClass" value="${infinispan-cl-cache.jdbc.connectionFactory}"/>
             <property name="datasourceJndiLocation" value="${infinispan-cl-cache.jdbc.datasource}"/>
          </properties>
          <async enabled="false"/>
        </store>
      </loaders>
   </default>

</infinispan>

Have a look at the indexer template, "infinispan-indexer.xml":

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">

    <global>
      <evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
        <properties>
          <property name="threadNamePrefix" value="EvictionThread"/>
        </properties>
      </evictionScheduledExecutor>

      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>

      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" clusterName="${infinispan-cluster-name}" distributedSyncTimeout="20000">
        <properties>
          <property name="configurationFile" value="${jgroups-configuration}"/>
        </properties>
      </transport>
    </global>

    <default>
      <clustering mode="replication">
        <stateTransfer timeout="20000" fetchInMemoryState="false" />
        <sync replTimeout="20000"/>
      </clustering>

      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>
      <transaction transactionManagerLookupClass="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <eviction strategy="NONE"/>

      <loaders passivation="false" shared="false" preload="false">
        <store class="${infinispan-cachestore-classname}" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
          <async enabled="false"/>
        </store>
      </loaders>
   </default>
</infinispan>

What does LockManager do?

In general, LockManager stores Lock objects: it can give out a Lock object or release it.

LockManager is also responsible for removing Locks that live too long; this interval may be configured with the "time-out" property.

JCR provides one basic implementation of LockManager:

ISPNCacheableLockManagerImpl stores Lock objects in Infinispan, so Locks are replicated and affect the whole cluster, not just a single node. In addition, Infinispan provides a JdbcStringBasedCacheStore, so Locks can be stored in the database.

You can enable LockManager by adding a lock-manager configuration to the workspace configuration.

For example:

<workspace name="ws">
   ...
   <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
      <properties>
         <property name="time-out" value="15m" />
         ...
      </properties>
   </lock-manager>               
   ...
</workspace>

Where the time-out parameter represents the interval for removing expired Locks. A LockRemover runs in a separate thread that periodically asks the LockManager to remove Locks that have lived too long.
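The time-out notation used above ("15m" = 15 minutes) can be interpreted as in this sketch. The supported units (s/m/h) and the class name are assumptions; the actual parsing is done by the LockManager configuration:

```java
// Illustrative parser for the LockManager time-out notation.
public class LockTimeoutParser {

    static long toMillis(String value) {
        String v = value.trim().toLowerCase();
        char unit = v.charAt(v.length() - 1);
        if (Character.isDigit(unit)) {
            // A plain number is treated as milliseconds in this sketch.
            return Long.parseLong(v);
        }
        long n = Long.parseLong(v.substring(0, v.length() - 1));
        switch (unit) {
            case 's': return n * 1_000L;       // seconds
            case 'm': return n * 60_000L;      // minutes
            case 'h': return n * 3_600_000L;   // hours
            default:  throw new IllegalArgumentException("Unknown unit: " + unit);
        }
    }
}
```

Under these assumptions, "15m" corresponds to 900000 milliseconds between lock expirations.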

The configuration uses the template Infinispan configuration for all LockManagers.

Lock template configuration

test-infinispan-lock.xml

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">

    <global>
      <evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
        <properties>
          <property name="threadNamePrefix" value="EvictionThread"/>
        </properties>
      </evictionScheduledExecutor>

      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>

      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" clusterName="${infinispan-cluster-name}" distributedSyncTimeout="20000">
        <properties>
          <property name="configurationFile" value="${jgroups-configuration}"/>
        </properties>
      </transport>
    </global>

    <default>
      <clustering mode="replication">
        <stateTransfer timeout="20000" fetchInMemoryState="false" />
        <sync replTimeout="20000"/>
      </clustering>

      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>
      <transaction transactionManagerLookupClass="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" syncRollbackPhase="true" syncCommitPhase="true"/>
      <jmxStatistics enabled="true"/>
      <eviction strategy="NONE"/>

      <loaders passivation="false" shared="true" preload="true">
        <store class="org.exoplatform.services.jcr.infinispan.JdbcStringBasedCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
          <properties>
             <property name="stringsTableNamePrefix" value="${infinispan-cl-cache.jdbc.table.name}"/>
             <property name="idColumnName" value="${infinispan-cl-cache.jdbc.id.column}"/>
             <property name="dataColumnName" value="${infinispan-cl-cache.jdbc.data.column}"/>
             <property name="timestampColumnName" value="${infinispan-cl-cache.jdbc.timestamp.column}"/>
             <property name="idColumnType" value="${infinispan-cl-cache.jdbc.id.type}"/>
             <property name="dataColumnType" value="${infinispan-cl-cache.jdbc.data.type}"/>
             <property name="timestampColumnType" value="${infinispan-cl-cache.jdbc.timestamp.type}"/>
             <property name="dropTableOnExit" value="${infinispan-cl-cache.jdbc.table.drop}"/>
             <property name="createTableOnStart" value="${infinispan-cl-cache.jdbc.table.create}"/>
             <property name="connectionFactoryClass" value="${infinispan-cl-cache.jdbc.connectionFactory}"/>
             <property name="datasourceJndiLocation" value="${infinispan-cl-cache.jdbc.datasource}"/>
          </properties>
          <async enabled="false"/>
        </store>
      </loaders>
   </default>

</infinispan>

As you see, all configurable parameters are template placeholders and will be replaced by the LockManager's configuration parameters:

<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
   <properties>
      <property name="time-out" value="15m" />
      <property name="infinispan-configuration" value="conf/standalone/cluster/test-infinispan-lock.xml" />
      <property name="jgroups-configuration" value="udp-mux.xml" />
      <property name="infinispan-cluster-name" value="JCR-cluster" />
      <property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
      <property name="infinispan-cl-cache.jdbc.table.create" value="true" />
      <property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
      <property name="infinispan-cl-cache.jdbc.id.column" value="id" />
      <property name="infinispan-cl-cache.jdbc.data.column" value="data" />
      <property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
      <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
      <property name="infinispan-cl-cache.jdbc.dialect" value="${dialect}" />
      <property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
   </properties>
</lock-manager>

Configuration requirements:

Our udp-mux.xml:

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.2.xsd">
    <UDP
         singleton_name="JCR-cluster" 
         mcast_port="${jgroups.udp.mcast_port:45588}"
         tos="8"
         ucast_recv_buf_size="20M"
         ucast_send_buf_size="640K"
         mcast_recv_buf_size="25M"
         mcast_send_buf_size="640K"
         loopback="true"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         ip_ttl="${jgroups.udp.ip_ttl:8}"
         enable_bundling="true"
         enable_diagnostics="true"
         thread_naming_pattern="cl"

         timer_type="old"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"

         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="Run"/>

    <PING timeout="2000"
            num_initial_members="20"/>
    <MERGE2 max_interval="30000"
            min_interval="10000"/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT timeout="1500"  />
    <BARRIER />
    <pbcast.NAKACK2 xmit_interval="1000"
                    xmit_table_num_rows="100"
                    xmit_table_msgs_per_row="2000"
                    xmit_table_max_compaction_time="30000"
                    max_msg_batch_size="500"
                    use_mcast_xmit="false"
                    discard_delivered_msgs="true"/>
    <UNICAST  xmit_interval="2000"
              xmit_table_num_rows="100"
              xmit_table_msgs_per_row="2000"
              xmit_table_max_compaction_time="60000"
              conn_expiry_timeout="60000"
              max_msg_batch_size="500"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"
                view_bundling="true"/>
    <UFC max_credits="2M"
         min_threshold="0.4"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K"  />
    <RSVP resend_interval="2000" timeout="10000"/>
    <pbcast.STATE_TRANSFER />
    <!-- pbcast.FLUSH  /-->
</config>

This section shows you how to configure the QueryHandler: indexing in a clustered environment.

JCR offers multiple indexing strategies, covering both standalone and clustered environments: they either use the advantages of running in a single JVM or do their best to use all resources available in the cluster. JCR uses the Lucene library as its underlying search and indexing engine, but Lucene has several limitations that greatly restrict the use of cluster advantages. That is why eXo JCR offers several strategies suitable for its own use cases: standalone, clustered with shared index, clustered with local indexes, and RSync-based. Each one has its pros and cons.

The standalone strategy provides a stack of indexes to achieve greater performance within a single JVM.

It combines an in-memory buffer index directory with delayed file-system flushing. This index is called "volatile" and is also consulted during searches. Under certain conditions, the volatile index is flushed to persistent storage (the file system) as a new index directory. This achieves great results for write operations.

The clustered implementation with local indexes is built upon the same strategy: a volatile in-memory index buffer with delayed flushing to persistent storage.

As this implementation is designed for a clustered environment, it has additional mechanisms for delivering data within the cluster. The actual text extraction jobs are done on the same node that performs the content operation (i.e. the write operation). The prepared "documents" (a Lucene term meaning a block of data ready for indexing) are replicated to the cluster nodes and processed by their local indexes, so each cluster instance has the same index content. When a new node joins the cluster, it has no initial index, so one must be created. There are several supported ways of doing this. The simplest is to copy the index manually, but this is not the intended approach. If no initial index is found, JCR uses automated scenarios, controlled via configuration (see the "index-recovery-mode" parameter), offering either full re-indexing from the database or copying the index from another cluster node.

Having multiple index copies on each instance can be costly, so a shared index can be used instead (see the diagram below).

This indexing strategy combines the advantages of an in-memory index with a shared persistent index, offering "near" real-time search capabilities: newly added content is accessible via search practically immediately. Each node indexes data in its own volatile (in-memory) index, but the persistent index is managed by a single "coordinator" node only. Each cluster instance has read access to the shared index to perform queries, combining those results with the ones found in its own in-memory index. Take into account that the shared folder must be configured in your system environment (e.g. a mounted NFS folder). In some extremely rare cases, this strategy can have slightly different volatile indexes on different cluster instances for a while; within a few seconds they become up to date.

A shared index is consistent and stable, but slow, while a local index is fast but requires much time for re-synchronization when a cluster node leaves the cluster for a short period of time. The RSync-based index solves this problem while keeping the speed advantages of the local file system.

This strategy is the same as the shared index, but it stores the actual data on the local file system instead of a shared one, periodically triggering a synchronization job that works at the level of file blocks, synchronizing only modified data. The diagram shows it in action. Only a single node in the cluster, the coordinator node, is responsible for modifying index files. When data is persisted, a corresponding command is fired, starting synchronization jobs all over the cluster.

See more about Search Configuration.

Configuration example:

<workspace name="ws">
   <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
      <properties>
         <property name="index-dir" value="shareddir/index/db1/ws" />
         <property name="changesfilter-class"
            value="org.exoplatform.services.jcr.impl.core.query.ispn.ISPNIndexChangesFilter" />
         <property name="infinispan-configuration" value="infinispan-indexer.xml" />
         <property name="jgroups-configuration" value="udp-mux.xml" />
         <property name="infinispan-cluster-name" value="JCR-cluster" />
         <property name="max-volatile-time" value="60" />
         <property name="rdbms-reindexing" value="true" />
         <property name="reindexing-page-size" value="1000" />
         <property name="index-recovery-mode" value="from-coordinator" />
         <property name="index-recovery-filter" value="org.exoplatform.services.jcr.impl.core.query.lucene.DocNumberRecoveryFilter" />
         <property name="indexing-thread-pool-size" value="16" />
      </properties>
   </query-handler>
</workspace>

Table 1.9. Config properties description

index-dir: Path to the index directory.
changesfilter-class: The FQN of the class indicating the policy used to manage the Lucene index changes. This class must extend org.exoplatform.services.jcr.impl.core.query.IndexerChangesFilter. It must be set in a cluster environment to define the clustering strategy to adopt. To use the shared indexes strategy, set it to org.exoplatform.services.jcr.impl.core.query.ispn.ISPNIndexChangesFilter. If you prefer the local indexes strategy, set it to org.exoplatform.services.jcr.impl.core.query.ispn.LocalIndexChangesFilter.
infinispan-configuration: Template of the Infinispan configuration for all query handlers in the repository (search, cache, locks).
jgroups-configuration: Path to the JGroups configuration. It should no longer be a JGroups stack definition but a normal JGroups configuration with the shared transport configured, done by simply setting the JGroups property singleton_name to a unique name (it must remain unique from one portal container to another). This file is pre-bundled with templates and is recommended for use.
infinispan-cluster-name: Cluster name (must be unique).
max-volatile-time: Maximum time to live for the volatile index.
rdbms-reindexing: Indicates whether the RDBMS re-indexing mechanism must be used. The default value is true.
reindexing-page-size: Maximum number of nodes which can be retrieved from storage for re-indexing purposes. The default value is 100.
index-recovery-mode: If set to from-indexing, a full indexing is automatically launched; if set to from-coordinator (the default behavior), the index is retrieved from the coordinator.
index-recovery-filter: Defines the implementation class or classes of RecoveryFilters, the index synchronization mechanism for the local index strategy.
async-reindexing: Controls the process of re-indexing on JCR startup. If the flag is set, indexing is launched asynchronously, without blocking the JCR. The default is "false".
indexing-thread-pool-size: Defines the total number of indexing threads.
max-volatile-size: The maximum volatile index size in bytes until it is written to disk. The default value is 1048576 (1MB).
indexing-load-batching-threshold-property: The number of properties above which the application decides to fetch all the properties of a node to be indexed by name using one single query instead of one query per property. The query used is the equivalent of getProperties(String namePattern). The default value is -1, which disables this feature. The expected value is an integer.
indexing-load-batching-threshold-node: The number of nodes to index within the same transaction above which the application decides to fetch all the properties of the remaining nodes to be indexed using one single query instead of one query per property plus a query fetching the list of properties. The query used is the equivalent of getProperties(). The default value is -1, which disables this feature. The expected value is an integer.
indexing-load-batching-threshold-dynamic: If indexing-load-batching-threshold-property and/or indexing-load-batching-threshold-node have been enabled, the thresholds can be updated automatically to better match the current performance of the database used. This happens if you set this parameter to true (the default value is false) and enable the JCR statistics. Based on the JCR statistics, the application sets the best possible values for the thresholds to get the best possible performance.
indexing-load-batching-threshold-ttl: If indexing-load-batching-threshold-property and/or indexing-load-batching-threshold-node, indexing-load-batching-threshold-dynamic and the JCR statistics have been enabled, the application regularly updates the thresholds if needed. This parameter defines the periodicity of the task that updates the thresholds. The default value is 5 minutes. The expected value is a time expressed in milliseconds.

Note

If you use PostgreSQL and the parameter rdbms-reindexing is set to true, the performance of the queries used while indexing can be improved by setting the parameter "enable_seqscan" to "off" or "default_statistics_target" to at least "50" in the configuration of your database. Then you need to restart the DB server and run ANALYZE on the JCR_SVALUE (or JCR_MVALUE) table.

Note

If you use DB2 and the parameter rdbms-reindexing is set to true, the performance of the queries used while indexing can be improved by gathering statistics on the tables, running "RUNSTATS ON TABLE <scheme>.<table> WITH DISTRIBUTION AND INDEXES ALL" for the JCR_SITEM (or JCR_MITEM) and JCR_SVALUE (or JCR_MVALUE) tables.

The configuration has much in common with the shared index; it just requires some additional parameters for the RSync options. If they are present, JCR switches from the shared to the RSync-based index. Here is an example configuration:

<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
   <properties>
      <property name="index-dir" value="/var/data/index/repository1/production" />
      <property name="changesfilter-class"
         value="org.exoplatform.services.jcr.impl.core.query.ispn.ISPNIndexChangesFilter" />
      <property name="infinispan-configuration" value="jar:/conf/portal/cluster/infinispan-indexer.xml" />
      <property name="jgroups-configuration" value="jar:/conf/portal/cluster/udp-mux.xml" />
      <property name="infinispan-cluster-name" value="JCR-cluster" />
      <property name="max-volatile-time" value="60" /> 
      <property name="rsync-entry-name" value="index" />
      <property name="rsync-entry-path" value="/var/data/index" />
      <property name="rsync-port" value="8085" />
      <property name="rsync-user" value="rsyncexo" />
      <property name="rsync-password" value="exo" />
   </properties>
</query-handler>

Let's start with authentication: "rsync-user" and "rsync-password". They are optional and can be skipped if the RSync server is configured to accept an anonymous identity. Before reviewing the other RSync index options, have a look at the RSync server configuration. Sample RSync server (rsyncd) configuration:

uid = nobody
gid = nobody
use chroot = no
port = 8085
log file = rsyncd.log
pid file = rsyncd.pid
[index]
        path = /var/data/index
        comment = indexes
        read only = true
        auth users = rsyncexo
        secrets file= rsyncd.secrets

This sample configuration shares the folder "/var/data/index" as an entry named "index". These parameters should match the corresponding properties in the JCR configuration: "rsync-entry-name", "rsync-entry-path" and "rsync-port" respectively. Notice: make sure "index-dir" is a descendant folder of the RSync shared folder and that these paths are the same on each cluster node.

In order to use the cluster-ready strategy based on local indexes, where each node has its own copy of the index on the local file system, the following configuration must be applied. The indexing directory must point to any folder on the local file system, and "changesfilter-class" must be set to "org.exoplatform.services.jcr.impl.core.query.ispn.LocalIndexChangesFilter".

<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
   <properties>
      <property name="index-dir" value="/mnt/nfs_drive/index/db1/ws" />
      <property name="changesfilter-class"
         value="org.exoplatform.services.jcr.impl.core.query.ispn.LocalIndexChangesFilter" />
      <property name="infinispan-configuration" value="infinispan-indexer.xml" />
      <property name="jgroups-configuration" value="udp-mux.xml" />
      <property name="infinispan-cluster-name" value="JCR-cluster" />
      <property name="max-volatile-time" value="60" />
      <property name="rdbms-reindexing" value="true" />
      <property name="reindexing-page-size" value="1000" />
      <property name="index-recovery-mode" value="from-coordinator" />
   </properties>
</query-handler>

A common use case for all cluster-ready applications is the hot joining and leaving of processing units. A node joining the cluster for the first time, or a node joining after some downtime, must be brought to a synchronized state. When dealing with shared value storages, databases and indexes, the cluster nodes are synchronized at any time. But it is an issue when the local index strategy is used. If a new node joins the cluster with no index, the index is retrieved or re-created. A node can also be restarted, in which case its index is not empty. Usually the existing index is assumed to be up to date, but it can be outdated. JCR offers a mechanism called RecoveryFilters that automatically retrieves the index for the joining node on startup. This feature is a set of filters that can be defined via the QueryHandler configuration:

<property name="index-recovery-filter" value="org.exoplatform.services.jcr.impl.core.query.lucene.DocNumberRecoveryFilter" />

The number of filters is not limited, so they can be combined:

<property name="index-recovery-filter" value="org.exoplatform.services.jcr.impl.core.query.lucene.DocNumberRecoveryFilter" />
<property name="index-recovery-filter" value="org.exoplatform.services.jcr.impl.core.query.lucene.SystemPropertyRecoveryFilter" />

If any one of them fires, the index is re-synchronized. Please take into account that DocNumberRecoveryFilter is used when no filter is configured. So, if re-synchronization should be blocked, or strictly required on start, ConfigurationPropertyRecoveryFilter can be used.

This feature uses the standard index recovery mode defined by the previously described parameter (it can be "from-indexing" or "from-coordinator", the default value):

<property name="index-recovery-mode" value="from-coordinator" />

Several implementations of RecoveryFilter are available.

The Infinispan template configuration for the query handler is about the same for both clustered strategies.

infinispan-indexer.xml

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
      xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
      xmlns="urn:infinispan:config:5.2">

    <global>
      <evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
        <properties>
          <property name="threadNamePrefix" value="EvictionThread"/>
        </properties>
      </evictionScheduledExecutor>

      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>

      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" clusterName="${infinispan-cluster-name}" distributedSyncTimeout="20000">
        <properties>
          <property name="configurationFile" value="${jgroups-configuration}"/>
        </properties>
      </transport>
    </global>

    <default>
      <clustering mode="replication">
        <stateTransfer timeout="20000" fetchInMemoryState="false" />
        <sync replTimeout="20000"/>
      </clustering>

      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="20000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="false"/>
      <transaction transactionManagerLookupClass="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <eviction strategy="NONE"/>

      <loaders passivation="false" shared="false" preload="false">
        <store class="${infinispan-cachestore-classname}" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
          <async enabled="false"/>
        </store>
      </loaders>
   </default>
</infinispan>

See more about template configurations here.

Managing a big set of data using JCR in a production environment sometimes requires special operations with the indexes stored on the file system. One of those maintenance operations is re-creating the index, also called "re-indexing". There are various use cases where this is important, including hardware faults, hard restarts, data corruption, migrations, and JCR updates that bring new index-related features. Usually index re-creation is requested on the server's startup or at runtime.

The common way of updating and re-creating the index is to stop the server and manually remove the indexes of the workspaces that require it. When the server is started, the missing indexes are automatically recovered by re-indexing. JCR supports direct RDBMS re-indexing, which is usually faster than the ordinary one and can be enabled via the QueryHandler parameter "rdbms-reindexing" set to "true" (for more information, please refer to "Query-handler configuration overview"). Another feature is asynchronous indexing on startup. Usually the startup is blocked until the process is finished; this block can take a long time, depending on the amount of data persisted in the repositories. This can be resolved by using an asynchronous approach to startup indexing: briefly, it performs all index operations in the background, without blocking the repository. This is controlled by the value of the "async-reindexing" parameter in the QueryHandler configuration. With asynchronous indexing active, JCR starts with no active indexes present. Queries on JCR can still be executed without exceptions, but no results are returned until index creation is completed. The index state can be checked via QueryManagerImpl:

boolean online = ((QueryManagerImpl)workspace.getQueryManager()).getQueryHandler().isOnline();

The "OFFLINE" state means that the index is currently being re-created. When the state changes, a corresponding log event is printed. At the start of the background task the index is switched to "OFFLINE", with the following log event:

[INFO] Setting index OFFLINE (repository/production[system]).

When the process is finished, two events are logged:

[INFO] Created initial index for 143018 nodes (repository/production[system]).
[INFO] Setting index ONLINE (repository/production[system]).

Those two log lines indicate the end of the process for the workspace given in brackets. Calling isOnline(), as mentioned above, will also return true.

Some hard system faults, errors during upgrades, migration issues and other factors may corrupt the index. Most likely, end customers would like production systems to fix index issues at runtime, without delays and restarts. Current versions of JCR support the "Hot Asynchronous Workspace Reindexing" feature. It allows the end user (service administrator) to launch the process in the background without stopping or blocking the whole application, using any JMX-compatible console (see the screenshot below, "JConsole in action").

The server can continue working as expected while the index is re-created. This depends on the "allow queries" flag passed via the JMX interface to the reindex operation invocation. If the flag is set, the application continues working, but there is one critical limitation the end users must be aware of: the index is frozen while the background task is running. This means that queries are performed on the index present at the moment of the task startup, and data written into the repository after startup will not be available through the search until the process is finished. Data added during re-indexing is also indexed, but it only becomes available when the task is done. Briefly, JCR makes a snapshot of the indexes at the asynchronous task startup and uses it for searches. When the operation finishes, the stale indexes are replaced by the newly created ones, including the newly added data. If the "allow queries" flag is set to false, all queries throw an exception while the task is running. The current state can be acquired using the following JMX operation:

RepositoryCreationService is the service used to create repositories at runtime. The service can be used in a standalone or clustered environment.

RepositoryCreationService depends on the following components:

  • DBCreator - used to create a new database for each unbound datasource.

  • BackupManager - used to create a repository from a backup.

  • RPCService - used for communication between cluster nodes.

    Note

    RPCService may not be configured; in this case, RepositoryCreationService will work as a standalone service.

RepositoryCreationService configuration

<component>
   <key>org.exoplatform.services.jcr.ext.backup.BackupManager</key>
   <type>org.exoplatform.services.jcr.ext.backup.impl.BackupManagerImpl</type>
   <init-params>
      <properties-param>
         <name>backup-properties</name>
         <property name="backup-dir" value="target/backup" />
      </properties-param>
   </init-params>
</component>

<component>
   <key>org.exoplatform.services.database.creator.DBCreator</key>
   <type>org.exoplatform.services.database.creator.DBCreator</type>
   <init-params>
      <properties-param>
         <name>db-connection</name>
         <description>database connection properties</description>
         <property name="driverClassName" value="org.hsqldb.jdbcDriver" />
         <property name="url" value="jdbc:hsqldb:file:target/temp/data/" />
         <property name="username" value="sa" />
         <property name="password" value="" />
      </properties-param>
      <properties-param>
         <name>db-creation</name>
         <description>database creation properties</description>
         <property name="scriptPath" value="src/test/resources/test.sql" />
         <property name="username" value="sa" />
         <property name="password" value="" />
      </properties-param>
   </init-params>
</component>

<component>
    <key>org.exoplatform.services.rpc.RPCService</key>
    <type>org.exoplatform.services.rpc.jgv3.RPCServiceImpl</type>
    <init-params>
        <value-param>
            <name>jgroups-configuration</name>
            <value>jar:/conf/udp-mux.xml</value>
        </value-param>
        <value-param>
            <name>jgroups-cluster-name</name>
            <value>RPCService-Cluster</value>
        </value-param>
        <value-param>
            <name>jgroups-default-timeout</name>
            <value>0</value>
        </value-param>
    </init-params>
</component>  

<component>
   <key>org.exoplatform.services.jcr.ext.repository.creation.RepositoryCreationService</key>
   <type>
      org.exoplatform.services.jcr.ext.repository.creation.RepositoryCreationServiceImpl
   </type>
     <init-params> 
         <value-param> 
            <name>factory-class-name</name> 
            <value>org.apache.commons.dbcp.BasicDataSourceFactory</value> 
         </value-param> 
      </init-params>
</component>

The RepositoryCreationService interface:

public interface RepositoryCreationService
{
   /**
    * Reserves, validates and creates repository in a simplified form.
    * 
    * @param rEntry - repository Entry - note that datasource must not exist.
    * @param backupId - backup id
    * @param creationProps - storage creation properties 
    * @throws RepositoryConfigurationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    * @throws RepositoryCreationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    */
   void createRepository(String backupId, RepositoryEntry rEntry, StorageCreationProperties creationProps)
      throws RepositoryConfigurationException, RepositoryCreationException;

   /**
    * Reserves, validates and creates repository in a simplified form. 
    * 
    * @param rEntry - repository Entry - note that datasource must not exist.
    * @param backupId - backup id
    * @throws RepositoryConfigurationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    * @throws RepositoryCreationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    */
   void createRepository(String backupId, RepositoryEntry rEntry) throws RepositoryConfigurationException,
      RepositoryCreationException;

   /**
    * Reserve repository name to prevent repository creation with same name from other place in same time
    * via this service.
    * 
    * @param repositoryName - repositoryName
    * @return repository token. Anyone obtaining a token can later create a repository of reserved name.
    * @throws RepositoryCreationException if the name cannot be reserved
    */
   String reserveRepositoryName(String repositoryName) throws RepositoryCreationException;

   /**
    * Creates repository, using token of already reserved repository name. 
    * Good for cases, when repository creation should be delayed or made asynchronously in dedicated thread. 
    * 
    * @param rEntry - repository entry - note, that datasource must not exist
    * @param backupId - backup id
    * @param rToken - token
    * @param creationProps - storage creation properties
    * @throws RepositoryConfigurationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    * @throws RepositoryCreationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    */
   void createRepository(String backupId, RepositoryEntry rEntry, String rToken, StorageCreationProperties creationProps)
      throws RepositoryConfigurationException, RepositoryCreationException;

   /**
    * Creates  repository, using token of already reserved repository name. Good for cases, when repository creation should be delayed or 
    * made asynchronously in dedicated thread. 
    * 
    * @param rEntry - repository entry - note, that datasource must not exist
    * @param backupId - backup id
    * @param rToken - token
    * @throws RepositoryConfigurationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    * @throws RepositoryCreationException
    *          if some exception occurred during repository creation or repository name is absent in reserved list
    */
   void createRepository(String backupId, RepositoryEntry rEntry, String rToken)
      throws RepositoryConfigurationException, RepositoryCreationException;

   /**
    * Remove previously created repository. 
    * 
    * @param repositoryName - the repository name to delete
    * @param forceRemove - force close all opened sessions 
    * @throws RepositoryCreationException
    *          if some exception occurred during repository removal
    */
   void removeRepository(String repositoryName, boolean forceRemove) throws RepositoryCreationException;

}

JCR supports two query languages: SQL and XPath. A query, whether XPath or SQL, specifies a subset of nodes within a workspace, called the result set. The result set constitutes all the nodes in the workspace that meet the constraints stated in the query.

Find all nodes in the repository. Only nodes to which the session has READ permission are found. See also Access Control.
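This query could be sketched in JCR-SQL as follows (a reconstruction from the description above, not the original statement; nt:base is the root of the node type hierarchy):

```sql
SELECT * FROM nt:base
```

The XPath equivalent would be `//element(*, nt:base)`.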

Find all nodes in the repository that contain the mixin type "mix:title".
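A hedged JCR-SQL sketch (selecting from a mixin type returns the nodes that carry it):

```sql
SELECT * FROM mix:title
```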

Find all nodes with mixin type 'mix:title' where the prop_pagecount property contains a value less than 90. Only select the title of each node.
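A possible JCR-SQL form of this query, sketched from the description above (property names as stated there):

```sql
SELECT jcr:title FROM mix:title WHERE prop_pagecount < 90
```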

Find all nodes with mixin type 'mix:title' and where the property 'jcr:title' starts with 'P'.
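Sketched in JCR-SQL using LIKE with the '%' wildcard:

```sql
SELECT * FROM mix:title WHERE jcr:title LIKE 'P%'
```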

Find all nodes with a mixin type 'mix:title' and whose property 'jcr:title' starts with 'P%ri'.

As you can see, "P%rison break" contains the symbol '%'. This symbol is reserved for LIKE comparisons. So what can we do?

Within the LIKE pattern, literal instances of percent ("%") or underscore ("_") must be escaped. The SQL ESCAPE clause allows the definition of an arbitrary escape character within the context of a single LIKE statement. The following example defines the backslash '\' as the escape character:

SELECT * FROM mytype WHERE a LIKE 'foo\%' ESCAPE '\'

XPath does not have any specification for defining escape symbols, so we must use the default escape character ('\').
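For the 'P%ri' prefix above, a hedged sketch of the SQL form (the XPath jcr:like() function would take the same escaped pattern, e.g. `//element(*, mix:title)[jcr:like(@jcr:title, 'P\%ri%')]`):

```sql
SELECT * FROM mix:title WHERE jcr:title LIKE 'P\%ri%' ESCAPE '\'
```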

Find all nodes with a mixin type 'mix:title' and where the property 'jcr:title' does NOT start with a 'P' symbol
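A hedged JCR-SQL sketch using NOT with LIKE:

```sql
SELECT * FROM mix:title WHERE NOT jcr:title LIKE 'P%'
```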

Find all fairytales with a page count more than 90 pages.

How does it sound in JCR terms? Find all nodes with mixin type 'mix:title' where the property 'jcr:description' equals "fairytale" and whose "prop_pagecount" property value is greater than 90.
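A hedged JCR-SQL sketch of this combined constraint (page count greater than 90, as stated in the task above):

```sql
SELECT * FROM mix:title WHERE jcr:description = 'fairytale' AND prop_pagecount > 90
```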

Find all documents whose title is 'Cinderella' or whose description is 'novel'.

How does it sound in JCR terms? Find all nodes with a mixin type 'mix:title' whose property 'jcr:title' equals "Cinderella" or whose "jcr:description" property value is "novel".
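A hedged JCR-SQL sketch combining the two constraints with OR:

```sql
SELECT * FROM mix:title WHERE jcr:title = 'Cinderella' OR jcr:description = 'novel'
```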

Find all nodes with a mixin type 'mix:title' where the property 'jcr:description' does not exist (is null).
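A hedged JCR-SQL sketch testing for a missing property:

```sql
SELECT * FROM mix:title WHERE jcr:description IS NULL
```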

Find all nodes with a mixin type 'mix:title' and where the property 'jcr:title' equals 'casesensitive' in lower or upper case.
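JCR-SQL provides the UPPER() and LOWER() functions for case-insensitive comparisons; a hedged sketch:

```sql
SELECT * FROM mix:title WHERE UPPER(jcr:title) = 'CASESENSITIVE'
```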

Find all nodes of primary type "nt:resource" whose jcr:lastModified property value is greater than 2006-06-04 and less than 2008-06-04.

SQL

In SQL, you have to use the keyword TIMESTAMP for date comparisons. Otherwise, the date would be interpreted as a string. The date has to be surrounded by single quotes (TIMESTAMP 'datetime') and written in the ISO 8601 format: YYYY-MM-DDThh:mm:ss.sTZD (see http://en.wikipedia.org/wiki/ISO_8601, well explained in the W3C note http://www.w3.org/TR/NOTE-datetime).

You will see that it can be a date only (YYYY-MM-DD) but also a complete date and time with a timezone designator (TZD).

// make SQL query
QueryManager queryManager = workspace.getQueryManager();
// create query
StringBuffer sb = new StringBuffer();
sb.append("select * from nt:resource where ");
sb.append("( jcr:lastModified >= TIMESTAMP '");
sb.append("2006-06-04T15:34:15.917+02:00");
sb.append("' )");
sb.append(" and ");
sb.append("( jcr:lastModified <= TIMESTAMP '");
sb.append("2008-06-04T15:34:15.917+02:00");
sb.append("' )");
String sqlStatement = sb.toString();
Query query = queryManager.createQuery(sqlStatement, Query.SQL);
// execute query and fetch result
QueryResult result = query.execute();

XPath

Compared to the SQL format, you have to use the keyword xs:dateTime and surround the datetime with extra brackets: xs:dateTime('datetime'). The actual format of the datetime also conforms to the ISO date standard.

// make XPath query
QueryManager queryManager = workspace.getQueryManager();
// create query
StringBuffer sb = new StringBuffer();
sb.append("//element(*,nt:resource)");
sb.append("[");
sb.append("@jcr:lastModified >= xs:dateTime('2006-08-19T10:11:38.281+02:00')");
sb.append(" and ");
sb.append("@jcr:lastModified <= xs:dateTime('2008-06-04T15:34:15.917+02:00')");
sb.append("]");
String xpathStatement = sb.toString();
Query query = queryManager.createQuery(xpathStatement, Query.XPATH);
// execute query and fetch result
QueryResult result = query.execute();

Find all nodes with primary type 'nt:file' whose node name is 'document'. The node name is accessible by a function called "fn:name()".
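A hedged sketch using the node-name function in the SQL dialect:

```sql
SELECT * FROM nt:file WHERE fn:name() = 'document'
```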

Find all nodes with the primary type 'nt:unstructured' whose property 'multiprop' contains both values "one" and "two".
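For multi-valued properties, a comparison matches if any value matches, so both constraints can be combined; a hedged sketch:

```sql
SELECT * FROM nt:unstructured WHERE multiprop = 'one' AND multiprop = 'two'
```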

Find a node with the primary type 'nt:file' that is located on the exact path "/folder1/folder2/document1".
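A hedged sketch constraining the exact path with the jcr:path pseudo-property:

```sql
SELECT * FROM nt:file WHERE jcr:path = '/folder1/folder2/document1'
```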

Find all nodes with the primary type 'nt:folder' that are children of node by path "/root1/root2". Only find children, do not find further descendants.
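Children only can be expressed by matching one path level and excluding deeper descendants; a hedged sketch:

```sql
SELECT * FROM nt:folder WHERE jcr:path LIKE '/root1/root2/%' AND NOT jcr:path LIKE '/root1/root2/%/%'
```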

Find all nodes with the primary type 'nt:folder' that are descendants of the node "/folder1/folder2".
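A hedged sketch matching all descendants with a path wildcard:

```sql
SELECT * FROM nt:folder WHERE jcr:path LIKE '/folder1/folder2/%'
```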

Select all nodes with the mixin type 'mix:title' and order them by the 'prop_pagecount' property.
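A hedged JCR-SQL sketch with ordering:

```sql
SELECT * FROM mix:title ORDER BY prop_pagecount
```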

Select all nodes with the mixin type 'mix:title' containing any word from the set {'brown','fox','jumps'}. Then, sort the result by score in ascending order. This way, nodes that match the query statement better are ordered at the last positions in the result list.
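A hedged sketch combining a full-text constraint with score ordering (terms joined with OR so any of them matches; jcr:score sorts ascending by default):

```sql
SELECT * FROM mix:title WHERE CONTAINS(*, 'brown OR fox OR jumps') ORDER BY jcr:score
```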

Find all nodes containing a mixin type 'mix:title' and whose 'jcr:description' contains "forest" string.
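A hedged sketch constraining the full-text search to a single property:

```sql
SELECT * FROM mix:title WHERE CONTAINS(jcr:description, 'forest')
```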

Find nodes with mixin type 'mix:title' where any property contains 'break' string.
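As an illustration, some of the tasks above could be expressed in JCR SQL roughly as follows. These statements are sketches only; exact dialect support (for example the fn:name() function) depends on the eXo JCR version in use.

```sql
-- nt:file nodes named 'document' (fn:name() is an eXo extension)
SELECT * FROM nt:file WHERE fn:name() = 'document'

-- multi-valued property 'multiprop' containing both values
SELECT * FROM nt:unstructured WHERE multiprop = 'one' AND multiprop = 'two'

-- nt:file node at an exact path
SELECT * FROM nt:file WHERE jcr:path = '/folder1/folder2/document1'

-- direct children only (exclude deeper descendants)
SELECT * FROM nt:folder WHERE jcr:path LIKE '/root1/root2/%'
                          AND NOT jcr:path LIKE '/root1/root2/%/%'

-- all descendants
SELECT * FROM nt:folder WHERE jcr:path LIKE '/folder1/folder2/%'

-- mix:title nodes ordered by a property
SELECT * FROM mix:title ORDER BY prop_pagecount ASC
```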

In this example, we will create a new Analyzer, set it in the QueryHandler configuration, and make a query to check it.

The standard analyzer does not normalize accents like é, è and à, so a word like 'tréma' will be stored in the index as 'tréma'. But what if we want to normalize such symbols, so that the word 'tréma' is stored as 'trema'?

There are two ways of setting up a new Analyzer (whether a standard one or our own):

  • The first way: create a new Analyzer (if no previously created one fits our needs) and set it in the SearchIndex.

  • The second way: register the new Analyzer in the QueryHandler configuration (accepted since version 1.12).

We will use the second one:

  • Create a new MyAnalyzer

import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.ISOLatin1AccentFilter;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class MyAnalyzer extends Analyzer
{
   @Override
   public TokenStream tokenStream(String fieldName, Reader reader)
   {
      TokenStream result = new StandardTokenizer(reader);
      // process all text with the standard filter
      // (removes 's, as in "Peter's", from the end of words and removes dots from acronyms)
      result = new StandardFilter(result);
      // this filter normalizes token text to lower case
      result = new LowerCaseFilter(result);
      // this one replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalents
      result = new ISOLatin1AccentFilter(result);
      // finally, return the token stream
      return result;
   }
}
  • Then, register the new MyAnalyzer in the configuration

<workspace name="ws">
   ...
   <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
      <properties>
         <property name="analyzer" value="org.exoplatform.services.jcr.impl.core.MyAnalyzer"/>
         ...
      </properties>
   </query-handler>
   ...
</workspace>

After that, check it with a query:

Find nodes with the mixin type 'mix:title' whose 'jcr:title' contains the strings "tréma" and "naïve".
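With MyAnalyzer registered, accented terms are normalized both at index time and at query time, so searching for the unaccented form matches the accented content. A hedged sketch of such a check in JCR SQL:

```sql
SELECT * FROM mix:title
 WHERE CONTAINS(jcr:title, 'trema') AND CONTAINS(jcr:title, 'naive')
```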

The node type nt:file represents a file. It requires a single child node, called jcr:content. This node type represents images and other binary content in a JCRWiki entry. The node type of jcr:content is nt:resource, which represents the actual content of a file.

Find nodes with the primary type 'nt:file' whose 'jcr:content' child node contains the string "cats".

Normally, we can't find such nodes using just JCR SQL or XPath queries. But we can configure indexing so that nt:file aggregates the jcr:content child node.

So, change indexing-configuration.xml:

<?xml version="1.0"?>
<!DOCTYPE configuration SYSTEM "http://www.exoplatform.org/dtd/indexing-configuration-1.2.dtd">
<configuration xmlns:jcr="http://www.jcp.org/jcr/1.0"
               xmlns:nt="http://www.jcp.org/jcr/nt/1.0">
    <aggregate primaryType="nt:file">
        <include>jcr:content</include>
        <include>jcr:content/*</include>
        <include-property>jcr:content/jcr:lastModified</include-property>
    </aggregate>
</configuration>

Now the content of the 'nt:file' and 'jcr:content' ('nt:resource') nodes is concatenated into a single Lucene document. Then, we can make a full-text search query on the content of 'nt:file'; this search includes the content of the child 'jcr:content' node.
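With the aggregate rule in place, a full-text query against nt:file also matches text stored in its jcr:content child. A sketch of such a query:

```sql
SELECT * FROM nt:file WHERE CONTAINS(., 'cats')
```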

In this example, we will set different boost values for predefined nodes, and will check the effect by selecting those nodes and ordering them by jcr:score.

The default boost value is 1.0. Higher boost values (a reasonable range is 1.0 - 5.0) will yield a higher score value and appear as more relevant.

In this example, we will exclude the 'text' property of nt:unstructured nodes from indexing. Therefore, a node will not be found by the content of this property, even if it satisfies all other constraints.

First of all, add rules to indexing-configuration.xml:

<index-rule nodeType="nt:unstructured" condition="@rule='nsiTrue'">
    <!-- default value for nodeScopeIndex is true -->
    <property>text</property>
</index-rule>

<index-rule nodeType="nt:unstructured" condition="@rule='nsiFalse'">
    <!-- do not include text in node scope index -->
    <property nodeScopeIndex="false">text</property>
</index-rule>

In this example, we want to configure indexing in the following way: all properties of nt:unstructured nodes must be excluded from search, except properties whose names end with the string 'Text'. First of all, add the following rule to indexing-configuration.xml:

<index-rule nodeType="nt:unstructured">
   <property isRegexp="true">.*Text</property>
</index-rule>

Now, let's check this rule with a simple query: select all nodes with the primary type 'nt:unstructured' that contain the string 'quick' (a full-text search over the whole node).
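A hedged sketch of that check in JCR SQL; with the rule above, a node is found only when 'quick' occurs in a property whose name ends with 'Text':

```sql
SELECT * FROM nt:unstructured WHERE CONTAINS(., 'quick')
```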

It's also called excerption (see Excerpt configuration in Search Configuration and in Searching Repository article).

The goal of this query is to find the words "eXo" and "implementation" with a full-text search and highlight these words in the result value.

Find all mix:title nodes whose title contains synonyms of the word 'fast'.

The synonym provider must be configured in the query-handler configuration:

<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
   <properties>
      ...
      <property name="synonymprovider-class" value="org.exoplatform.services.jcr.impl.core.query.lucene.PropertiesSynonymProvider" />
      <property name="synonymprovider-config-path" value="../../synonyms.properties" />
      ...
   </properties>
</query-handler>

The file synonyms.properties contains the following synonym list:

ASF=Apache Software Foundation
quick=fast
sluggish=lazy

Check the correct spelling of the phrase 'quik OR (-foo bar)' according to the data already stored in the index.

The SpellChecker must be set in the query-handler configuration.

test-jcr-config.xml:

<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
   <properties>
      ...
   <property name="spellchecker-class" value="org.exoplatform.services.jcr.impl.core.query.lucene.spell.LuceneSpellChecker$FiveSecondsRefreshInterval" />
      ...
   </properties>
</query-handler>
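Assuming the Jackrabbit-style rep:spellcheck() function (eXo JCR's Lucene query handler derives from the Jackrabbit implementation, so this syntax is an assumption to verify against your version), the spell check could be issued as an XPath query:

```
/jcr:root[rep:spellcheck('quik OR (-foo bar)')]/(rep:spellcheck())
```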

Find similar nodes to node by path '/baseFile/jcr:content'.

In our example, baseFile will contain text in which the word "terms" occurs many times. That is why the presence of this word will be used as a criterion of node similarity (to the node baseFile).

Highlighting support must be added to the configuration. test-jcr-config.xml:

<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
   <properties>
      ...
      <property name="support-highlighting" value="true" />
      ...
   </properties>
</query-handler>
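Assuming the Jackrabbit-style rep:similar() function (again inherited from the Jackrabbit query implementation; treat the exact syntax as an assumption for your version), the similarity lookup could be expressed as an XPath query:

```
//element(*, nt:resource)[rep:similar(., '/baseFile/jcr:content')]
```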

If you execute an XPath request like this:

XPath

// get QueryManager
QueryManager queryManager = workspace.getQueryManager(); 
// make XPath query
Query query = queryManager.createQuery("/jcr:root/Documents/Publie/2010//element(*, exo:article)", Query.XPATH);

You will get an error: "Invalid request". This happens because XML does not allow names starting with a number, and XPath is a part of XML: http://www.w3.org/TR/REC-xml/#NT-Name

Therefore, you cannot do XPath requests using a node name that starts with a number.

Easy workarounds:

  • Use an SQL request.

  • Use escaping:

XPath

// get QueryManager
QueryManager queryManager = workspace.getQueryManager(); 
// make XPath query
Query query = queryManager.createQuery("/jcr:root/Documents/Publie/_x0032_010//element(*, exo:article)", Query.XPATH);
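The _x0032_ escape is the ISO 9075-style encoding of the illegal leading character ('2' is U+0032). A minimal helper that performs this leading-digit escaping could look like the sketch below; XPathNameEncoder and its method are hypothetical names for illustration, not part of the eXo API.

```java
public class XPathNameEncoder {

    // Escape the leading character of a name when it is a digit,
    // ISO 9075 style: '2' (U+0032) becomes "_x0032_".
    public static String encodeLeadingDigit(String name) {
        char first = name.charAt(0);
        if (Character.isDigit(first)) {
            return String.format("_x%04x_", (int) first) + name.substring(1);
        }
        return name;
    }

    public static void main(String[] args) {
        // "2010" becomes "_x0032_010", as used in the query above
        System.out.println(encodeLeadingDigit("2010"));
    }
}
```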

You can find the JCR configuration file at .../portal/WEB-INF/conf/jcr/repository-configuration.xml. Please read also Search Configuration for more information about index configuration.

JCR supports features such as Lucene fuzzy searches (see Apache Lucene - Query Parser Syntax).

To use it, you have to form a query like the one described below:

QueryManager qman = session.getWorkspace().getQueryManager();
Query q = qman.createQuery("select * from nt:base where contains(field, 'ccccc~')", Query.SQL);
QueryResult res = q.execute();

Searching with synonyms is integrated in the jcr:contains() function and uses the same syntax as synonym searches in Google. If a search term is prefixed by a tilde symbol ( ~ ), also synonyms of the search term are taken into consideration. For example:

SQL: select * from nt:resource where contains(., '~parameter')

XPath: //element(*, nt:resource)[jcr:contains(., '~parameter')]

This feature is disabled by default and you need to add a configuration parameter to the query-handler element in your jcr configuration file to enable it.

<param  name="synonymprovider-config-path" value="..you path to configuration file....."/>
<param  name="synonymprovider-class" value="org.exoplatform.services.jcr.impl.core.query.lucene.PropertiesSynonymProvider"/>
/**
 * <code>SynonymProvider</code> defines an interface for a component that
 * returns synonyms for a given term.
 */
public interface SynonymProvider {

   /**
    * Initializes the synonym provider and passes the file system resource to
    * the synonym provider configuration defined by the configuration value of
    * the <code>synonymProviderConfigPath</code> parameter. The resource may be
    * <code>null</code> if the configuration parameter is not set.
    *
    * @param fsr the file system resource to the synonym provider
    *            configuration.
    * @throws IOException if an error occurs while initializing the synonym
    *                     provider.
    */
   public void initialize(InputStream fsr) throws IOException;

   /**
    * Returns an array of terms that are considered synonyms for the given
    * <code>term</code>.
    *
    * @param term a search term.
    * @return an array of synonyms for the given <code>term</code> or an empty
    *         array if no synonyms are known.
    */
   public String[] getSynonyms(String term);
}
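For illustration, a minimal provider implementing these two methods might parse 'term=synonym' lines, as in the synonyms.properties example used in this chapter. SimpleSynonymProvider is a hypothetical sketch, not the shipped PropertiesSynonymProvider, and symmetric lookup (both directions) is an assumption here.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class SimpleSynonymProvider {

    private final Map<String, Set<String>> synonyms = new HashMap<>();

    // Read "term=synonym" lines from the configuration resource.
    public void initialize(InputStream fsr) throws IOException {
        if (fsr == null) {
            return; // configuration parameter not set
        }
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(fsr, StandardCharsets.UTF_8));
        String line;
        while ((line = reader.readLine()) != null) {
            int eq = line.indexOf('=');
            if (eq < 0) {
                continue; // skip malformed lines
            }
            String left = line.substring(0, eq).trim().toLowerCase();
            String right = line.substring(eq + 1).trim().toLowerCase();
            // register both directions, assuming synonym lookup is symmetric
            synonyms.computeIfAbsent(left, k -> new LinkedHashSet<>()).add(right);
            synonyms.computeIfAbsent(right, k -> new LinkedHashSet<>()).add(left);
        }
    }

    // Return known synonyms for the term, or an empty array.
    public String[] getSynonyms(String term) {
        Set<String> set = synonyms.get(term.toLowerCase());
        return set == null ? new String[0] : set.toArray(new String[0]);
    }
}
```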

An ExcerptProvider retrieves text excerpts for a node in the query result and marks up the words in the text that match the query terms.

By default, highlighting of the words that matched the query is disabled because this feature requires additional information to be written to the search index. To enable it, you need to add a configuration parameter to the query-handler element in your JCR configuration file:

<param name="support-highlighting" value="true"/>

Additionally, there is a parameter that controls the format of the excerpt created. In JCR 1.9, the default is set to org.exoplatform.services.jcr.impl.core.query.lucene.DefaultHTMLExcerpt. The configuration parameter for this setting is:

<param name="excerptprovider-class" value="org.exoplatform.services.jcr.impl.core.query.lucene.DefaultXMLExcerpt"/>

The Lucene-based query handler implementation supports a pluggable spell checker mechanism. By default, spell checking is not available and you have to configure it first; see the parameter spellCheckerClass on the Search Configuration page. JCR currently provides an implementation class which uses the lucene-spellchecker contrib. The dictionary is derived from the full-text indexed content of the workspace and updated periodically. You can configure the refresh interval by picking one of the available inner classes of org.exoplatform.services.jcr.impl.core.query.lucene.spell.LuceneSpellChecker:

  • OneMinuteRefreshInterval

  • FiveMinutesRefreshInterval

  • ThirtyMinutesRefreshInterval

  • OneHourRefreshInterval

  • SixHoursRefreshInterval

  • TwelveHoursRefreshInterval

  • OneDayRefreshInterval

For example, if you want a refresh interval of six hours, the class name is: org.exoplatform.services.jcr.impl.core.query.lucene.spell.LuceneSpellChecker$SixHoursRefreshInterval. If you use org.exoplatform.services.jcr.impl.core.query.lucene.spell.LuceneSpellChecker, the refresh interval will be one hour.

The spell checker dictionary is stored as a Lucene index under "index-dir"/spellchecker. If it does not exist, a background thread will create it on startup. Similarly, the dictionary refresh is also done in a background thread so as not to block regular queries.

The purpose of analyzers is to transform all strings stored in the index into a well-defined form. The same analyzer(s) is/are used when searching, in order to adapt the query string to the indexed form.

Therefore, performing the same query using different analyzers can return different results.

Now, let's see how the same string is transformed by different analyzers.



Note

StandardAnalyzer is the default analyzer in eXo's JCR search engine, but stop words are not used.

You can assign your own analyzer as described in Search Configuration.

The eXo JCR implementation offers an extended feature beyond the JCR specification. Sometimes a JCR node has hundreds or even thousands of child nodes. Such a situation is not recommended for content repository data storage, but it does occur. This feature helps to deal with huge child lists: they can now be iterated in a "lazy" manner, giving an improvement in terms of performance and RAM usage.

Lazy child node iteration is accessible via the extended interface org.exoplatform.services.jcr.core.ExtendedNode, the inheritor of javax.jcr.Node. It provides a single new method, shown below:

   /**
    * Returns a NodeIterator over all child Nodes of this Node. Does not include properties 
    * of this Node. If this node has no child nodes, then an empty iterator is returned.
    * 
    * @return A NodeIterator over all child Nodes of this <code>Node</code>.
    * @throws RepositoryException If an error occurs.
    */
   public NodeIterator getNodesLazily() throws RepositoryException;

From the point of view of an end-user or client application, getNodesLazily() works like the JCR-specified getNodes(), returning a NodeIterator. The "lazy" iterator supports the same set of features as an ordinary NodeIterator, including skip(), but excluding remove(). The "lazy" implementation reads from the database by pages: each time it has no more elements in memory, it reads the next set of items from the persistent layer; this set is called a "page". Note that the getNodesLazily feature fully supports the session and transaction changes log, so it is a functionally complete analogue of the specified getNodes() operation. When dealing with a huge list of child nodes, getNodes() can therefore be simply and safely substituted with getNodesLazily().

JCR gives an experimental opportunity to replace all getNodes() invocations with getNodesLazily() calls. It handles a boolean system property named "org.exoplatform.jcr.forceUserGetNodesLazily" that internally replaces one call with the other, without any code changes. Use it only for development purposes: it can be applied to top-level products based on eXo JCR to perform quick compatibility and performance tests without changing any code. It is not recommended as a production solution.

The WebDAV protocol enables you to use third-party tools to communicate with hierarchical content servers via HTTP. It is possible to add and remove documents, or a set of documents, from a path on the server. DeltaV is an extension of the WebDAV protocol that allows managing document versioning. Locking guarantees protection against concurrent access when writing resources. The ordering support allows changing the position of a resource in the list and sorting the directory so that the directory tree can be viewed conveniently. The full-text search makes it easy to find the necessary documents. You can search by using two languages: SQL and XPath.

In eXo JCR, we plug in the WebDAV layer - based on the code taken from the extension modules of the reference implementation - on the top of our JCR implementation so that it is possible to browse a workspace using the third party tools (it can be Windows folders or Mac ones as well as a Java WebDAV client, such as DAVExplorer or IE using File->Open as a Web Folder).

WebDAV is now an extension of the REST service. To get the WebDAV server ready, you must deploy the REST application. Then, you can access any workspace of your repository by using the following URL:

Standalone mode:

http://host:port/rest/jcr/{RepositoryName}/{WorkspaceName}/{Path}

Portal mode:

http://host:port/portal/rest/private/jcr/{RepositoryName}/{WorkspaceName}/{Path}

When accessing the WebDAV server with the URL http://localhost:8080/rest/jcr/repository/production, you might also use "collaboration" (instead of "production"), which is the default workspace in eXo products. You will be asked to enter your login and password. Those will then be checked by using the organization service, which can be implemented thanks to an InMemory (dummy) module, a DB module or an LDAP one, and the JCR user session will be created with the correct JCR Credentials.

Related documents

<component>
  <key>org.exoplatform.services.jcr.webdav.WebDavServiceImpl</key>
  <type>org.exoplatform.services.jcr.webdav.WebDavServiceImpl</type>
  <init-params>

    <!-- default node type which is used for the creation of collections -->
    <value-param>
      <name>def-folder-node-type</name>
      <value>nt:folder</value>
    </value-param>

    <!-- default node type which is used for the creation of files -->
    <value-param>
      <name>def-file-node-type</name>
      <value>nt:file</value>
    </value-param>

    <!-- if MimeTypeResolver can't find the required mime type, 
         which conforms with the file extension, and the mimeType header is absent
         in the HTTP request header, this parameter is used 
         as the default mime type-->
    <value-param>
      <name>def-file-mimetype</name>
      <value>application/octet-stream</value>
    </value-param>

    <!-- This parameter indicates one of the three cases when you update the content of the resource by PUT command.
         In case of "create-version", PUT command creates the new version of the resource if this resource exists.
         In case of "replace" - if the resource exists, PUT command updates the content of the resource and its last modification date.
         In case of "add", the PUT command tries to create the new resource with the same name (if the parent node allows same-name siblings).-->

    <value-param>
      <name>update-policy</name>
      <value>create-version</value>
      <!--value>replace</value -->
      <!-- value>add</value -->
    </value-param>

    <!--
        This parameter determines how service responds to a method that attempts to modify file content.
        In case of "checkout-checkin" value, when a modification request is applied to a checked-in version-controlled resource, the request is automatically preceded by a checkout and followed by a checkin operation.
        In case of "checkout" value, when a modification request is applied to a checked-in version-controlled resource, the request is automatically preceded by a checkout operation.
        In case of "checkin-checkout" value, when a modification request is applied, the request is automatically preceded by a checkin then a checkout operation.
    -->         
    <value-param>
      <name>auto-version</name>
      <value>checkout-checkin</value>
      <!--value>checkout</value -->
    </value-param>

    <!--
        This parameter determines the list of (workspace, absolute path) pairs where auto-versioning is enabled.
        When a new version of a non-versioned document is uploaded through WebDAV, the new version replaces the existing one.
        When a new version of a versioned document is uploaded through WebDAV, the new version is automatically created.
    -->

    <value-param>
            <name>allowed.folder.auto-version</name>
       <value>workspace1:path1;workspace1:path2;workspace2:path3</value>
    </value-param>

    <!--
        This parameter allows enabling the auto-versioning strategy (updatePolicyType: create-version, autoVersion: checkin-checkout).
    -->

    <value-param>
       <name>enableAutoVersion</name>
       <value>false</value>
    </value-param>


       <!--
        This parameter is responsible for managing Cache-Control header value which will be returned to the client.
        You can use patterns like "text/*", "image/*" or wildcard to define the type of content.
    -->  
    <value-param>
      <name>cache-control</name>
      <value>text/xml,text/html:max-age=3600;image/png,image/jpg:max-age=1800;*/*:no-cache;</value>
    </value-param>
    
    <!--
        This parameter determines the absolute path to the folder icon file, which is shown
        during WebDAV view of the contents
    -->
    <value-param>
      <name>folder-icon-path</name>
      <value>/absolute/path/to/file</value>
    </value-param>

    <!--
        This parameter determines the absolute path to the file icon file, which is shown
        during WebDAV view of the contents
    -->
    <value-param>
      <name>file-icon-path</name>
      <value>/absolute/path/to/file</value>
    </value-param>

    <!-- 
        This parameter is responsible for the definition of untrusted user agents.
        Content-Type headers of the user agents listed here are
        ignored and MimeTypeResolver is explicitly used instead
    -->
    <values-param>
      <name>untrusted-user-agents</name>
      <value>Microsoft Office Core Storage Infrastructure/1.0</value>
    </values-param>

    <!--
        Allows to define which node type can be used to
        create files via WebDAV.
        Default value: nt:file
    -->
    <values-param>
      <name>allowed-file-node-types</name>
      <value>nt:file</value>
    </values-param>

    <!--
        Allows to define which node type can be used to
        create folders via WebDAV.
        Default value: nt:folder
    -->
    <values-param>
      <name>allowed-folder-node-types</name>
      <value>nt:folder</value>
    </values-param>

  </init-params>
</component>
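The cache-control value above follows the pattern MediaType[,MediaType]:header-value;... A small sketch of how such a value can be split into per-media-type headers; CacheControlConfig and parse are hypothetical illustration code, not the WebDAV service's actual parser.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CacheControlConfig {

    // Split "type1,type2:value;type3:value;..." into mediaType -> Cache-Control value.
    public static Map<String, String> parse(String config) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String entry : config.split(";")) {
            if (entry.trim().isEmpty()) {
                continue; // tolerate a trailing semicolon
            }
            String[] parts = entry.split(":", 2);
            String headerValue = parts[1].trim();
            for (String mediaType : parts[0].split(",")) {
                result.put(mediaType.trim(), headerValue);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> rules =
            parse("text/xml,text/html:max-age=3600;image/png,image/jpg:max-age=1800;*/*:no-cache;");
        System.out.println(rules);
    }
}
```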

There are some restrictions for WebDAV on different operating systems.

JSR 170 allows JCR nodes, under specific conditions, to have child nodes of the same name, which is not supported by WebDAV clients. To work around this, same-name sibling nodes are displayed with their index when the index is higher than 1: for example, if we have two nodes called foo, the first one is displayed as foo (as before) and the second one as foo[2]. Thanks to this approach, all WebDAV clients can display the entire content of your repository even if some nodes share the same name.

If you want to replace a file whose name is foo[3], meaning it is the 3rd node called foo under the same parent node, and you paste a file with the exact same name (assuming that the update policy allows it), the old file content will be replaced with the new one.

The JCR-FTP server is a standard eXo service that operates as an FTP server giving access to content stored in JCR repositories in the form of nt:file/nt:folder nodes (or their subtypes). Any FTP client can connect to the running server. The FTP server comes with a standard configuration which can be changed as required.

The main purpose of that feature is to restore data in case of system faults and repository crashes. Also, the backup results may be used as a content history.

The concept is based on the export of a workspace unit in the Full, or Full + Incrementals, model. A repository workspace can be backed up and restored using a combination of these modes. In all cases, at least one Full (initial) backup must be executed to mark a starting point of the backup history. An Incremental backup is not a complete image of the workspace; it contains only the changes for some period. So it is not possible to perform an Incremental backup without an initial Full backup.

The Backup service may operate as a hot-backup process at runtime on an in-use workspace. This is the case where the Full + Incrementals model should be used to guarantee data consistency during restoration. An Incremental backup will run starting from the start point of the Full backup and will also contain changes that occurred during the Full backup.

A restore operation is a mirror of a backup one. At least one Full backup should be restored to obtain a workspace corresponding to some point in time. On the other hand, Incrementals may be restored in the order of creation to reach a required state of the content. If the Incremental contains the same data as the Full backup (hot-backup), the changes will be applied again as if they were made in the normal way via API calls.

According to the model there are several modes for backup logic:

The work of Backup is based on the BackupConfig configuration and the BackupChain logical unit.

BackupConfig describes the backup operation chain that will be performed by the service. When you intend to work with it, the configuration should be prepared before the backup is started.

The configuration contains such values as:

BackupChain is a unit performing the backup process; it covers the principle of initial Full backup execution and manages the Incremental operations. BackupChain is used as a key object for accessing current backups at runtime via BackupManager. Each BackupJob performs a single atomic operation: a Full or Incremental process. The result of that operation is data for a Restore. A BackupChain can contain one or more BackupJobs, but the initial Full job is always there. Each BackupJob has its own unique number which indicates its order in the chain; the initial Full job always has the number 0.

Backup process, result data and file location

To start the backup process, it's necessary to create the BackupConfig and call the BackupManager.startBackup(BackupConfig) method. This method will return BackupChain created according to the configuration. At the same time, the chain creates a BackupChainLog which persists BackupConfig content and BackupChain operation states to the file in the service working directory (see Configuration).

When the chain starts its work and the initial BackupJob starts, the job will create a result data file using the destination directory path from BackupConfig. The destination directory will contain a directory with an automatically created name using the pattern repository_workspace-timestamp, where timestamp is the current time in the format yyyyMMdd_hhmmss (e.g. db1_ws1-20080306_055404). The directory will contain the results of all Jobs configured for execution. Each Job stores its backup result in its own file with the name repository_workspace-timestamp.jobNumber. BackupChain saves each state (STARTING, WAITING, WORKING, FINISHED) of its Jobs in the BackupChainLog, which holds the full file path of the current result.
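The naming pattern above can be made concrete with a few lines of Java. BackupDirName and dirName are hypothetical helpers shown only for illustration; the service itself generates these names internally.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.GregorianCalendar;

public class BackupDirName {

    // Build "repository_workspace-timestamp" with the timestamp formatted as yyyyMMdd_hhmmss.
    public static String dirName(String repository, String workspace, Date date) {
        return repository + "_" + workspace + "-"
            + new SimpleDateFormat("yyyyMMdd_hhmmss").format(date);
    }

    public static void main(String[] args) {
        // 2008-03-06 05:54:04 gives the directory name from the example above
        Date date = new GregorianCalendar(2008, 2, 6, 5, 54, 4).getTime();
        System.out.println(dirName("db1", "ws1", date));
    }
}
```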

The BackupChain log file and the job result files form a whole and consistent unit, which is the source for a Restore.

Restore requirements

As mentioned before, a Restore operation is a mirror of a Backup. The process is a Full restore of a root node, with the restore of additional Incremental backups to reach a desired workspace state. Restoring a workspace Full backup will create a new workspace in the repository using the given RepositoryEntry of an existing repository and the given (preconfigured) WorkspaceEntry for the new target workspace. A Restore process will restore the root node from the SysView XML data.

Finally, we may say that Restore is a process of creating a new Workspace and filling it with the Backup content. In case you already have a target Workspace (with the same name) in the Repository, you have to configure a new name for it. If no target workspace exists in the Repository, you may use the same name as the backup one.

As an optional extension, the Backup service is not enabled by default. You need to enable it via configuration.

The following is an example configuration:

<component>
  <key>org.exoplatform.services.jcr.ext.backup.BackupManager</key>
  <type>org.exoplatform.services.jcr.ext.backup.impl.BackupManagerImpl</type>
  <init-params>
    <properties-param>
      <name>backup-properties</name>
      <property name="backup-dir" value="target/backup" />
    </properties-param>
  </init-params>
</component>

The mandatory parameter is:

Also, there are optional parameters:

Restoration involves reloading the backup file into a BackupChainLog and applying the appropriate workspace initialization. The following snippet shows the typical sequence for restoring a workspace:

// find the BackupChain using the repository and workspace names (returns null if not found)
BackupChain chain = backup.findBackup("db1", "ws1");

// get the RepositoryEntry and a WorkspaceEntry for the target workspace
ManageableRepository repo = repositoryService.getRepository(repositoryName);
RepositoryEntry repositoryEntry = repo.getConfiguration();
List<WorkspaceEntry> entries = repositoryEntry.getWorkspaceEntries();
WorkspaceEntry workspaceEntry = getNewEntry(entries, workspaceName); // create a copy entry from an existing one

// load the backup log using the chain's log file path
File backLog = new File(chain.getLogFilePath());
BackupChainLog bchLog = new BackupChainLog(backLog);

// initialize the workspace
repo.configWorkspace(workspaceEntry);

// run the restoration
backup.restore(bchLog, repositoryEntry, workspaceEntry);

Repository and Workspace initialization from a backup can use the BackupWorkspaceInitializer.

To restore a Workspace from a backup via the initializer, configure BackupWorkspaceInitializer in the configuration of that workspace.

To restore a whole Repository from a backup via the initializer, configure BackupWorkspaceInitializer in the configurations of all workspaces of the Repository.

Restoring the repository or a workspace requires shutting down the repository.

Follow these steps:

Example of an initializer configuration to restore the workspace "backup" via BackupWorkspaceInitializer:

<workspaces>
  <workspace name="backup" ... >
    <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
      ...
    </container>
    <initializer class="org.exoplatform.services.jcr.impl.core.BackupWorkspaceInitializer">
      <properties>
         <property name="restore-path" value="D:\java\exo-working\backup\repository_backup-20110120_044734"/>
      </properties>
    </initializer>
    ...
  </workspace>
</workspaces>

Example of initializer configurations to restore the repository "repository" via BackupWorkspaceInitializer:

In the repository configuration, the workspace initializers are configured to refer to your backup.

For example:

...
<workspaces>
 <workspace name="system" ... >
  <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
  ...
  </container>
  <initializer class="org.exoplatform.services.jcr.impl.core.BackupWorkspaceInitializer">
   <properties>
    <property name="restore-path" value="D:\java\exo-working\backup\repository_system-20110120_052334"/>
   </properties>
  </initializer>
  ...
 </workspace>

 <workspace name="collaboration" ... >
   <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
   ...
  </container>
  <initializer class="org.exoplatform.services.jcr.impl.core.BackupWorkspaceInitializer">
   <properties>
    <property name="restore-path" value="D:\java\exo-working\backup\repository_collaboration-20110120_052341"/>
   </properties>
  </initializer>
  ...
 </workspace>

 <workspace name="backup" ... >
  <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
  ...
  </container>

  <initializer class="org.exoplatform.services.jcr.impl.core.BackupWorkspaceInitializer">
   <properties>
    <property name="restore-path" value="D:\java\exo-working\backup\repository_backup-20110120_052417"/>
   </properties>
  </initializer>
  ...
  </workspace>
</workspaces>

Restoring an existing workspace or repository is also available.

The following special methods are used for such a restore:

 /**
    * Restore existing workspace. Previous data will be deleted.
    * For getting status of workspace restore can use 
    * BackupManager.getLastRestore(String repositoryName, String workspaceName) method 
    * 
    * @param workspaceBackupIdentifier
    *          backup identifier
    * @param workspaceEntry
    *          new workspace configuration
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreExistingWorkspace(String workspaceBackupIdentifier, String repositoryName, WorkspaceEntry workspaceEntry,
      boolean asynchronous) throws BackupOperationException, BackupConfigurationException;

   /**
    * Restore existing workspace. Previous data will be deleted.
    * For getting status of workspace restore can use 
    * BackupManager.getLastRestore(String repositoryName, String workspaceName) method 
    * 
    * @param log
    *          workspace backup log
    * @param workspaceEntry
    *          new workspace configuration
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreExistingWorkspace(BackupChainLog log, String repositoryName, WorkspaceEntry workspaceEntry, boolean asynchronous)  throws BackupOperationException, BackupConfigurationException;

   /**
    * Restore existing repository. Previous data will be deleted.
    * For getting status of repository restore can use 
    * BackupManager.getLastRestore(String repositoryName) method 
    * 
    * @param repositoryBackupIdentifier
    *          backup identifier
    * @param repositoryEntry
    *          new repository configuration
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreExistingRepository(String  repositoryBackupIdentifier, RepositoryEntry repositoryEntry, boolean asynchronous)  throws BackupOperationException, BackupConfigurationException;

   /**
    * Restore existing repository. Previous data will be deleted.
    * For getting status of repository restore can use 
    * BackupManager.getLastRestore(String repositoryName) method 
    * 
    * @param log
    *          repository backup log
    * @param repositoryEntry
    *          new repository configuration
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreExistingRepository(RepositoryBackupChainLog log, RepositoryEntry repositoryEntry, boolean asynchronous)
      throws BackupOperationException, BackupConfigurationException;

In addition, the BackupManager allows you to restore a repository or a workspace using the original configuration stored in the backup log:

/**
    * Restore existing workspace. Previous data will be deleted.
    * For getting status of workspace restore can use 
    * BackupManager.getLastRestore(String repositoryName, String workspaceName) method
    * WorkspaceEntry for restore should be contained in BackupChainLog. 
    * 
    * @param workspaceBackupIdentifier
    *          identifier to workspace backup. 
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread) 
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred 
    */
   void restoreExistingWorkspace(String workspaceBackupIdentifier, boolean asynchronous)
            throws BackupOperationException,
            BackupConfigurationException;

   /**
    * Restore existing repository. Previous data will be deleted.
    * For getting status of repository restore can use 
    * BackupManager.getLastRestore(String repositoryName) method.
    * RepositoryEntry for restore should be contained in BackupChainLog. 
    * 
    * @param repositoryBackupIdentifier
    *          identifier to repository backup.   
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreExistingRepository(String repositoryBackupIdentifier, boolean asynchronous)
            throws BackupOperationException,
            BackupConfigurationException;

   /**
    * WorkspaceEntry for restore should be contained in BackupChainLog. 
    * 
    * @param workspaceBackupIdentifier
    *          identifier to workspace backup. 
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread) 
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred 
    */
   void restoreWorkspace(String workspaceBackupIdentifier, boolean asynchronous) throws BackupOperationException,
            BackupConfigurationException;

   /**
    * RepositoryEntry for restore should be contained in BackupChainLog. 
    * 
    * @param repositoryBackupIdentifier
    *          identifier to repository backup.   
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreRepository(String repositoryBackupIdentifier, boolean asynchronous) throws BackupOperationException,
            BackupConfigurationException;

    /**
    * Restore existing workspace. Previous data will be deleted.
    * For getting status of workspace restore can use 
    * BackupManager.getLastRestore(String repositoryName, String workspaceName) method
    * WorkspaceEntry for restore should be contained in BackupChainLog. 
    * 
    * @param workspaceBackupSetDir
    *          the directory with backup set  
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread) 
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred 
    */
   void restoreExistingWorkspace(File workspaceBackupSetDir, boolean asynchronous)
            throws BackupOperationException, BackupConfigurationException;

   /**
    * Restore existing repository. Previous data will be deleted.
    * For getting status of repository restore can use 
    * BackupManager.getLastRestore(String repositoryName) method.
    * RepositoryEntry for restore should be contained in BackupChainLog. 
    * 
    * @param repositoryBackupSetDir
    *          the directory with backup set     
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreExistingRepository(File repositoryBackupSetDir, boolean asynchronous)
            throws BackupOperationException, BackupConfigurationException;

   /**
    * WorkspaceEntry for restore should be contained in BackupChainLog. 
    * 
    * @param workspaceBackupSetDir
    *          the directory with backup set 
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread) 
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred 
    */
   void restoreWorkspace(File workspaceBackupSetDir, boolean asynchronous) throws BackupOperationException,
            BackupConfigurationException;

   /**
    * RepositoryEntry for restore should be contained in BackupChainLog. 
    * 
    * @param repositoryBackupSetDir
    *          the directory with backup set   
    * @param asynchronous
    *          if 'true' restore will be in asynchronous mode (i.e. in separated thread)
    * @throws BackupOperationException
    *           if backup operation exception occurred 
    * @throws BackupConfigurationException
    *           if configuration exception occurred
    */
   void restoreRepository(File repositoryBackupSetDir, boolean asynchronous) throws BackupOperationException,
            BackupConfigurationException;

You can use the backup/restore mechanism to migrate between different DB type configurations. Three DB types are currently supported (single, multi, isolated), and you can migrate between any of them.

To accomplish the migration, you simply need to set the desired DB type in the repository configuration file of the backup set. It is highly recommended to make a backup at the DB level before starting the migration process.

Before migrating your JCR data from the single/multi data format to the isolated data format, you need the backup console.

See the Building application section for more details.

Alternatively, you can download it directly from OW2.

See the Configuration Backup service section for details.

  • Create a full backup

For example:

              jcrbackup.cmd http://root:exo@localhost:8080/rest start /repository
          

Return

              Successful :
              status code = 200
          
  • Get the backup id

You need to get the backup id to use in the restore action.

For example:

              jcrbackup http://root:exo@localhost:8080 list completed
          

Return

              The completed (ready to restore) backups information :
              1) Repository backup with id 5dcbc851c0a801c9545eb434947dbe87 :
              repository name           : repository
              backup type               : full only
              started time              : lun., 21 janv. 2013 16:48:21 GMT+01:00
              finished time             : lun., 21 janv. 2013 16:48:25 GMT+01:00
          

The backup id: 5dcbc851c0a801c9545eb434947dbe87

See the Backup Client Usage section for more details.
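When scripting the migration, the backup id can be extracted from the `list completed` output. A minimal sketch, parsing the sample output line shown above (the pattern is an assumption based on that sample; adjust it if your client's wording differs):

```shell
# Sample line from the 'jcrbackup ... list completed' output shown above:
LIST_LINE='1) Repository backup with id 5dcbc851c0a801c9545eb434947dbe87 :'

# Pull out the 32-character hexadecimal backup id for use in the restore command:
BACKUP_ID=$(printf '%s\n' "$LIST_LINE" \
  | sed -n 's/.*backup with id \([0-9a-f]\{32\}\).*/\1/p')

echo "$BACKUP_ID"   # prints 5dcbc851c0a801c9545eb434947dbe87
```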

  • Set the desired DB type in the repository configuration file of the backup set

Change db-structure-type to isolated.

For example, in original-repository-config:

              exo-tomcat\temp\backup\repository_repository_backup_1358783301705\original-repository-config
          

replace

              <property name="db-structure-type" value="single"/>
          

by

              <property name="db-structure-type" value="isolated"/>
          

This change must be done for all workspaces.
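Since the change must be made for every workspace, a stream editor can apply it in one pass. A sketch of the substitution, demonstrated on the sample property line (to edit the real file, run the same sed expression with `-i` on original-repository-config inside your backup set directory):

```shell
# Flip the DB structure type on each line that carries the property:
echo '<property name="db-structure-type" value="single"/>' \
  | sed '/db-structure-type/ s|value="single"|value="isolated"|'
# prints: <property name="db-structure-type" value="isolated"/>
```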

  • Activate the persister config

Before starting the restore operation, ensure that the persister is configured to save the changes of the repository configuration.

If it is not activated, it should be configured; see the JCR Configuration persister section for more details.

  • Restore the repository with the original configuration and remove the existing one

For example:

              jcrbackup.cmd http://root:exo@localhost:8080/rest restore remove-exists 5dcbc851c0a801c9545eb434947dbe87
          

Return

              Successful :
              status code = 200
          
  • Drop the old tables with the old data format

              drop table JCR_SREF;
              drop table JCR_SVALUE;
              drop table JCR_SITEM;
          

See the Configuration Backup service section for details.

  • Create a full backup

For example:

              jcrbackup.cmd http://root:exo@localhost:8080/rest start /repository
          

Return

              Successful :
              status code = 200
          
  • Get the backup id

You need to get the backup id to launch the restore action.

For example:

              jcrbackup http://root:exo@localhost:8080 list completed
          

Return

              The completed (ready to restore) backups information :
              1) Repository backup with id 5dcbc851c0a801c9545eb434947dbe87 :
              repository name           : repository
              backup type               : full only
              started time              : lun., 21 janv. 2013 16:48:21 GMT+01:00
              finished time             : lun., 21 janv. 2013 16:48:25 GMT+01:00
          

The backup id: 5dcbc851c0a801c9545eb434947dbe87

See the Backup Client Usage section for more details.

  • Set the desired DB type in the repository configuration file of the backup set

Change db-structure-type to isolated.

For example, in original-repository-config:

              exo-tomcat\temp\backup\repository_repository_backup_1358783301705\original-repository-config
          

replace

              <property name="db-structure-type" value="multi"/>
          

by

              <property name="db-structure-type" value="isolated"/>
          

This change must be done for all workspaces.

  • Configure the datasource name used for the isolated mode

Make sure that in your repository configuration all the workspaces of a same repository share the same datasource.

  • Activate the persister config

Before starting the restore operation, ensure that the persister is configured to save the changes of the repository configuration.

If it is not activated, it should be configured; see the JCR Configuration persister section for more details.

  • Restore the repository with the original configuration and remove the existing one

For example:

              jcrbackup.cmd http://root:exo@localhost:8080/rest restore remove-exists 5dcbc851c0a801c9545eb434947dbe87
          

Return

              Successful :
              status code = 200
          
  • Drop the old tables with the old data format

              drop table JCR_MREF;
              drop table JCR_MVALUE;
              drop table JCR_MITEM;
          

GateIn uses the context /portal/rest, therefore you need to use http://host:port/portal/rest/ instead of http://host:port/rest/.

GateIn uses form authentication, so you first need to log in (the URL for form authentication is http://host:port/portal/login) and then perform your requests.

The service org.exoplatform.services.jcr.ext.backup.server.HTTPBackupAgent is a REST-based front-end to the service org.exoplatform.services.jcr.ext.backup.BackupManager. HTTPBackupAgent exposes BackupManager operations such as creating a backup, restoring from a backup, and getting the status of a current or completed backup/restore.

The backup client is an HTTP client for HTTPBackupAgent.

HTTPBackupAgent is based on REST (see the details about the REST Framework).

HTTPBackupAgent uses the POST and GET methods for its requests.

HTTPBackupAgent allows you to:

  • Start backup

  • Stop backup

  • Restore from backup

  • Delete the workspace

  • Get information about backup service (BackupManager)

  • Get information about current backup / restores / completed backups

/rest/jcr-backup/start/{repo}/{ws}

Start a backup of a specific workspace

URL: http://host:port/rest/jcr-backup/start/{repo}/{ws}

Formats: json.

Method: POST

Parameters:

The BackupConfigBean:

header :
"Content-Type" = "application/json; charset=UTF-8"

body:
<JSON to BackupConfigBean>

The JSON bean of org.exoplatform.services.jcr.ext.backup.server.bean.BackupConfigBean :

{"incrementalRepetitionNumber":<Integer>,"incrementalBackupJobConfig":<JSON to BackupJobConfig>,
"backupType":<Integer>,"fullBackupJobConfig":<JSON to BackupJobConfig>,
"incrementalJobPeriod":<Long>,"backupDir":"<String>"}

Where :

backupType                  - the type of backup:
                                  0 - full backup only;
                                  1 - full and incremental backup.
backupDir                   - the path to the backup folder;
incrementalJobPeriod        - the incremental job period;
incrementalRepetitionNumber - the incremental repetition number;
fullBackupJobConfig         - the configuration of the full backup, a JSON BackupJobConfig;
incrementalBackupJobConfig  - the configuration of the incremental backup, a JSON BackupJobConfig.

The JSON bean of org.exoplatform.services.jcr.ext.backup.server.bean.response.BackupJobConfig :

{"parameters":[<JSON to Pair>, ..., <JSON to pair> ],"backupJob":"<String>"}

Where:

backupJob  - the FQN (fully qualified name) of the BackupJob class;
parameters - the list of Pair beans in JSON.

The JSON bean of org.exoplatform.services.jcr.ext.backup.server.bean.response.Pair :

{"name":"<String>","value":"<String>"}

Where:

name  - the name of parameter;
value - the value of parameter.
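Putting the three beans together, a request body for a full backup only might look as follows. This is an illustrative sketch: the backupDir value matches the BackupManager configuration shown later in this section, and the full backup job class name and the use of null for the unused incremental job are assumptions to verify against your installation:

```
{"backupType":0,
 "backupDir":"../temp/backup",
 "incrementalJobPeriod":0,
 "incrementalRepetitionNumber":0,
 "fullBackupJobConfig":{"backupJob":"org.exoplatform.services.jcr.ext.backup.impl.fs.FullBackupJob",
                        "parameters":[]},
 "incrementalBackupJobConfig":{"backupJob":null,"parameters":[]}}
```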

Returns:

/rest/jcr-backup/info/backup

Information about the current and completed backups

URL: http://host:port/rest/jcr-backup/info/backup

Formats: json

Method: GET

Parameters: no

Returns:

/rest/jcr-backup/info/backup/{id}

Detailed information about a current or completed backup with identifier '{id}'.

URL: http://host:port/rest/jcr-backup/info/backup/{id}

Formats: json

Method: GET

Parameters:

Returns:

/rest/jcr-backup/restore/{repo}/{id}

Restore the workspace from a specific backup.

URL: http://host:port/rest/jcr-backup/restore/{repo}/{id}

Formats: json.

Method: POST

Parameters:

The RestoreBean:

header :
"Content-Type" = "application/json; charset=UTF-8"

body:
<JSON to WorkspaceEntry>

An example of the JSON bean for org.exoplatform.services.jcr.config.WorkspaceEntry:

{ "accessManager" : null,
  "autoInitPermissions" : null,
  "autoInitializedRootNt" : null,
  "cache" : { "parameters" : [ { "name" : "max-size",
            "value" : "10k"
          },
          { "name" : "live-time",
            "value" : "1h"
          }
        ],
      "type" : "org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl"
    },
  "container" : { "parameters" : [ { "name" : "source-name",
            "value" : "jdbcjcr"
          },
          { "name" : "dialect",
            "value" : "hsqldb"
          },
          { "name" : "multi-db",
            "value" : "false"
          },
          { "name" : "max-buffer-size",
            "value" : "200k"
          },
          { "name" : "swap-directory",
            "value" : "../temp/swap/production"
          }
        ],
      "type" : "org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer",
      "valueStorages" : [ { "filters" : [ { "ancestorPath" : null,
                  "minValueSize" : 0,
                  "propertyName" : null,
                  "propertyType" : "Binary"
                } ],
            "id" : "system",
            "parameters" : [ { "name" : "path",
                  "value" : "../temp/values/production"
                } ],
            "type" : "org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage"
          } ]
    },
  "initializer" : { "parameters" : [ { "name" : "root-nodetype",
            "value" : "nt:unstructured"
          } ],
      "type" : "org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer"
    },
  "lockManager" : { "timeout" : 15728640
    },
  "name" : "production",
  "queryHandler" : { "analyzer" : {  },
      "autoRepair" : true,
      "bufferSize" : 10,
      "cacheSize" : 1000,
      "documentOrder" : true,
      "errorLogSize" : 50,
      "excerptProviderClass" : "org.exoplatform.services.jcr.impl.core.query.lucene.DefaultHTMLExcerpt",
      "excludedNodeIdentifers" : null,
      "extractorBackLogSize" : 100,
      "extractorPoolSize" : 0,
      "extractorTimeout" : 100,
      "indexDir" : "../temp/jcrlucenedb/production",
      "indexingConfigurationClass" : "org.exoplatform.services.jcr.impl.core.query.lucene.IndexingConfigurationImpl",
      "indexingConfigurationPath" : null,
      "maxFieldLength" : 10000,
      "maxMergeDocs" : 2147483647,
      "mergeFactor" : 10,
      "minMergeDocs" : 100,
      "parameters" : [ { "name" : "index-dir",
            "value" : "../temp/jcrlucenedb/production"
          } ],
      "queryClass" : "org.exoplatform.services.jcr.impl.core.query.QueryImpl",
      "queryHandler" : null,
      "resultFetchSize" : 2147483647,
      "rootNodeIdentifer" : "00exo0jcr0root0uuid0000000000000",
      "spellCheckerClass" : null,
      "supportHighlighting" : false,
      "synonymProviderClass" : null,
      "synonymProviderConfigPath" : null,
      "type" : "org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex",
      "useCompoundFile" : false,
      "volatileIdleTime" : 3
    },
  "uniqueName" : "repository_production"
}

Returns:

/rest/jcr-backup/info/default-ws-config

Returns the JSON bean of the WorkspaceEntry for the default workspace.

URL: http://host:port/rest/jcr-backup/info/default-ws-config

Formats: json

Method: GET

Parameters: no

Returns:

Add the components org.exoplatform.services.jcr.ext.backup.server.HTTPBackupAgent and org.exoplatform.services.jcr.ext.backup.BackupManager to the services configuration:

<component>
  <type>org.exoplatform.services.jcr.ext.backup.server.HTTPBackupAgent</type>
</component>

<component>
  <type>org.exoplatform.services.jcr.ext.repository.RestRepositoryService</type>
</component>

<component>
  <key>org.exoplatform.services.jcr.ext.backup.BackupManager</key>
  <type>org.exoplatform.services.jcr.ext.backup.impl.BackupManagerImpl</type>
  <init-params>
    <properties-param>
      <name>backup-properties</name>
      <property name="backup-dir" value="../temp/backup" />
    </properties-param>
  </init-params>
</component>

If you restore a backup into the same workspace (that is, you drop the previous workspace), you need to configure the RepositoryServiceConfiguration in order to save the changes of the repository configuration. For example:

<component>
  <key>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</key>
  <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationImpl</type>
  <init-params>
    <value-param>
      <name>conf-path</name>
      <description>JCR repositories configuration file</description>
      <value>jar:/conf/portal/exo-jcr-config.xml</value>
    </value-param>
    <properties-param>
      <name>working-conf</name>
      <description>working-conf</description>
      <property name="source-name" value="jdbcjcr" />
      <property name="dialect" value="hsqldb" />
      <property name="persister-class-name" value="org.exoplatform.services.jcr.impl.config.JDBCConfigurationPersister" />
    </properties-param>
  </init-params>
</component>

See the eXo JCR Configuration article at the 'Portal and Standalone configuration' section for details.

The backup client is a console application that acts as an HTTP client for HTTPBackupAgent.

Command signature:

Help info:
 <url_basic_authentication>|<url form authentication>  <cmd> 
 <url_basic_authentication>  :   http(s)://login:password@host:port/<context> 

 <url form authentication>   :   http(s)://host:port/<context> "<form auth parm>" 
     <form auth parm>        :   form <method> <form path>
     <method>                :   POST or GET
     <form path>             :   /path/path?<paramName1>=<paramValue1>&<paramName2>=<paramValue2>...
     Example to <url form authentication> : http://127.0.0.1:8080/portal/rest form POST "/portal/login?initialURI=/portal/private&username=root&password=gtn"

 <cmd>  :   start <repo[/ws]> <backup_dir> [<incr>] 
            stop <backup_id> 
            status <backup_id> 
            restores <repo[/ws]> 
            restore [remove-exists] {{<backup_id>|<backup_set_path>} | {<repo[/ws]> {<backup_id>|<backup_set_path>} [<pathToConfigFile>]}} 
            list [completed] 
            info 
            drop [force-close-session] <repo[/ws]>  
            help  

 start          - start backup of repository or workspace 
 stop           - stop backup 
 status         - information about the current or completed backup by 'backup_id' 
 restores       - information about the last restore on specific repository or workspace 
 restore        - restore the repository or workspace from specific backup 
 list           - information about the current backups (in progress) 
 list completed - information about the completed (ready to restore) backups 
 info           - information about the service backup 
 drop           - delete the repository or workspace 
 help           - print help information about backup console 

 <repo[/ws]>         - /<repository-name>[/<workspace-name>]  the repository or workspace 
 <backup_dir>        - path to folder for backup on remote server 
 <backup_id>         - the identifier for backup 
 <backup_set_dir>    - path to folder with backup set on remote server
 <incr>              - incremental job period 
 <pathToConfigFile>  - path (local) to repository or workspace configuration 
 remove-exists       - fully remove (db, value storage, index) the existing repository/workspace 
 force-close-session - close opened sessions on repository or workspace. 

 All valid combination of parameters for command restore: 
  1. restore remove-exists <repo/ws> <backup_id>       <pathToConfigFile> 
  2. restore remove-exists <repo>    <backup_id>       <pathToConfigFile> 
  3. restore remove-exists <repo/ws> <backup_set_path> <pathToConfigFile> 
  4. restore remove-exists <repo>    <backup_set_path> <pathToConfigFile> 
  5. restore remove-exists <backup_id> 
  6. restore remove-exists <backup_set_path> 
  7. restore <repo/ws> <backup_id>       <pathToConfigFile> 
  8. restore <repo>    <backup_id>       <pathToConfigFile> 
  9. restore <repo/ws> <backup_set_path> <pathToConfigFile> 
 10. restore <repo>    <backup_set_path> <pathToConfigFile> 
 11. restore <backup_id> 
 12. restore <backup_set_path> 
<repository-service default-repository="repository">
  <repositories>
    <repository name="repository" system-workspace="production" default-workspace="production">
      <security-domain>exo-domain</security-domain>
      <access-control>optional</access-control>
      <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
      <workspaces>
        
        <workspace name="backup">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcjcr" />
              <property name="dialect" value="pgsql" />
              <property name="multi-db" value="false" />
              <property name="max-buffer-size" value="200k" />
              <property name="swap-directory" value="../temp/swap/backup" />
            </properties>
            <value-storages>
              <value-storage id="draft" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                <properties>
                  <property name="path" value="../temp/values/backup" />
                </properties>
                <filters>
                  <filter property-type="Binary"/>
                </filters>
              </value-storage>
            </value-storages>
          </container>
          <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
            <properties>
              <property name="root-nodetype" value="nt:unstructured" />
            </properties>
          </initializer>
          <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl">
            <properties>
              <property name="max-size" value="10k" />
              <property name="live-time" value="1h" />
            </properties>
          </cache>
          <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
            <properties>
              <property name="index-dir" value="../temp/jcrlucenedb/backup" />
            </properties>
          </query-handler>
          <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
             <properties>
                <property name="time-out" value="15m" />
                <property name="infinispan-configuration" value="infinispan-lock.xml" />
                <property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
                <property name="infinispan-cl-cache.jdbc.table.create" value="true" />
                <property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
                <property name="infinispan-cl-cache.jdbc.id.column" value="id" />
                <property name="infinispan-cl-cache.jdbc.data.column" value="data" />
                <property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
                <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
                <property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
             </properties>
          </lock-manager>
        </workspace>
      </workspaces>
    </repository>
  </repositories>
</repository-service>

This use case requires the RestRepositoryService to be enabled, since deleting the repository needs it.

<component>
   <type>org.exoplatform.services.jcr.ext.repository.RestRepositoryService</type>
</component>
<repository-service default-repository="repository">
   <repositories>
      <repository name="repository" system-workspace="production" default-workspace="production">
         <security-domain>exo-domain</security-domain>
         <access-control>optional</access-control>
         <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
         <workspaces>
            <workspace name="production">
               <!-- for system storage -->
               <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
                  <properties>
                     <property name="source-name" value="jdbcjcr" />
                     <property name="multi-db" value="false" />
                     <property name="max-buffer-size" value="200k" />
                     <property name="swap-directory" value="../temp/swap/production" />
                  </properties>
                  <value-storages>
                     <value-storage id="system" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                        <properties>
                           <property name="path" value="../temp/values/production" />
                        </properties>
                        <filters>
                           <filter property-type="Binary" />
                        </filters>
                     </value-storage>
                  </value-storages>
               </container>
               <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
                  <properties>
                     <property name="root-nodetype" value="nt:unstructured" />
                  </properties>
               </initializer>
               <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl">
                  <properties>
                     <property name="max-size" value="10k" />
                     <property name="live-time" value="1h" />
                  </properties>
               </cache>
               <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
                  <properties>
                     <property name="index-dir" value="../temp/jcrlucenedb/production" />
                  </properties>
               </query-handler>
               <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
                  <properties>
                     <property name="time-out" value="15m" />
                     <property name="infinispan-configuration" value="infinispan-lock.xml" />
                     <property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
                     <property name="infinispan-cl-cache.jdbc.table.create" value="true" />
                     <property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
                     <property name="infinispan-cl-cache.jdbc.id.column" value="id" />
                     <property name="infinispan-cl-cache.jdbc.data.column" value="data" />
                     <property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
                     <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
                     <property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
                  </properties>
               </lock-manager>
            </workspace>

            <workspace name="backup">
               <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
                  <properties>
                     <property name="source-name" value="jdbcjcr" />
                     <property name="multi-db" value="false" />
                     <property name="max-buffer-size" value="200k" />
                     <property name="swap-directory" value="../temp/swap/backup" />
                  </properties>
                  <value-storages>
                     <value-storage id="draft" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                        <properties>
                           <property name="path" value="../temp/values/backup" />
                        </properties>
                        <filters>
                           <filter property-type="Binary" />
                        </filters>
                     </value-storage>
                  </value-storages>
               </container>
               <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
                  <properties>
                     <property name="root-nodetype" value="nt:unstructured" />
                  </properties>
               </initializer>
               <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl">
                  <properties>
                     <property name="max-size" value="10k" />
                     <property name="live-time" value="1h" />
                  </properties>
               </cache>
               <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
                  <properties>
                     <property name="index-dir" value="../temp/jcrlucenedb/backup" />
                  </properties>
               </query-handler>
               <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
                  <properties>
                     <property name="time-out" value="15m" />
                     <property name="infinispan-configuration" value="infinispan-lock.xml" />
                     <property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
                     <property name="infinispan-cl-cache.jdbc.table.create" value="true" />
                     <property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
                     <property name="infinispan-cl-cache.jdbc.id.column" value="id" />
                     <property name="infinispan-cl-cache.jdbc.data.column" value="data" />
                     <property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
                     <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
                     <property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
                  </properties>
               </lock-manager>
            </workspace>

            <workspace name="digital-assets">
               <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
                  <properties>
                     <property name="source-name" value="jdbcjcr" />
                     <property name="multi-db" value="false" />
                     <property name="max-buffer-size" value="200k" />
                     <property name="swap-directory" value="../temp/swap/digital-assets" />
                  </properties>
                  <value-storages>
                     <value-storage id="digital-assets" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                        <properties>
                           <property name="path" value="../temp/values/digital-assets" />
                        </properties>
                        <filters>
                           <filter property-type="Binary" />
                        </filters>
                     </value-storage>
                  </value-storages>
               </container>
               <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
                  <properties>
                     <property name="root-nodetype" value="nt:folder" />
                  </properties>
               </initializer>
               <cache enabled="true" class="org.exoplatform.services.jcr.impl.dataflow.persistent.LinkedWorkspaceStorageCacheImpl">
                  <properties>
                     <property name="max-size" value="5k" />
                     <property name="live-time" value="15m" />
                  </properties>
               </cache>
               <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
                  <properties>
                     <property name="index-dir" value="../temp/jcrlucenedb/digital-assets" />
                  </properties>
               </query-handler>
               <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
                  <properties>
                     <property name="time-out" value="15m" />
                     <property name="infinispan-configuration" value="infinispan-lock.xml" />
                     <property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
                     <property name="infinispan-cl-cache.jdbc.table.create" value="true" />
                     <property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
                     <property name="infinispan-cl-cache.jdbc.id.column" value="id" />
                     <property name="infinispan-cl-cache.jdbc.data.column" value="data" />
                     <property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
                     <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
                     <property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
                  </properties>
               </lock-manager>
            </workspace>
         </workspaces>
      </repository>
   </repositories>
</repository-service>

This section will show you how to get and manage all statistics provided by eXo JCR.

In order to have a better idea of the time spent in the database access layer, it can be interesting to gather statistics on that part of the code, knowing that most of the time spent in eXo JCR is database access. These statistics will then allow you to identify, without using any profiler, what is abnormally slow in this layer, which could help to fix the problem quickly.

In case you use org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer or org.exoplatform.services.jcr.impl.storage.jdbc.JDBCWorkspaceDataContainer as the WorkspaceDataContainer, you can get statistics on the time spent in the database access layer. The database access layer (in eXo JCR) is represented by the methods of the interface org.exoplatform.services.jcr.storage.WorkspaceStorageConnection, so for all the methods defined in this interface, we can have the following figures:

Those figures are also available globally for all the methods, which gives us the global behavior of this layer.

If you want to enable the statistics, you just need to set the JVM parameter JDBCWorkspaceDataContainer.statistics.enabled to true. The corresponding CSV file is StatisticsJDBCStorageConnection-${creation-timestamp}.csv. For more details about how the CSV files are managed, please refer to the section dedicated to the statistics manager.

The format of each column header is ${method-alias}-${metric-alias}. The metric aliases are described in the statistics manager section.

The name of the category of statistics corresponding to these statistics is JDBCStorageConnection; this name is mainly needed to access the statistics through JMX.
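As a minimal sketch of how the pieces above fit together: the system-property name and the CSV file-name pattern come from this section, while the csvFileName helper itself is hypothetical, not part of the eXo JCR API.

```java
// Sketch: flipping the JDBC statistics switch and reproducing the documented
// CSV file-name pattern StatisticsJDBCStorageConnection-${creation-timestamp}.csv.
public class JdbcStatisticsSetup {

    static final String ENABLE_PROPERTY = "JDBCWorkspaceDataContainer.statistics.enabled";

    // Hypothetical helper reproducing the documented file-name pattern.
    static String csvFileName(long creationTimestamp) {
        return "StatisticsJDBCStorageConnection-" + creationTimestamp + ".csv";
    }

    public static void main(String[] args) {
        // Equivalent of passing -DJDBCWorkspaceDataContainer.statistics.enabled=true
        // on the command line.
        System.setProperty(ENABLE_PROPERTY, "true");
        System.out.println(Boolean.getBoolean(ENABLE_PROPERTY));
        System.out.println(csvFileName(1700000000000L));
    }
}
```

In practice you would pass the property on the JVM command line rather than set it programmatically; the code form is only used here so the effect can be observed directly.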


In order to know exactly how your application uses eXo JCR, it can be interesting to record all the JCR API accesses, in order to easily create real-life test scenarios based on pure JCR calls and also to tune your eXo JCR to better fit your requirements.

In order to allow you to configure which parts of eXo JCR need to be monitored, without applying any changes to your code and/or building anything, we chose to rely on the load-time weaving proposed by AspectJ.

To enable this feature, you will have to add in your classpath the following jar files:

You will also need to get aspectjweaver-1.6.8.jar from the main Maven repository http://repo2.maven.org/maven2/org/aspectj/aspectjweaver. At this stage, to enable the statistics on the JCR API accesses, you will need to add the JVM parameter -javaagent:${pathto}/aspectjweaver-1.6.8.jar to your command line. For more details, please refer to http://www.eclipse.org/aspectj/doc/released/devguide/ltw-configuration.html.

By default, the configuration will collect statistics on all the methods of the internal interfaces org.exoplatform.services.jcr.core.ExtendedSession and org.exoplatform.services.jcr.core.ExtendedNode, and of the JCR API interface javax.jcr.Property. To add and/or remove interfaces to monitor, you need to change two configuration files bundled into the jar exo.jcr.component.statistics-X.Y.Z.jar: conf/configuration.xml and META-INF/aop.xml.

Below is the content of conf/configuration.xml; you will need to modify it to add and/or remove the fully qualified names of the interfaces to monitor in the list of parameter values of the init param targetInterfaces.

<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
 xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">

 <component>
   <type>org.exoplatform.services.jcr.statistics.JCRAPIAspectConfig</type>
   <init-params>
     <values-param>
       <name>targetInterfaces</name>
       <value>org.exoplatform.services.jcr.core.ExtendedSession</value>
       <value>org.exoplatform.services.jcr.core.ExtendedNode</value>
       <value>javax.jcr.Property</value>
     </values-param>
   </init-params>
  </component>
</configuration>

Below is the content of META-INF/aop.xml; you will need to modify it to add and/or remove the fully qualified names of the interfaces to monitor in the expression filter of the pointcut JCRAPIPointcut. As you can see below, by default only JCR API calls from the exoplatform packages are taken into account; don't hesitate to modify this filter to add your own package names.

<aspectj>
  <aspects>
    <concrete-aspect name="org.exoplatform.services.jcr.statistics.JCRAPIAspectImpl" extends="org.exoplatform.services.jcr.statistics.JCRAPIAspect">
      <pointcut name="JCRAPIPointcut"
        expression="(target(org.exoplatform.services.jcr.core.ExtendedSession) || target(org.exoplatform.services.jcr.core.ExtendedNode) || target(javax.jcr.Property)) &amp;&amp; call(public * *(..))" />
    </concrete-aspect>
  </aspects>
  <weaver options="-XnoInline">
    <include within="org.exoplatform..*" />
  </weaver>
</aspectj> 

The corresponding CSV files are of type Statistics${interface-name}-${creation-timestamp}.csv. For more details about how the CSV files are managed, please refer to the section dedicated to the statistics manager.

The format of each column header is ${method-alias}-${metric-alias}. The method alias will be of type ${method-name}(list of parameter types separated by ; to be compatible with the CSV format).
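The alias construction described above can be sketched as follows. Both helpers are hypothetical (they are not part of the eXo JCR API), and the metric alias "Avg" is used only as a placeholder for one of the aliases described in the statistics manager section.

```java
// Sketch: reconstructing the column-header format ${method-alias}-${metric-alias},
// where the method alias is ${method-name}(paramType1;paramType2;...) and ';'
// keeps the header CSV-safe.
public class MethodAliasExample {

    static String methodAlias(String methodName, String... parameterTypes) {
        return methodName + "(" + String.join(";", parameterTypes) + ")";
    }

    static String columnHeader(String methodAlias, String metricAlias) {
        return methodAlias + "-" + metricAlias;
    }

    public static void main(String[] args) {
        String alias = methodAlias("setProperty", "String", "InputStream");
        System.out.println(columnHeader(alias, "Avg"));
    }
}
```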

The metric aliases are described in the statistics manager section.

The name of the category of statistics corresponding to these statistics is the simple name of the monitored interface (e.g. ExtendedSession for org.exoplatform.services.jcr.core.ExtendedSession); this name is mainly needed to access the statistics through JMX.

Please note that this feature will affect the performance of eXo JCR, so it must be used with caution.

The statistics manager manages all the statistics provided by eXo JCR. It is responsible for printing the data into the CSV files and also for exposing the statistics through JMX and/or REST.

The statistics manager will create a CSV file for each category of statistics that it manages; the format of those file names is Statistics${category-name}-${creation-timestamp}.csv. Those files will be created in the user directory if possible, otherwise in the temporary directory. The format of those files is CSV (i.e. Comma-Separated Values); one new line is added regularly (every 5 seconds by default) and one last line is added at JVM exit. Each line is composed of the 5 figures described below, for each method and globally for all the methods.


You can disable the persistence of the statistics by setting the JVM parameter JCRStatisticsManager.persistence.enabled to false; by default, it is set to true. You can also define the period of time between each record (i.e. each line of data in the file) by setting the JVM parameter JCRStatisticsManager.persistence.timeout to your expected value expressed in milliseconds; by default, it is set to 5000.
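The two JVM parameters named above, with their documented defaults, can be read as in this sketch (the helper methods are hypothetical, but the property names and defaults come from this section):

```java
// Sketch: reading the persistence-related JVM parameters with the
// documented defaults (enabled=true, timeout=5000 ms).
public class StatisticsPersistenceSettings {

    static boolean persistenceEnabled() {
        return Boolean.parseBoolean(
            System.getProperty("JCRStatisticsManager.persistence.enabled", "true"));
    }

    static long persistenceTimeoutMillis() {
        return Long.parseLong(
            System.getProperty("JCRStatisticsManager.persistence.timeout", "5000"));
    }

    public static void main(String[] args) {
        // With no -D flags passed, the documented defaults apply.
        System.out.println(persistenceEnabled());
        System.out.println(persistenceTimeoutMillis());
    }
}
```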

You can also access the statistics through JMX; the available methods are the following:

Table 1.44. JMX Methods

getMin: Gives the minimum time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the category of statistics (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value.
getMax: Gives the maximum time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the category of statistics (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value.
getTotal: Gives the total amount of time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the category of statistics (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value.
getAvg: Gives the average time spent in the method corresponding to the given category name and statistics name. The expected arguments are the name of the category of statistics (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value.
getTimes: Gives the total number of times the method corresponding to the given category name and statistics name has been called. The expected arguments are the name of the category of statistics (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value.
reset: Resets the statistics for the given category name and statistics name. The expected arguments are the name of the category of statistics (e.g. JDBCStorageConnection) and the name of the expected method, or global for the global value.
resetAll: Resets all the statistics for the given category name. The expected argument is the name of the category of statistics (e.g. JDBCStorageConnection).


The full name of the related MBean is exo:service=statistic, view=jcr.
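The methods in the table above can be invoked remotely through standard JMX calls, as in this sketch. Only the MBean name comes from this section (written here without the whitespace shown above, so the string parses as a strict ObjectName); the getAvg wrapper assumes a reachable MBeanServerConnection to a running eXo JCR instance.

```java
// Sketch: invoking a statistics operation on the exo:service=statistic,view=jcr
// MBean through a generic MBeanServerConnection.
import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class JcrStatisticsJmxClient {

    static ObjectName statisticsMBeanName() {
        try {
            return new ObjectName("exo:service=statistic,view=jcr");
        } catch (MalformedObjectNameException e) {
            throw new IllegalStateException(e);
        }
    }

    // Calls getAvg(categoryName, statisticsName) on the statistics MBean,
    // e.g. getAvg(connection, "JDBCStorageConnection", "global").
    static Object getAvg(MBeanServerConnection connection,
                         String categoryName, String statisticsName) throws Exception {
        return connection.invoke(statisticsMBeanName(), "getAvg",
            new Object[] { categoryName, statisticsName },
            new String[] { String.class.getName(), String.class.getName() });
    }
}
```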

Production systems, like any systems, may encounter faults some day. They may be caused by hardware and/or software problems, human error during updates, and many other circumstances. It is important to check the integrity and consistency of the system if it has no backup, if the backup is stale, or if the recovery process would take much time. The eXo JCR implementation offers an innovative JMX-based complex checking tool. When an inspection is run, this tool checks every major JCR component, such as the persistent data layer and the index. The persistent layer includes the JDBC Data Container and the Value Storage if they are configured. The database is verified using a set of complex, specialized, domain-specific queries. The Value Storage tool checks the existence of and access to each file. Index verification is a two-way pass cycle: the existence of each node in the index is checked against the persistent layer and, in the opposite direction, each node from the Data Container is validated in the index. Access to the checking tool is exposed via the JMX interface (RepositoryCheckController MBean) with the following operations available:


Among the known inconsistencies described in the next section, the following can be checked and repaired automatically:

  • An item has no parent node: Properties will be removed and, in case of nodes, the root UUID will be assigned.

  • A node has a single-valued property with nothing declared in the VALUE table: This property will be removed if it is not required by the primary type of its node.

  • A node has no primary type property: This node and its whole subtree will be removed if it is not required by the primary type of its parent.

  • A value record has no related property record: The value record will be removed from the database.

  • An item is its own parent: Properties will be removed and, in case of nodes, the root UUID will be assigned.

  • Several versions of the same item: All records with earlier versions will be removed from the ITEM table.

  • Reference properties without reference records: The property will be removed if it is not required by the primary type of its node.

  • A node is marked as locked in the lock manager's table but not in the ITEM table, or the opposite: All lock inconsistencies will be removed from both tables.

Note

The only inconsistency that cannot be fixed automatically is corrupted VALUE records, where both the STORAGE_DESC and DATA fields contain a non-null value, since there is no way to determine which value is valid: the one on the file system or the one in the database.

The list of ValueStorage inconsistencies which can be checked and repaired automatically:

  • A property's value is stored in the file system but the content is missing: A new empty file corresponding to this value will be created.

The list of SearchIndex inconsistencies which can be checked is given below. To repair them, the content needs to be reindexed completely, which can also be done using JMX:

  • Not indexed document

  • Document indexed more than one time

  • Document corresponds to removed node


All tool activities are stored in a file, which can be found in the application directory. The syntax of the file name is report-<repository name>-dd-MMM-yy-HH-mm.txt.
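The date part of that file name follows the standard Java date pattern dd-MMM-yy-HH-mm, as this sketch shows. The reportFileName helper is hypothetical; only the naming syntax comes from this section.

```java
// Sketch: reproducing the documented report file-name syntax
// report-<repository name>-dd-MMM-yy-HH-mm.txt with SimpleDateFormat.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class CheckReportFileName {

    static String reportFileName(String repositoryName, Date date, TimeZone zone) {
        SimpleDateFormat format = new SimpleDateFormat("dd-MMM-yy-HH-mm", Locale.ENGLISH);
        format.setTimeZone(zone);
        return "report-" + repositoryName + "-" + format.format(date) + ".txt";
    }

    public static void main(String[] args) {
        // Epoch instant in UTC formats as 01-Jan-70-00-00.
        System.out.println(reportFileName("repository", new Date(0L),
            TimeZone.getTimeZone("UTC")));
    }
}
```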

Note

You can use the JMX parameter nThreads to set the number of threads used for checking and repairing the repository (by default, the RepositoryCheckController uses a single thread).

Warning

When the multi-threaded mode is used, the RepositoryCheckController uses more memory, so it is recommended to avoid setting a large number of threads.

Here are examples of JCR corruptions and ways to eliminate them:

  1. Items have no parent nodes.

  2. A node has a single-valued property with no declaration in the VALUE table.

  3. A node has no primary type property.

  4. Value records have no related property record.

  5. Corrupted VALUE records: both the STORAGE_DESC and DATA fields contain a non-null value.

  6. An item is its own parent.

  7. Several versions of the same item.

  8. Reference properties without reference records.

  9. A node is considered locked in the lock manager data but is not locked according to the JCR data, or the opposite.

  10. A property's value is stored in the file system, but its content is missing.

    This cannot be checked via simple SQL queries.

The quota manager is designed to provide the ability to manage quotas of eXo JCR entities, which can be very useful for administration purposes. In general, the major features are:

To use the Quota Manager along with Infinispan, you need to configure it. An example of the configuration:

 <component>
    <key>org.exoplatform.services.jcr.impl.quota.QuotaManager</key>
    <type>org.exoplatform.services.jcr.impl.quota.infinispan.ISPNQuotaManagerImpl</type>
    <init-params>
      <value-param>
        <name>exceeded-quota-behaviour</name>
        <value>exception</value>
      </value-param>
      <properties-param>
        <name>cache-configuration</name>
        <description>infinispan-configuration</description>
        <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr"/>
        <property name="infinispan-cl-cache.jdbc.dialect" value="${dialect}" />
        <property name="infinispan-configuration" value="conf/standalone/test-infinispan-quota.xml" />
        <property name="jgroups-configuration" value="jar:/conf/portal/cluster/udp-mux.xml" />
      </properties-param>
    </init-params>
  </component>

Quota manager interface declares the following methods:

eXo JCR supports J2EE Connector Architecture 1.5; thus, if you would like to delegate the JCR session lifecycle to your application server, you can use the JCA Resource Adapter for eXo JCR if your application server supports JCA 1.5. This adapter only supports XA transactions; in other words, you cannot use it for local transactions. Since JCR sessions have not been designed to be shareable, session pooling is simply not covered by the adapter.

The equivalent of javax.resource.cci.ConnectionFactory in JCA terminology is org.exoplatform.connectors.jcr.adapter.SessionFactory in the context of eXo JCR. The resource that you will get thanks to a JNDI lookup is of type SessionFactory and provides the following methods:

   /**
    * Get a JCR session corresponding to the repository
    * defined in the configuration and the default workspace.
    * @return a JCR session corresponding to the criteria
    * @throws RepositoryException if the session could not be created
    */
   Session getSession() throws RepositoryException;

   /**
    * Get a JCR session corresponding to the repository
    * defined in the configuration and the default workspace, using
    * the given user name and password.
    * @param userName the user name to use for the authentication
    * @param password the password to use for the authentication
    * @return a JCR session corresponding to the criteria
    * @throws RepositoryException if the session could not be created
    */
   Session getSession(String userName, String password) throws RepositoryException;

   /**
    * Get a JCR session corresponding to the repository
    * defined in the configuration and the given workspace.
    * @param workspace the name of the expected workspace
    * @return a JCR session corresponding to the criteria
    * @throws RepositoryException if the session could not be created
    */
   Session getSession(String workspace) throws RepositoryException;

   /**
    * Get a JCR session corresponding to the repository
    * defined in the configuration and the given workspace, using
    * the given user name and password.
    * @param workspace the name of the expected workspace
    * @param userName the user name to use for the authentication
    * @param password the password to use for the authentication
    * @return a JCR session corresponding to the criteria
    * @throws RepositoryException if the session could not be created
    */
   Session getSession(String workspace, String userName, String password) throws RepositoryException;

In case of the standalone mode, where the JCR and its dependencies are not provided, you will need to deploy the whole ear file corresponding to the artifactId exo.jcr.ear and groupId org.exoplatform.jcr; the rar file is embedded into the ear file. In case the JCR and its dependencies are provided, for example when you use it with GateIn, you will need to deploy only the rar file corresponding to the artifactId exo.jcr.connectors.jca and groupId org.exoplatform.jcr.

To deploy the JCA module in standalone mode:

To deploy the JCA module on Platform:

To deploy the JCA module on GateIn/JPP:

eXo JCR is a complete implementation of the standard JSR 170: Content Repository for Java™ Technology API, including Level 1, Level 2 and Additional Features specified in the JCR Specification.

The JSR 170 specification does not define how permissions are managed or checked, so eXo JCR has implemented its own proprietary extension to manage and check permissions on nodes. In essence, this extension uses an Access Control List (ACL) policy model applied to the eXo Organization model (see eXo Platform Organization Service).

An access control list (ACL) is a list of permissions attached to an object. An ACL specifies which users, groups or system processes are granted access to JCR nodes, as well as what operations are allowed to be performed on given objects.

eXo JCR Access Control is based on two facets applied to nodes: ownership (the exo:owneable mixin with its exo:owner property) and permissions (the exo:privilegeable mixin with its exo:permissions property).

Access Control nodetypes are not extensible: The access control mechanism works for the exo:owneable and exo:privilegeable nodetypes only, not for their subtypes! So you cannot extend those nodetypes.

Autocreation: By default, newly created nodes are neither exo:privilegeable nor exo:owneable, but it is possible to configure the repository to auto-create exo:privilegeable and/or exo:owneable nodes thanks to eXo's JCR interceptors extension (see JCR Extensions).

OR-based Privilege Inheritance: Note that eXo's Access Control implementation supports a privilege inheritance that follows an either/or strategy and has only an ALLOW privilege mechanism (there is no DENY feature). This means that a session is allowed to perform an operation on a node if its identity has an appropriate permission assigned to this node. Only if there is no exo:permissions property assigned to the node itself are the permissions of the node's ancestors used.

In the following example, you see a node named "Politics" which contains two nodes named "Cats" and "Dogs".

<Politics  jcr:primaryType="nt:unstructured" jcr:mixinTypes="exo:owneable exo:datetime exo:privilegeable" exo:dateCreated="2009-10-08T18:02:43.687+02:00" 
exo:dateModified="2009-10-08T18:02:43.703+02:00" 
exo:owner="root" 
exo:permissions="any_x0020_read *:/platform/administrators_x0020_read *:/platform/administrators_x0020_add_node *:/platform/administrators_x0020_set_property *:/platform/administrators_x0020_remove">

<Cats jcr:primaryType="exo:article" 
jcr:mixinTypes="exo:owneable" 
exo:owner="marry"  
exo:summary="The_x0020_secret_x0020_power_x0020_of_x0020_cats_x0020_influences_x0020_the_x0020_leaders_x0020_of_x0020_the_x0020_world." 
exo:text="" exo:title="Cats_x0020_rule_x0020_the_x0020_world" />

<Dogs jcr:primaryType="exo:article" 
jcr:mixinTypes="exo:privilegeable" 
exo:permissions="manager:/organization_x0020_read manager:/organization_x0020_set_property"
exo:summary="Dogs" 
exo:text="" exo:title="Dogs_x0020_are_x0020_friends" />

</Politics>

The "Politics" node is exo:owneable and exo:privilegeable. It has both an exo:owner property and an exo:permissions property. There is an exo:owner="root" property, so the user root is the owner. In the exo:permissions value, you can see the ACL, which is a list of access controls. In this example, the group *:/platform/administrators has all rights on this node (remember that "*" means any kind of membership), and any means that any user also has the read permission.

As you see in the jcr:mixinTypes property, the "Cats" node is exo:owneable, and there is an exo:owner="marry" property so that the user marry is the owner. The "Cats" node is not exo:privilegeable and has no exo:permissions. In this case, the inheritance mechanism applies: the "Cats" node has the same permissions as the "Politics" node.

Finally, the "Dogs" node is also a child node of "Politics". This node is not exo:owneable and inherits the owner of the "Politics" node (which is the user root). However, "Dogs" is exo:privilegeable and therefore has its own exo:permissions. That means only the users having a "manager" role in the group "/organization" and the user "root" have the rights to access this node.
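The either/or, ALLOW-only inheritance at work in the "Politics"/"Cats"/"Dogs" example can be illustrated with a small in-memory model. This is a hypothetical illustration, not the eXo JCR implementation: a node's own permission list, if present, fully replaces the inherited one; otherwise the nearest ancestor carrying exo:permissions applies.

```java
// Hypothetical model of the either/or ACL inheritance strategy.
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class AclInheritanceExample {

    static final class AclNode {
        final AclNode parent;
        final Set<String> permissions; // null = node carries no exo:permissions

        AclNode(AclNode parent, Set<String> permissions) {
            this.parent = parent;
            this.permissions = permissions;
        }

        // Walk up until a node with its own ACL is found; its list applies as-is.
        Set<String> effectivePermissions() {
            for (AclNode node = this; node != null; node = node.parent) {
                if (node.permissions != null) {
                    return node.permissions;
                }
            }
            return Collections.emptySet();
        }
    }

    static Set<String> perms(String... entries) {
        return new HashSet<String>(Arrays.asList(entries));
    }

    public static void main(String[] args) {
        AclNode politics = new AclNode(null,
            perms("any read", "*:/platform/administrators read"));
        AclNode cats = new AclNode(politics, null); // no own ACL: inherits
        AclNode dogs = new AclNode(politics,
            perms("manager:/organization read"));   // own ACL replaces inherited

        System.out.println(cats.effectivePermissions().contains("any read"));
        System.out.println(dogs.effectivePermissions().contains("any read"));
    }
}
```

Note how "Dogs" loses "any read" entirely: its own ACL replaces the inherited list instead of being merged with it, which is exactly the either/or behavior described above.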

This section describes how permission is validated for different JCR actions.

An extended Access Control system consists of:

The Link Producer Service is a simple service which generates an .lnk file compatible with the Microsoft link file format. It is an extension of the REST Framework library and is included in the WebDav service. On dispatching a GET request, the service generates the content of an .lnk file, which points to a JCR resource via WebDav.

The Link Producer has a simple configuration, as described below:

<component>
  <key>org.exoplatform.services.jcr.webdav.lnkproducer.LnkProducer</key>
  <type>org.exoplatform.services.jcr.webdav.lnkproducer.LnkProducer</type>
</component>

When using JCR, a resource can be addressed by a WebDav reference (href) like http://host:port/rest/jcr/repository/workspace/somenode/somefile.extention. To produce a link for this resource, the link servlet must be called with an href like http://localhost:8080/rest/lnkproducer/openit.lnk?path=/repository/workspace/somenode/somefile.extention

Please note that, when using the portal mode, the REST servlet is available via a reference (href) like http://localhost:8080/portal/rest/...

The name of the .lnk file can be anything, but for best compatibility it should be the same as the name of the JCR resource.

Here is a step-by-step sample of a use case of the link producer. First, type a valid reference to the resource, using the link producer, in your browser's address field:

Internet Explorer will show a dialog window asking whether to open the file or to save it. Click on the Open button.

On Windows, an .lnk file will be downloaded and opened with the application registered to open the files pointed to by .lnk files. In case of a .doc file, Windows opens Microsoft Office Word, which will try to open the remote file (test0000.doc). It may be necessary to enter a USERNAME and PASSWORD.

Next, you will be able to edit the file in Microsoft Word.

The Link Producer is necessary for opening/editing and then saving the remote files in Microsoft Office Word, without any further updates.

The Link Producer can also be referenced from an HTML page. If the page contains code like

<a href="http://localhost:8080/rest/lnkproducer/openit.lnk?path=/repository/workspace/somenode/somefile.extention">somefile.extention</a>

the file "somefile.extention" will open directly.

Processing binary large objects (BLOBs) is very important in eXo JCR, so this section focuses on explaining how to do it.

In both cases, a developer can set/update a binary Property via Node.setProperty(String, InputStream) or Property.setValue(InputStream), as described in the JSR-170 specification. There is also a setter that takes a ready Value object (obtained from ValueFactory.createValue(InputStream)).

An example of specification usage:

// Set the property value with given stream content. 
Property binProp = node.setProperty("BinData", myDataStream);
// Get the property value stream. 
InputStream binStream = binProp.getStream();

// You may change the binary property value with a new Stream, all data will be replaced
// with the content from the new stream.
Property updatedBinProp = node.setProperty("BinData", newDataStream);
// Or update an obtained property
updatedBinProp.setValue(newDataStream);
// Or update using a Value object obtained from the session's ValueFactory
updatedBinProp.setValue(session.getValueFactory().createValue(newDataStream));
// Get the updated property value stream. 
InputStream newStream = updatedBinProp.getStream();

But if you need to update the property sequentially with partial content, you have no choice but to edit the whole data stream outside the repository and push it back each time. With really large data, the application will block and throughput will drop significantly. The JCR stream setters will also check constraints and perform common validation each time.

The eXo JCR extension provides a feature for partial writing of binary values without frequent session-level calls. The main idea is to use a value object obtained from the property as the storage of the property content while writing/reading at runtime.

According to the JSR-170 specification, the Value interface provides the state of a property that can't be changed (edited). The eXo JCR core provides the ReadableBinaryValue and EditableBinaryValue interfaces, which extend JCR Value and allow the user to partially read and change the value content.

A ReadableBinaryValue can be cast from any value, i.e. String, Binary, Date, etc.

// get the property value of type PropertyType.STRING 
ReadableBinaryValue extValue = (ReadableBinaryValue) node.getProperty("LargeText").getValue();
// read 200 bytes to a destStream from the position 1024 in the value content
OutputStream destStream = new FileOutputStream("MyTextFile.txt");
extValue.read(destStream, 200, 1024);

But EditableBinaryValue can be applied only to properties of type PropertyType.BINARY. In other cases, a cast to EditableBinaryValue will fail.

After the value has been edited, the EditableBinaryValue can be applied to the property using the standard setters (Property.setValue(Value), Property.setValues(Value[]), Node.setProperty(String, Value), etc.). Only after the EditableBinaryValue has been set on the property can it be obtained in this session via the getters (Property.getValue(), Node.getProperty(String), etc.).

The user can obtain an EditableBinaryValue instance, fill it with data interactively (or in any other way suited to the task), and set the value back to the property once the content is complete.

// get the property value for PropertyType.BINARY Property
EditableBinaryValue extValue = (EditableBinaryValue) node.getProperty("BinData").getValue();

// update length bytes from the stream starting from the position 1024 in existing Value data
extValue.update(dataInputStream, dataLength, 1024);

// apply the edited EditableBinaryValue to the Property
node.setProperty("BinData", extValue);

// save the Property to persistence
node.save();

A practical example of iterative usage. In this example, the value is updated with data from a sequence of streams; after the update is done, the value is applied to the property and becomes visible within the session.

// update length bytes from the stream starting from the particular 
// position in the existing Value data
int dpos = 1024;
while (source.dataAvailable()) {
  extValue.update(source.getInputStream(), source.getLength(), dpos);
  dpos = dpos + source.getLength();
}

// apply the edited EditableBinaryValue to the Property
node.setProperty("BinData", extValue);

The Workspace Data Container (container) serves the Repository Workspace persistent storage. The WorkspacePersistentDataManager (data manager) uses the container to perform CRUD operations on the persistent storage. Access to the storage in the data manager is implemented via a storage connection obtained from the container (an implementation of the WorkspaceDataContainer interface). Each connection represents a transaction on the storage. The Storage Connection (connection) should be an implementation of WorkspaceStorageConnection.

WorkspaceStorageConnection openConnection() throws RepositoryException;
WorkspaceStorageConnection openConnection(boolean readOnly) throws RepositoryException;
WorkspaceStorageConnection reuseConnection(WorkspaceStorageConnection original) throws RepositoryException;
boolean isCheckSNSNewConnection();

Container initialization is based only on the configuration. After the container has been created, it is not possible to change its parameters. The configuration consists of the implementation class, a set of properties and the Value Storages configuration.

Connection creation and reuse should be thread-safe operations. The connection provides CRUD operation support on the storage.

ItemData getItemData(String identifier) throws RepositoryException, IllegalStateException;
ItemData getItemData(NodeData parentData, QPathEntry name, ItemType itemType) throws RepositoryException, IllegalStateException;
List<NodeData> getChildNodesData(NodeData parent) throws RepositoryException, IllegalStateException;
List<NodeData> getChildNodesData(NodeData parent, List<QPathEntryFilter> pattern) throws RepositoryException, IllegalStateException;
List<PropertyData> getChildPropertiesData(NodeData parent) throws RepositoryException, IllegalStateException;
List<PropertyData> getChildPropertiesData(NodeData parent, List<QPathEntryFilter> pattern) throws RepositoryException, IllegalStateException;

This method is specially dedicated to non-content modification operations (e.g. item deletes).

List<PropertyData> listChildPropertiesData(NodeData parent) throws RepositoryException, IllegalStateException;

For the REFERENCE type: returns the properties referencing the Node with the given nodeIdentifier. See javax.jcr.Node.getReferences() for more.

List<PropertyData> getReferencesData(String nodeIdentifier) throws RepositoryException, IllegalStateException, UnsupportedOperationException;
boolean getChildNodesDataByPage(NodeData parent, int fromOrderNum, int toOrderNum, List<NodeData> childs) throws RepositoryException;
int getChildNodesCount(NodeData parent) throws RepositoryException;
int getLastOrderNumber(NodeData parent) throws RepositoryException;
void add(NodeData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void add(PropertyData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void update(NodeData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void update(PropertyData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void rename(NodeData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void delete(NodeData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void delete(PropertyData data) throws RepositoryException,UnsupportedOperationException,InvalidItemStateException,IllegalStateException;
void prepare() throws IllegalStateException, RepositoryException;
void commit() throws IllegalStateException, RepositoryException;
void rollback() throws IllegalStateException, RepositoryException;

All methods throw IllegalStateException if the connection is closed, UnsupportedOperationException if the method is not supported (e.g. in a JCR Level 1 implementation), and RepositoryException if errors occur during preparation, validation or persistence.
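The commit-or-rollback pattern implied by this contract can be sketched in a self-contained way. The Stub* types below are simplified stand-ins for the real eXo interfaces (which operate on NodeData/PropertyData rather than strings), defined inline so the example compiles on its own; save() mirrors the way a data manager drives one connection per change set.

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectionUsageSketch {
    // Simplified stand-in for org.exoplatform.services.jcr.storage.WorkspaceStorageConnection
    interface StubConnection {
        void add(String nodeData); // add(NodeData) in the real contract
        void commit();
        void rollback();
    }

    // Records pending changes and persists them only on commit
    static class RecordingConnection implements StubConnection {
        final List<String> persisted = new ArrayList<>();
        private final List<String> pending = new ArrayList<>();
        public void add(String nodeData) { pending.add(nodeData); }
        public void commit() { persisted.addAll(pending); pending.clear(); }
        public void rollback() { pending.clear(); }
    }

    // Every change set goes through one connection: committed atomically,
    // or rolled back if any operation fails.
    static void save(StubConnection conn, List<String> changes) {
        try {
            for (String change : changes) {
                conn.add(change);
            }
            conn.commit();
        } catch (RuntimeException e) {
            conn.rollback();
            throw e;
        }
    }

    public static void main(String[] args) {
        RecordingConnection conn = new RecordingConnection();
        save(conn, List.of("node-1", "node-2"));
        System.out.println(conn.persisted); // [node-1, node-2]
    }
}
```

The real connection additionally supports prepare() for two-phase commit; the sketch keeps only the single-phase shape.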

A provider implementation should use the ValueStoragePlugin abstract class as a base for all storage implementations. The plugin provides support for the provider implementation methods. The following plugin methods should be implemented:

public abstract void init(Properties props, ValueDataResourceHolder resources) throws RepositoryConfigurationException, IOException;
public abstract ValueIOChannel openIOChannel() throws IOException;
public boolean isSame(String storageId);
public void checkConsistency(WorkspaceStorageConnection dataConnection);
public ValueStorageURLConnection createURLConnection(URL u) throws IOException;
public URL createURL(String resourceId) throws MalformedURLException;
protected ValueStorageURLStreamHandler getURLStreamHandler();

To implement a Workspace data container, you need to do the following:

  1. Read a bit about the contract.

  2. Start a new implementation project pom.xml with org.exoplatform.jcr as parent. This is not required, but will ease the development.

  3. Update the sources of JCR Core and read the JavaDoc of the org.exoplatform.services.jcr.storage.WorkspaceDataContainer and org.exoplatform.services.jcr.storage.WorkspaceStorageConnection interfaces. They are the main part of the implementation.

  4. Look at the org.exoplatform.services.jcr.impl.dataflow.persistent.WorkspacePersistentDataManager source code and check how the data manager uses the container and its connections (see the save() method).

  5. Create a WorkspaceStorageConnection dummy implementation class. It's a free-form class, but to stay close to eXo JCR, check how the JDBC implementation is done (org.exoplatform.services.jcr.impl.storage.jdbc.JDBCStorageConnection). Take into account the usage of ValueStoragePluginProvider in both implementations. Value storage is a useful option for production versions, but leave it to the end of the implementation work.

  6. Create the connection implementation unit tests to practice TDD (optional, but it brings many benefits to the process).

  7. Implement the CRUD operations, starting from read and proceeding to write, etc. Test the methods by using external ways of reading/writing data in your backend.

  8. When all methods of the connection are done, start on the WorkspaceDataContainer. The container class is very simple; it's just a factory for the connections.

  9. Take care with the container's reuseConnection(WorkspaceStorageConnection) method logic. For some backends, it can be the same as openConnection(), but for others it's important to reuse the physical backend connection, e.g. to stay in the same transaction - see the JDBC container.

  10. It's almost ready to use in data manager. Start another test and go on.

When the container is ready to run as JCR persistence storage (e.g. for this level of testing), it should be configured in the Repository configuration.

Assuming that our new implementation class name is org.project.jcr.impl.storage.MyWorkspaceDataContainer.

  <repository-service default-repository="repository">
  <repositories>
    <repository name="repository" system-workspace="production" default-workspace="production">
      .............
      <workspaces>
        <workspace name="production">
          <container class="org.project.jcr.impl.storage.MyWorkspaceDataContainer">
            <properties>
              <property name="propertyName1" value="propertyValue1" />
              <property name="propertyName2" value="propertyValue2" />
              .......
              <property name="propertyNameN" value="propertyValueN" />
            </properties>
            <value-storages>
              .......
            </value-storages>
          </container>

The container can be configured by using its set of properties.

It is a special service for data removal from the database. This section shortly describes the working principles of DBCleanerTool under all databases.

This section will show you possible ways of improving JCR performance.

It is intended for GateIn administrators and those who want to use JCR features.

For performance, it is better to have the load balancer, the DB server and the shared NFS on different computers. If for some reason you see that one node gets more load than the others, you can decrease this load using the load value in the configuration of your load balancer.

JGroups configuration

It's recommended to use the JGroups shared transport. It is configured by default in eXo JCR and offers higher performance in a cluster, using fewer network connections. If there are two or more clusters in your network, please check that they use different ports and different cluster names.

Write performance in cluster

The eXo JCR implementation uses the Lucene indexing engine to provide search capabilities. But Lucene brings some limitations for write operations: it can perform indexing only in one thread. That's why write performance in a cluster is not higher than in a non-clustered environment. Data is indexed on the coordinator node, so increasing the write load on the cluster may lead to a ReplicationTimeout exception. It occurs because writing threads queue up in the indexer, and under high load the timeout for replication to the coordinator will be exceeded.

Taking this into consideration, it is recommended to increase the replTimeout value in the cache configurations in case of a high write load.

Replication timeout

Some operations may take too much time, so if you get a TimeoutException, try to increase the replication timeout:

      <clustering mode="replication">
        ...
        <sync replTimeout="20000"/>
      </clustering>
   

The value is set in milliseconds.

Another thing that you should check if you get a TimeoutException is the JGroups thread pools (for normal messages and out-of-band messages): if one of them is exhausted, you can easily get this kind of exception. To know whether they are exhausted, get a thread dump using the jstack command on your process id and check whether you have some unused JGroups threads. By default, the names of the threads start with "Incoming-" followed by the thread index for the normal-message pool, and with "OOB-" followed by the thread index for the out-of-band-message pool. See below an example of an unused thread for normal messages and for out-of-band messages:

"OOB-1,shared=JCR-cluster" prio=5 tid=7fbfd00a5000 nid=0x117bb0000 waiting on condition [117baf000]
   java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  <77aae61e8> (a java.util.concurrent.SynchronousQueue$TransferStack)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:422)
 at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
 at java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:857)
 at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
 at java.lang.Thread.run(Thread.java:680)


"Incoming-1,shared=JCR-cluster" prio=5 tid=7fbfce5d3800 nid=0x119ecd000 waiting on condition [119ecc000]
   java.lang.Thread.State: WAITING (parking)
 at sun.misc.Unsafe.park(Native Method)
 - parking to wait for  <77abd6568> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
 at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
 at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
 at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
 at java.lang.Thread.run(Thread.java:680)
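A quick way to count the threads of each pool is to filter the dump. In the sketch below, the threads.txt content is a shortened stand-in for a real jstack dump, so the commands are self-contained; in practice you would produce the file with jstack on your process id.

```shell
# Normally the dump comes from: jstack <pid> > threads.txt
# Here we use a shortened sample standing in for a real dump.
cat > threads.txt <<'EOF'
"OOB-1,shared=JCR-cluster" prio=5 tid=7fbfd00a5000 waiting on condition
"Incoming-1,shared=JCR-cluster" prio=5 tid=7fbfce5d3800 waiting on condition
EOF

# Count the threads of each pool present in the dump
grep -c '^"Incoming-' threads.txt   # normal-message pool
grep -c '^"OOB-' threads.txt        # out-of-band pool
```

Comparing these counts against the configured max_threads values tells you how close each pool is to exhaustion.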

If you realize that at least one of the thread pools is exhausted, you will need to increase the size of the corresponding thread pool in your JGroups configuration. Also make sure that you have allocated enough RAM to support the total number of added threads. For each thread pool, you can define the min and max number of threads, the keep-alive time, the max queue size and the rejection policy. Thread pools are configured at the transport protocol level; in the next example the transport is UDP, so the settings are at the UDP configuration level:

    <UDP
         singleton_name="JCR-cluster" 
         ...

         thread_pool.enabled="true"
         thread_pool.min_threads="6"
         thread_pool.max_threads="24"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="3"
         oob_thread_pool.max_threads="24"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="Run"/>
2.1. ExoContainer info
2.1.1. Container hierarchy
2.2. Service Configuration for Beginners
2.2.1. Requirements
2.2.2. Services
2.2.3. Configuration File
2.2.4. Execution Modes
2.2.5. Containers
2.2.6. Configuration Retrieval
2.2.7. Service instantiation
2.2.8. Miscellaneous
2.2.9. Further Reading
2.3. Service Configuration in Detail
2.3.1. Requirements
2.3.2. Sample Service
2.3.3. Parameters
2.3.4. External Plugin
2.3.5. Import
2.3.6. System properties
2.3.7. Understanding the prefixes supported by the configuration manager
2.4. Container Configuration
2.4.1. Kernel configuration namespace
2.4.2. Understanding how configuration files are loaded
2.4.3. eXo Container hot reloading
2.4.4. System property configuration
2.4.5. Variable Syntaxes
2.4.6. Runtime configuration profiles
2.4.7. Component request life cycle
2.4.8. Thread Context Holder
2.5. Inversion Of Control
2.5.1. How
2.5.2. Injection
2.5.3. Side effects
2.6. Services Wiring
2.6.1. Portal Instance
2.6.2. Introduction to the XML schema of the configuration.xml file
2.6.3. Configuration retrieval and log of this retrieval
2.7. Component Plugin Priority
2.8. Understanding the ListenerService
2.8.1. What is the ListenerService ?
2.8.2. How does it work?
2.8.3. How to configure a listener?
2.8.4. Concrete Example
2.9. Initial Context Binder
2.9.1. API
2.10. Job Scheduler Service
2.10.1. Where is Job Scheduler Service used in eXo Products?
2.10.2. How does Job Scheduler work?
2.10.3. Reference
2.11. eXo Cache
2.11.1. Basic concepts
2.11.2. Advanced concepts
2.11.3. eXo Cache extension
2.11.4. eXo Cache based on Infinispan
2.11.5. eXo Cache based on Spymemcached
2.12. TransactionService
2.12.1. Existing TransactionService implementations
2.13. The data source provider
2.13.1. Configuration
2.14. JNDI naming
2.14.1. Prerequisites
2.14.2. How it works
2.14.3. Configuration examples
2.14.4. Recommendations for Application Developers
2.15. Logs configuration
2.15.1. Logs configuration initializer
2.15.2. Configuration examples
2.15.3. Tips and Troubleshooting
2.16. Manageability
2.16.1. Managed framework API
2.16.2. JMX Management View
2.16.3. Example
2.17. RPC Service
2.17.1. Configuration
2.17.2. The SingleMethodCallCommand
2.18. Extensibility
2.19. Dependency Injection (JSR 330)
2.19.1. Specificities and Limitations
2.19.2. Configuration
2.19.3. Scope Management
2.20. Container Integration
2.20.1. Google Guice
2.20.2. Spring
2.20.3. Weld
2.21. Auto Registration
2.22. Multi-threaded Kernel
2.23. HikariCP connection pool

eXo Kernel is the basis of all eXo Platform products and modules. Any component available in eXo Platform is managed by the eXo Container, our micro container responsible for gluing the services through dependency injection.

Therefore, each product is composed of a set of services and plugins registered to the container and configured by XML configuration files.

The Kernel module also contains a set of very low level services.

This section provides you the basic knowledge about modes, services and containers. You will find out where the service configuration files should be placed, and you will also see the overriding mechanism of configurations.

Finally, you will understand how the container creates the services one after the other and what Inversion of Control really means.

Related documents

Nearly everything could be considered a service! To get a better idea, let's look into the exo-tomcat/lib folder where you find all deployed jar files.

For example you find services for databases, caching, ldap and ftp:

Of course, there are many more services; in fact, a lot of these jar files are services. To find out, you have to open the jar file and look into its /conf or /conf/portal directory. Only if there is a file named configuration.xml are you sure to have found a service.

Interface - Implementation. It's important to understand that the interface and the implementation of a service are separated. That is a good concept to reduce dependencies on specific implementations. This concept is well known from JDBC: if you use standard JDBC (= interface), you can connect any database (= implementation) to your application. In a similar way, any service in eXo is defined by a Java interface and may have many different implementations. The service implementation is then injected by a container into the application.

Singleton. Each service has to be implemented as a singleton, which means that each service is created only once - in one single instance.

Service = Component. You always read about services, and you may imagine a service as a large application which does big things, but that's not true: a service can be just a little component that reads or transforms a document. Therefore, the term component is often used instead of service - so bear in mind: a service and a component can safely be considered to be the same thing.

The jar file of a service should contain a default configuration; you find this configuration in the configuration.xml file which comes with the jar. A configuration file can specify several services, just as there can be several services in one jar file.

For example open the exo.kernel.component.cache-2.0.5.jar file and inside this jar open /conf/portal/configuration.xml. You will see:

 
<component>
  <key>org.exoplatform.services.cache.CacheService</key>
  <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type>
  ...

Here you will note that a service is specified between the <component> tags. Each service has a key, which defines the kind of service. As you may imagine, the content of the <key> tag matches the qualified Java interface name (org.exoplatform.services.cache.CacheService) of the service. The specific implementation class of the CacheService is defined in the <type> tag.

Parameters. You have already opened some configuration files and seen that there are more than just <key> and <type> tags. You can provide your service with init parameters. The parameters can be simple parameters, properties, or object-params. There are also plugins, which are special because the container calls the setters of your service in order to inject the plugin into your service (called setter injection); see Service Configuration in Detail. In general, your service is free to use init parameters; they are not required.
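As a sketch of what such a configuration can look like, the fragment below adds a value-param to the component declaration shown earlier. The structure follows the kernel's configuration format, but the parameter name cache.size is purely illustrative, not an actual CacheService parameter:

```xml
<component>
  <key>org.exoplatform.services.cache.CacheService</key>
  <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type>
  <init-params>
    <!-- hypothetical parameter, shown only to illustrate the syntax -->
    <value-param>
      <name>cache.size</name>
      <value>1000</value>
    </value-param>
  </init-params>
</component>
```

The service implementation receives these values through an InitParams argument in its constructor.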

If you ever need to create your own service, the minimum is to create an interface, a class implementing it, and a constructor for your class - that's all. You should also put your class and the interface in a jar file and add a default configuration file.
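The minimal recipe above can be sketched in plain Java. The HelloService names below are hypothetical, chosen only to illustrate the interface/implementation split that the container's <key>/<type> tags map onto:

```java
// Minimal hand-made service: an interface, an implementation and a constructor.
public class HelloServiceDemo {

    // The service interface - its qualified name would go in the <key> tag.
    interface HelloService {
        String sayHello(String name);
    }

    // The implementation - its qualified name would go in the <type> tag.
    static class HelloServiceImpl implements HelloService {
        public HelloServiceImpl() {
            // zero-argument constructor: the container can instantiate it directly
        }
        public String sayHello(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) {
        // The container would normally perform this wiring from configuration.xml.
        HelloService service = new HelloServiceImpl();
        System.out.println(service.sayHello("eXo")); // Hello eXo
    }
}
```

In a real deployment you would never instantiate the implementation yourself; the container does it based on the configuration file packaged in the jar.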

In order to access a service, you need to use a Container. Just open https://github.com/exoplatform/kernel/tree/stable/2.5.x/exo.kernel.container/src/main/java/org/exoplatform/container.

Among the classes you see in this directory, you will only be interested in these three container types:

  • RootContainer: This is a base container. This container plays an important role during startup, but you should not use it directly.

  • PortalContainer: Created at the startup of the portal web application (in the init() method of the PortalController servlet)

  • StandaloneContainer: A context independent eXo Container. The StandaloneContainer is also used for unit tests.

Use only one container. Even if there are several container types, you always use exactly one. The RootContainer is never used directly, and whether you use the PortalContainer or the StandaloneContainer depends on the execution mode. You may ask how to find out the execution mode in your application and how to manage these two modes. It's easy: you don't have to worry about it, because the ExoContainerContext class provides a static method that allows you to get the right container from anywhere (see the info box).

PicoContainer. All containers inherit from the ExoContainer class, which itself inherits from PicoContainer. PicoContainer is a framework which allows eXo to apply the IoC (Inversion of Control) principles. The precise implementation of any service is unknown at compile time: various implementations can be used; eXo supplies different implementations, but they may also be delivered by other vendors. The decision about which service to use at runtime is made in the configuration files.

These configuration files are read by the container, which adds all services to a list, or more exactly a Java HashTable. It's completely correct to suppose that the configuration.xml you already saw plays an important role. But there are more places where the configuration for a service can be defined, as you will see in the next section.

Note

In your Java code, you have to use

ExoContainer myContainer = ExoContainerContext.getCurrentContainer();

in order to access the current container. It doesn't greatly matter to your application whether the current container is a PortalContainer or a StandaloneContainer. Once you have your container, you may access any service registered in this container using

MyService myService = (MyService) myContainer.getComponentInstance(MyService.class);

You easily realize that MyService.class is the name of the service interface.

The configuration found inside the jar file is considered the default configuration. If you want to override this default configuration, you can do it in different places outside the jar. When the container finds several configurations for the same service, the configuration found later completely replaces the one found previously. Let's call this the configuration override mechanism.

As both containers, PortalContainer and StandaloneContainer, depend on the RootContainer, we will start by looking into this one.

The retrieval sequence in short:

HashTable. The RootContainer creates a Java HashTable which contains key-value pairs for the services. The qualified interface name of each service is used as the key of the hashtable. Hopefully you still remember that the <key> tag of the configuration file contains the interface name. The value of each hashtable pair is an object that contains the service configuration (yes, this means the whole structure between the <component> tags of your configuration.xml file).

The RootContainer runs over all jar files found in exo-tomcat/lib and looks for a configuration file at /conf/configuration.xml; the services configured in this file are added to the hashtable. That way, at the end of this process, the default configurations for all services are stored in the hashtable.

If you wish to provide your own configurations for one or several services, you can do it in a general configuration file that has to be placed at exo-tomcat/exo-conf/configuration.xml. Do not search for such a file on your computer - you won't find one, because this option is not used in the default installation. Here again the same rule applies: The posterior configuration replaces the previous one.
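The override mechanism can be modeled with nothing but the HashTable described above. This is a toy model, not the kernel's actual code: the qualified interface name is the key, and a later registration completely replaces an earlier one.

```java
import java.util.Hashtable;

public class OverrideDemo {
    public static void main(String[] args) {
        // key = qualified interface name, value = service configuration
        Hashtable<String, String> services = new Hashtable<>();
        // default configuration found in a jar at /conf/configuration.xml
        services.put("org.exoplatform.services.cache.CacheService", "default config from jar");
        // configuration found later in exo-conf/configuration.xml replaces it completely
        services.put("org.exoplatform.services.cache.CacheService", "override from exo-conf");
        System.out.println(services.get("org.exoplatform.services.cache.CacheService"));
        // -> override from exo-conf
    }
}
```

The string values stand in for the configuration objects the real container stores; only the last-one-wins behavior matters here.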

The further configuration retrieval depends on the container type.

The PortalContainer takes the hashtable filled by the RootContainer and continues to look in some more places. Here you get the opportunity to replace RootContainer configurations by those which are specific to your portal. Again, the configurations are overridden whenever necessary.

In short, PortalContainer configurations are retrieved in the following lookup sequence:

You see, here the /conf/portal/configuration.xml file of each jar enters the game; these files are searched first. Next, there is nearly always a configuration.xml in the portal.war file (or in the portal webapp folder); you find this file at /WEB-INF/conf/configuration.xml. If you open it, you will find a lot of import statements that point to other configuration files in the same portal.war (or portal webapp).

Multiple Portals. Be aware that you might set up several different portals ("admin", "mexico", etc.), and each of these portals will use a different PortalContainer. Each of these PortalContainers can be configured separately. As of GateIn, you can also provide configurations from outside the jars, wars or webapps: put a configuration file in exo-tomcat/exo-conf/portal/$portal_name/configuration.xml, where $portal_name is the name of the portal you want to configure. But normally you only have one portal, which is called "portal", so you use exo-tomcat/exo-conf/portal/portal/configuration.xml.

In the same way as the PortalContainer, the StandaloneContainer takes over the configuration of the RootContainer. After that, our configuration gets a little trickier, because standalone containers can be initialized using a URL that links to an external configuration. As you will probably never need a standalone configuration, you can safely skip the rest of this section.

After taking over the RootContainer's configuration, there are three cases which depend on the URL initialization:

As you have already learned, the services are all singletons, so the container creates only one single instance of each service. The services are created by calling their constructors (called constructor injection). If there are only zero-argument constructors (public Foo(){}), there are no problems to be expected. That's easy.

But now look at OrganizationServiceImpl.java

This JDBC implementation of BaseOrganizationService interface has only one constructor:

public OrganizationServiceImpl(ListenerService listenerService, DatabaseService dbService);

You see, this service depends on two other services. In order to call this constructor, the container first needs a ListenerService and a DatabaseService. Therefore, these services must be instantiated before BaseOrganizationService, because BaseOrganizationService depends on them.

For this purpose, the container first looks at the constructors of all services and creates a matrix of service dependencies in order to create the services in the proper order. If for any reason there are interdependencies or circular dependencies, you will get a Java exception. In this way, the dependencies are injected by the container.
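The ordering constraint above can be sketched in plain Java. The simplified service classes below are hypothetical stand-ins (the real kernel resolves the actual services from the configuration); the point is that a service's constructor declares its dependencies, so those must exist first.

```java
public class InjectionDemo {
    static class ListenerService { }
    static class DatabaseService { }

    // Simplified stand-in for OrganizationServiceImpl: its only constructor
    // tells the container which services it depends on.
    static class OrganizationService {
        final ListenerService listeners;
        final DatabaseService db;
        OrganizationService(ListenerService listeners, DatabaseService db) {
            this.listeners = listeners;
            this.db = db;
        }
    }

    public static void main(String[] args) {
        // Instantiation order derived from the dependency matrix:
        // the two dependencies are created before the dependent service.
        ListenerService listeners = new ListenerService();
        DatabaseService db = new DatabaseService();
        OrganizationService org = new OrganizationService(listeners, db);
        System.out.println(org.listeners == listeners && org.db == db); // true
    }
}
```

Because OrganizationService has no zero-argument constructor, a container that could not supply both dependencies would fail to create it, which is exactly the circular-dependency failure mode mentioned above.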

Note

What happens if a service has more than one constructor? The container always tries the constructor with the maximum number of arguments first; if this is not possible, the container continues step by step with constructors that have fewer arguments, until arriving at the zero-argument constructor (if there is any).

Retrospection. Do you remember your last project, where you had some small components and several larger services? How was it organized? Some services had their own configuration files, others had static values in the source code. Most components were probably tightly coupled to the main application, or you called static methods whenever you needed a service in your Java class. Presumably you even copied the source code of an earlier project in order to adapt the implementation to your needs. In short:

New Approach. You have seen that eXo uses the Inversion of Control (IoC) pattern which means that the control of the services is given to an independent outside entity, in this case a container. Now the container takes care of everything:

Dependency Injection. You also saw two types of dependency injections:

Do you feel like an expert now? Not yet? Take a deeper look and read the Services Wiring article. Having read so much about configuration, you may wonder what the XML schema of the configuration file looks like.

If you wish to see examples of service configurations, you should study the Core, where you will find descriptions of some of eXo's core services. Finally, you might wish to read more about PicoContainer.

This section shows you how to set up a sample service with some configurations and how to access the configuration parameters. The later sections describe all the details of the configuration file (parameters, object-params, plugins, imports, and more) and also show how to access the configuration values. You may use this document as a reference, but you can also use it as a tutorial and read it from beginning to end.

Related documents

You should have read and understood Service Configuration for Beginners. Obviously, you should know Java and XML. We are working with examples that were created for teaching purposes only, and you will see extracts from the eXo Products default installation. When reading this article, do not forget that the terms service and component are interchangeable in eXo Products.

Your service has a configuration file, but you may wonder how the service can gain access to its configuration. Imagine that you are asked to implement two different calculation methods: fast and exact.

You create one init parameter containing the calculation method. For the exact method, you wish to configure more details for the service. Let's enhance the sample service configuration file:

  <component>
    <key>com.laverdad.services.ArticleStatsService</key>
    <type>com.laverdad.services.ArticleStatsServiceImpl</type>
    <init-params>
      <value-param>
        <name>calc-method</name>
        <description>calculation method: fast, exact</description>
        <value>fast</value>
      </value-param>
      <properties-param>
        <name>details-for-exact-method</name>
        <description>details for exact phrase counting</description>
        <property name="language" value="English" />
        <property name="variant" value="us" />
      </properties-param>
    </init-params>
  </component>

Now let's see how our service can read this configuration. The implementation of the calcSentences() method serves just as a simple example. It's up to your imagination to implement the exact method.

public class ArticleStatsServiceImpl implements ArticleStatsService {

  private String calcMethod = "fast";
  private String language = "French";
  private String variant = "France";

  public ArticleStatsServiceImpl(InitParams initParams) {
    super();
    calcMethod = initParams.getValueParam("calc-method").getValue();
    PropertiesParam detailsForExactMethod = initParams.getPropertiesParam("details-for-exact-method");
    if (detailsForExactMethod != null) {
      language = detailsForExactMethod.getProperty("language");
      variant = detailsForExactMethod.getProperty("variant");
    }
  }

  public int calcSentences(String article) {
    // compare strings with equals(), not ==
    if ("fast".equals(calcMethod)) {
      // just count the number of periods "."
      int res = 0;
      int period = article.indexOf('.');
      while (period != -1) {
        res++;
        article = article.substring(period + 1);
        period = article.indexOf('.');
      }
      return res;
    }
    throw new RuntimeException("Not implemented");
  }
}

You see that you just have to declare a parameter of type org.exoplatform.container.xml.InitParams in your constructor. The container provides an InitParams object that corresponds to the XML tree of init-params.

As you want to follow the principle of Inversion of Control, you must not access the service directly. You need a Container to access the service.

With this command you get your current container:

ExoContainer myContainer = ExoContainerContext.getCurrentContainer();

This might be a PortalContainer or a StandaloneContainer, depending on the execution mode in which you are running your application.

Whenever you need one of the services that you have configured use the method:

  • myContainer.getComponentInstance(class)

In our case:

  • ArticleStatsService statsService = (ArticleStatsService) myContainer.getComponentInstance(ArticleStatsService.class);

Recapitulation:

package com.laverdad.common;

import org.exoplatform.container.ExoContainer;
import org.exoplatform.container.ExoContainerContext;
import com.laverdad.services.*;

public class Statistics {

  public int makeStatistics(String articleText) {
    ExoContainer myContainer = ExoContainerContext.getCurrentContainer();
    ArticleStatsService statsService = (ArticleStatsService)
        myContainer.getComponentInstance(ArticleStatsService.class);    
    int numberOfSentences = statsService.calcSentences(articleText);
    return numberOfSentences;
  }
  
  public static void main( String args[]) {
   Statistics stats = new Statistics();
   String newText = "This is a normal text. The method only counts the number of periods. "
   + "You can implement your own implementation with a more exact counting. "
   + "Let`s make a last sentence.";
  System.out.println("Number of sentences: " + stats.makeStatistics(newText));
  }
}

If you test this sample in standalone mode, you need to put all JARs of eXo Kernel on your build path; furthermore, PicoContainer is needed.

Let's have a look at the configuration of the LDAPService. It's not important to know LDAP; we only discuss the parameters.

<component>
  <key>org.exoplatform.services.ldap.LDAPService</key>
  <type>org.exoplatform.services.ldap.impl.LDAPServiceImpl</type>
  <init-params>
    <object-param>
      <name>ldap.config</name>
      <description>Default ldap config</description>
      <object type="org.exoplatform.services.ldap.impl.LDAPConnectionConfig">
        <field name="providerURL"><string>ldaps://10.0.0.3:636</string></field>
        <field name="rootdn"><string>CN=Administrator,CN=Users,DC=exoplatform,DC=org</string></field>
        <field name="password"><string>exo</string></field>
        <field name="version"><string>3</string></field>
        <field name="minConnection"><int>5</int></field>
        <field name="maxConnection"><int>10</int></field>
        <field name="referralMode"><string>ignore</string></field>
        <field name="serverName"><string>active.directory</string></field>
      </object>
    </object-param>
  </init-params>
</component>

Here you see an object-param being used to pass the parameters inside an object (actually a Java bean). It consists of a name, a description, and exactly one object. The object defines the type and a number of fields.

Here you see how the service accesses the object:

package org.exoplatform.services.ldap.impl;

public class LDAPServiceImpl implements LDAPService {
...
  public LDAPServiceImpl(InitParams params) {
    LDAPConnectionConfig config = (LDAPConnectionConfig) params.getObjectParam("ldap.config")
                                                               .getObject();
...

The passed object is LDAPConnectionConfig, which is a classic Java bean. It contains all the fields and also the appropriate getters and setters (not listed here). You can also provide default values. The container creates a new instance of your bean and calls all setters whose values are configured in the configuration file.

package org.exoplatform.services.ldap.impl;

public class LDAPConnectionConfig {
  private String providerURL        = "ldap://127.0.0.1:389";
  private String rootdn;
  private String password;                                  
  private String version;                                   
  private String authenticationType = "simple";
  private String serverName         = "default";
  private int    minConnection;
  private int    maxConnection;
  private String referralMode       = "follow";
...
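The accessors omitted above follow the standard bean pattern. A hypothetical reconstruction for a single field (this is not the actual eXo source, just an illustration of what the container expects to find):

```java
// Hypothetical sketch of the omitted bean accessors: plain getters/setters
// that the container calls while populating the configured fields.
public class ConnectionConfigSketch {
  // field initializer acts as the default value when nothing is configured
  private String providerURL = "ldap://127.0.0.1:389";

  public String getProviderURL() {
    return providerURL;
  }

  public void setProviderURL(String providerURL) {
    this.providerURL = providerURL;
  }
}
```

The container resolves each <field name="providerURL"> entry to the matching setProviderURL(String) call, which is why the field names in the configuration must match the bean's property names.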

You see that the types (String, int) of the fields in the configuration correspond to those of the bean. A short glance at the kernel_1_0.xsd file lets us discover more simple types:

Have a look at this type test XML file: object.xml.

You can also use Java collections to configure your service. For an example, let's open the database-organization-configuration.xml file. This file defines a default user organization (users, groups, memberships/roles) of your portal. It uses component-plugins, which are explained later. You will see that object-param is used again.

There are two collections. The first is an ArrayList, which contains only one value here, but there could be more. The only value is an object that defines the fields of the NewUserConfig$JoinGroup bean.

The second collection is a HashSet, that is, a set of strings.

    <component-plugin>
      <name>new.user.event.listener</name>
      <set-method>addListenerPlugin</set-method>
      <type>org.exoplatform.services.organization.impl.NewUserEventListener</type>
      <description>this listener assign group and membership to a new created user</description>
      <init-params>
        <object-param>
          <name>configuration</name>
          <description>description</description>
          <object type="org.exoplatform.services.organization.impl.NewUserConfig">
            <field  name="group">
              <collection type="java.util.ArrayList">
                <value>
                  <object type="org.exoplatform.services.organization.impl.NewUserConfig$JoinGroup">
                    <field  name="groupId"><string>/platform/users</string></field>
                    <field  name="membership"><string>member</string></field>
                  </object>
                </value>               
              </collection>
            </field>
            <field  name="ignoredUser">
              <collection type="java.util.HashSet">
                <value><string>root</string></value>
                <value><string>john</string></value>
                <value><string>marry</string></value>
                <value><string>demo</string></value>
                <value><string>james</string></value>
              </collection>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>

Let's look at the org.exoplatform.services.organization.impl.NewUserConfig bean:

public class NewUserConfig {
  private List    role;
  private List    group;
  private HashSet ignoredUser;

  ...

  public void setIgnoredUser(String user) {
    ignoredUser.add(user);

  ...

  static public class JoinGroup {
    public String  groupId;
    public String  membership;
  ...
}

You see the values of the HashSet are set one by one by the container, and it's the responsibility of the bean to add these values to its HashSet.

The JoinGroup object is just an inner class and implements a bean of its own. It can be accessed like any other inner class using NewUserConfig.JoinGroup.

The External Plugin allows you to add configuration on the fly.

As you have carefully read Service Configuration for Beginners, you know that normally a newer configuration always replaces previous configurations. An external plugin allows you to add configuration without replacing previous configurations.

That can be interesting if you adapt a service configuration for your project-specific needs (country, language, branch, project, etc.).

Let's have a look at the configuration of the TaxonomyPlugin of the CategoriesService:

<external-component-plugins>
  <target-component>org.exoplatform.services.cms.categories.CategoriesService</target-component>
  <component-plugin>
    <name>predefinedTaxonomyPlugin</name>
    <set-method>addTaxonomyPlugin</set-method>
    <type>org.exoplatform.services.cms.categories.impl.TaxonomyPlugin</type>
    <init-params>
      <value-param>
        <name>autoCreateInNewRepository</name>
        <value>true</value>
      </value-param>
      <value-param>
        <name>repository</name>
        <value>repository</value>
      </value-param>
      <object-param>
        <name>taxonomy.configuration</name>
        <description>configuration predefined taxonomies to inject in jcr</description>
        <object type="org.exoplatform.services.cms.categories.impl.TaxonomyConfig">
          <field name="taxonomies">
            <collection type="java.util.ArrayList">
              <!-- cms taxonomy -->
              <value>
                <object type="org.exoplatform.services.cms.categories.impl.TaxonomyConfig$Taxonomy">
                  <field name="name"><string>cmsTaxonomy</string></field>
                  <field name="path"><string>/cms</string></field>
                </object>
              </value>
              <value>
                <object type="org.exoplatform.services.cms.categories.impl.TaxonomyConfig$Taxonomy">
                  <field name="name"><string>newsTaxonomy</string></field>
                  <field name="path"><string>/cms/news</string></field>
                </object>
              </value>
            </collection>
          </field>
        </object>
      </object-param>
    </init-params>
  </component-plugin>
</external-component-plugins>

The <target-component> defines the service for which the plugin is defined. The configuration is injected by the container using a method that is defined in <set-method>. The method has exactly one argument of the type org.exoplatform.services.cms.categories.impl.TaxonomyPlugin:

  • addTaxonomyPlugin(org.exoplatform.services.cms.categories.impl.TaxonomyPlugin plugin)

The content of <init-params> corresponds to the structure of the TaxonomyPlugin object.

Note

You can configure the CategoriesService component using addTaxonomyPlugin as often as you wish, and you can also call addTaxonomyPlugin from different configuration files. The method addTaxonomyPlugin is then called several times; everything else depends on the implementation of the method.
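The accumulation behavior described in the note can be sketched as follows (an illustrative sketch, not the actual CategoriesService source; the plugin type is simplified to Object):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: the method named in <set-method> is simply invoked once
// per matching component-plugin, so an implementation that accumulates its
// plugins in a list sees every registration from every configuration file.
public class PluginTargetSketch {
  private final List<Object> taxonomyPlugins = new ArrayList<>();

  // Called by the container once for each configured component-plugin
  public void addTaxonomyPlugin(Object plugin) {
    taxonomyPlugins.add(plugin);
  }

  public int pluginCount() {
    return taxonomyPlugins.size();
  }
}
```

Whether repeated calls accumulate (as here) or overwrite earlier state is entirely up to the implementation of the set-method, which is the point of the note above.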

The configuration manager allows you to find files using URLs with special prefixes, which are described in detail below.

GateIn uses PicoContainer, which implements the Inversion of Control (IoC) design pattern. All eXo containers inherit from a PicoContainer. There are mainly two eXo containers used, each of them can provide one or several services. Each container service is delivered in a JAR file. This JAR file may contain a default configuration. The use of default configurations is recommended and most services provide it.

When a Pico Container searches for services and its configurations, each configurable service may be reconfigured to override default values or set additional parameters. If the service is configured in two or more places the configuration override mechanism will be used.

Confused? - You might be interested in the Service Configuration for Beginners section to understand the basics.


The container performs the following steps to retrieve the eXo Container configuration, depending on the container type.

After processing all the configurations available in the system, the container will initialize and start each service in order of the dependency injection (DI).

The user/developer should be careful when configuring the same service in different configuration files. It's recommended to configure a service in its own JAR only, or, in the case of a portal configuration, to strictly reconfigure the services in portal WAR files or in an external configuration.

There are services that can (or should) be configured more than once. This depends on the business logic of the service. A service may initialize the same resource (shared with other services) or may add a particular object to a set of objects (also shared with other services). In the first case, it's critical which configuration comes last, i.e. whose configuration will be used. In the second case, it doesn't matter which comes first and which comes last (if the parameter objects are independent).

Since eXo JCR 1.12, we have added a set of new features designed to extend portal applications such as GateIn.

Now we can define precisely a portal container and its dependencies and settings thanks to the PortalContainerDefinition, which currently contains the name of the portal container, the name of the rest context, the name of the realm, the web application dependencies ordered by loading priority (i.e. the first dependency is loaded first, and so on) and the settings.

To be able to define a PortalContainerDefinition, we first of all need to ensure that a PortalContainerConfig has been defined at the RootContainer level; see an example below:

  <component>
    <!-- The full qualified name of the PortalContainerConfig -->
    <type>org.exoplatform.container.definition.PortalContainerConfig</type>
    <init-params>
      <!-- The name of the default portal container -->
      <value-param>
        <name>default.portal.container</name>
        <value>myPortal</value>
      </value-param>
      <!-- The name of the default rest ServletContext -->
      <value-param>
        <name>default.rest.context</name>
        <value>myRest</value>
      </value-param>
      <!-- The name of the default realm -->
      <value-param>
        <name>default.realm.name</name>
        <value>my-exo-domain</value>
      </value-param>
     <!-- Indicates whether the unregistered webapps have to be ignored -->
     <value-param>
        <name>ignore.unregistered.webapp</name>
        <value>true</value>
     </value-param>
      <!-- The default portal container definition -->
      <!-- It can be used to avoid duplicating configuration -->
      <object-param>
        <name>default.portal.definition</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinition">
          <!-- All the dependencies of the portal container ordered by loading priority -->
          <field name="dependencies">
            <collection type="java.util.ArrayList">
              <value>
                <string>foo</string>
              </value>
              <value>
                <string>foo2</string>
              </value>
              <value>
                <string>foo3</string>
              </value>
            </collection>
          </field>        
          <!-- A map of settings tied to the default portal container -->
          <field name="settings">
            <map type="java.util.HashMap">
              <entry>
                <key>
                  <string>foo5</string>
                </key>
                <value>
                  <string>value</string>
                </value>
              </entry>
              <entry>
                <key>
                  <string>string</string>
                </key>
                <value>
                  <string>value0</string>
                </value>
              </entry>
              <entry>
                <key>
                  <string>int</string>
                </key>
                <value>
                  <int>100</int>
                </value>
              </entry>
            </map>
          </field>
          <!-- The path to the external properties file -->
          <field name="externalSettingsPath">
            <string>classpath:/org/exoplatform/container/definition/default-settings.properties</string>
          </field>
        </object>
      </object-param>
    </init-params>
  </component>

Note

All the values of the parameters marked with a (*) can be defined thanks to system properties, like any value in configuration files, but also thanks to variables loaded by the PropertyConfigurator. For example, in GateIn by default, these would be all the variables defined in the file configuration.properties.

A new PortalContainerDefinition can be defined at the RootContainer level thanks to an external plugin, see an example below:

  <external-component-plugins>
    <!-- The full qualified name of the PortalContainerConfig -->
    <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Add PortalContainer Definitions</name>
      <!-- The name of the method to call on the PortalContainerConfig in order to register the PortalContainerDefinitions -->
      <set-method>registerPlugin</set-method>
      <!-- The full qualified name of the PortalContainerDefinitionPlugin -->
      <type>org.exoplatform.container.definition.PortalContainerDefinitionPlugin</type>
      <init-params>
        <object-param>
          <name>portal</name>
          <object type="org.exoplatform.container.definition.PortalContainerDefinition">
            <!-- The name of the portal container -->
            <field name="name">
              <string>myPortal</string>
            </field>
            <!-- The name of the context name of the rest web application -->
            <field name="restContextName">
              <string>myRest</string>
            </field>
            <!-- The name of the realm -->
            <field name="realmName">
              <string>my-domain</string>
            </field>
            <!-- All the dependencies of the portal container ordered by loading priority -->
            <field name="dependencies">
              <collection type="java.util.ArrayList">
                <value>
                  <string>foo</string>
                </value>
                <value>
                  <string>foo2</string>
                </value>
                <value>
                  <string>foo3</string>
                </value>
              </collection>
            </field>
            <!-- A map of settings tied to the portal container -->
            <field name="settings">
              <map type="java.util.HashMap">
                <entry>
                  <key>
                    <string>foo</string>
                  </key>
                  <value>
                    <string>value</string>
                  </value>
                </entry>
                <entry>
                  <key>
                    <string>int</string>
                  </key>
                  <value>
                    <int>10</int>
                  </value>
                </entry>
                <entry>
                  <key>
                    <string>long</string>
                  </key>
                  <value>
                    <long>10</long>
                  </value>
                </entry>
                <entry>
                  <key>
                    <string>double</string>
                  </key>
                  <value>
                    <double>10</double>
                  </value>
                </entry>
                <entry>
                  <key>
                    <string>boolean</string>
                  </key>
                  <value>
                    <boolean>true</boolean>
                  </value>
                </entry>                                
              </map>
            </field>            
            <!-- The path to the external properties file -->
            <field name="externalSettingsPath">
              <string>classpath:/org/exoplatform/container/definition/settings.properties</string>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>

Table 2.2. Descriptions of the fields of a PortalContainerDefinition when it is used to define a new portal container

name (*): The name of the portal container. This field is mandatory.
restContextName (*): The context name of the rest web application. This field is optional. The default value will be defined at the PortalContainerConfig level.
realmName (*): The name of the realm. This field is optional. The default value will be defined at the PortalContainerConfig level.
dependencies: All the dependencies of the portal container, ordered by loading priority. This field is optional. The default value will be defined at the PortalContainerConfig level. The dependencies are in fact the list of the context names of the web applications on which the portal container depends. The dependency order is crucial since it is interpreted the same way by several components of the platform: all those components consider the first element in the list less important than the second element, and so on. It is currently used to:
  • Know the loading order of all the dependencies.

  • If we have several PortalContainerConfigOwner

    • The ServletContext of all the PortalContainerConfigOwner will be unified; if we use the unified ServletContext (PortalContainer.getPortalContext()) to get a resource, it will first try to get the resource in the ServletContext of the most important PortalContainerConfigOwner (i.e. the last in the dependency list) and, if it cannot find it, it will try the second most important PortalContainerConfigOwner, and so on.

    • The ClassLoader of all the PortalContainerConfigOwner will be unified; if we use the unified ClassLoader (PortalContainer.getPortalClassLoader()) to get a resource, it will first try to get the resource in the ClassLoader of the most important PortalContainerConfigOwner (i.e. the last in the dependency list) and, if it cannot find it, it will try the second most important PortalContainerConfigOwner, and so on.

settings: A java.util.Map of internal parameters that we would like to tie to the portal container. These parameters can have any type of value. This field is optional. If some internal settings are defined at the PortalContainerConfig level, the two maps of settings will be merged; if a setting with the same name is defined in both maps, the value defined at the PortalContainerDefinition level is kept.
externalSettingsPath: The path of the external properties file to load as default settings for the portal container. This field is optional. If some external settings are defined at the PortalContainerConfig level, the two maps of settings will be merged; if a setting with the same name is defined in both maps, the value defined at the PortalContainerDefinition level is kept. The external properties file can be either of type "properties" or of type "xml". The path will be interpreted as follows:
  1. If the path doesn't contain any prefix of type "classpath:", "jar:", "war:" or "file:", we assume that the file could be externalized, so we apply the following rules:

    1. If a file exists at ${exo-conf-dir}/portal/${portalContainerName}/${externalSettingsPath}, we load this file.

    2. If no file exists at the previous path, we assume that the path can be interpreted by the ConfigurationManager.

  2. If the path contains a prefix, we assume that the path can be interpreted by the ConfigurationManager.
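The path-resolution rules above can be sketched as follows. This is a simplification, not the actual kernel code; the prefix list and file layout follow the description in the table, and anything not resolved to an externalized file is left for the ConfigurationManager to interpret.

```java
import java.io.File;

// Simplified sketch of the externalSettingsPath resolution rules:
// ${exo-conf-dir} and the portal container name are passed in as strings.
public class SettingsPathResolver {

  static boolean hasKnownPrefix(String path) {
    return path.startsWith("classpath:") || path.startsWith("jar:")
        || path.startsWith("war:") || path.startsWith("file:");
  }

  // Returns the externalized file path if such a file exists, otherwise the
  // original path, which is then interpreted by the ConfigurationManager.
  static String resolve(String path, String exoConfDir, String portalName) {
    if (!hasKnownPrefix(path)) {
      File externalized = new File(exoConfDir + "/portal/" + portalName + "/" + path);
      if (externalized.exists()) {
        return externalized.getAbsolutePath();
      }
    }
    return path;
  }
}
```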


Table 2.3. Descriptions of the fields of a PortalContainerDefinition when it is used to define the default portal container

name (*): The name of the portal container. This field is optional. The default portal name will be:
  1. If this field is not empty, then the default value will be the value of this field.

  2. If this field is empty and the value of the parameter default.portal.container is not empty, then the default value will be the value of the parameter.

  3. If this field and the parameter default.portal.container are both empty, the default value will be "portal".

restContextName (*): The context name of the rest web application. This field is optional. The default value will be:
  1. If this field is not empty, then the default value will be the value of this field.

  2. If this field is empty and the value of the parameter default.rest.context is not empty, then the default value will be the value of the parameter.

  3. If this field and the parameter default.rest.context are both empty, the default value will be "rest".

realmName (*): The name of the realm. This field is optional. The default value will be:
  1. If this field is not empty, then the default value will be the value of this field.

  2. If this field is empty and the value of the parameter default.realm.name is not empty, then the default value will be the value of the parameter.

  3. If this field and the parameter default.realm.name are both empty, the default value will be "exo-domain".

dependencies: All the dependencies of the portal container, ordered by loading priority. This field is optional. If this field has a non-empty value, it will be the default list of dependencies.
settings: A java.util.Map of internal parameters that we would like to tie to the default portal container. These parameters can have any type of value. This field is optional.
externalSettingsPath: The path of the external properties file to load as default settings for the default portal container. This field is optional. The external properties file can be either of type "properties" or of type "xml". The path will be interpreted as follows:
  1. If the path doesn't contain any prefix of type "classpath:", "jar:", "war:" or "file:", we assume that the file could be externalized, so we apply the following rules:

    1. If a file exists at ${exo-conf-dir}/portal/${externalSettingsPath}, we load this file.

    2. If no file exists at the previous path, we assume that the path can be interpreted by the ConfigurationManager.

  2. If the path contains a prefix, we assume that the path can be interpreted by the ConfigurationManager.


Note

All the values of the parameters marked with a (*) can be defined thanks to system properties, like any value in configuration files, but also thanks to variables loaded by the PropertyConfigurator. For example, in GateIn by default, these would be all the variables defined in the file configuration.properties.

Internal and external settings are both optional, but if we give a non-empty value for both, the application will merge the settings. If the same setting name exists in both, we apply the following rules:

  1. The value of the external setting is null, we ignore the value.

  2. The value of the external setting is not null and the value of the internal setting is null, the final value will be the external setting value that is of type String.

  3. Both values are not null, we will have to convert the external setting value into the target type which is the type of the internal setting value, thanks to the static method valueOf(String), the following sub-rules are then applied:

    1. The method cannot be found, the final value will be the external setting value that is of type String.

    2. The method can be found and the external setting value is an empty String, we ignore the external setting value.

    3. The method can be found and the external setting value is not an empty String but the method call fails, we ignore the external setting value.

    4. The method can be found and the external setting value is not an empty String and the method call succeeds, the final value will be the external setting value that is of type of the internal setting value.
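
The merge rules above can be sketched as follows (a minimal illustration; the class and method names are not part of the actual kernel API):

```java
import java.lang.reflect.Method;

public class SettingsMerger {
    // Merges an external (String) setting over an internal setting,
    // following the documented rules. Returns the final value.
    static Object merge(Object internal, String external) {
        if (external == null) {
            return internal;                       // rule 1: ignore null external value
        }
        if (internal == null) {
            return external;                       // rule 2: keep the String value
        }
        try {
            // rule 3: look up the static valueOf(String) of the internal type
            Method valueOf = internal.getClass().getMethod("valueOf", String.class);
            if (external.isEmpty()) {
                return internal;                   // rule 3.2: ignore empty String
            }
            return valueOf.invoke(null, external); // rule 3.4: the converted value
        } catch (NoSuchMethodException e) {
            return external;                       // rule 3.1: fall back to the String
        } catch (Exception e) {
            return internal;                       // rule 3.3: conversion failed, ignore
        }
    }
}
```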

We can inject the values of the portal container settings into the portal container configuration files thanks to variables whose names start with "portal.container.": to get the value of a setting called "foo", just use the syntax ${portal.container.foo}. You can also use internal variables such as name, rest and realm, which resolve respectively to the name of the portal container, of its rest servlet context and of its realm.

You can find below an example of how to use the variables:

<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <component>
    <type>org.exoplatform.container.TestPortalContainer$MyComponent</type>
    <init-params>
      <!-- The name of the portal container -->
      <value-param>
        <name>portal</name>
        <value>${portal.container.name}</value>
      </value-param>
      <!-- The name of the rest ServletContext -->
      <value-param>
        <name>rest</name>
        <value>${portal.container.rest}</value>
      </value-param>
      <!-- The name of the realm -->
      <value-param>
        <name>realm</name>
        <value>${portal.container.realm}</value>
      </value-param>
      <value-param>
        <name>foo</name>
        <value>${portal.container.foo}</value>
      </value-param>
      <value-param>
        <name>before foo after</name>
        <value>before ${portal.container.foo} after</value>
      </value-param>
    </init-params>
  </component>
</configuration>

In the properties file corresponding to the external settings, you can reuse previously defined variables (from the external or internal settings) to compose a new variable. In this case, the prefix "portal.container." is not needed; see an example below:

my-var1=value 1
my-var2=value 2
complex-value=${my-var1}-${my-var2}

In the external and internal settings, you can also create variables based on the values of System properties. The System properties can be defined either at launch time or through the PropertyConfigurator (see the next section for more details). See an example below:

temp-dir=${java.io.tmpdir}${file.separator}my-temp

However, for the internal settings, you can use System properties only to define settings of type java.lang.String.

It can also be very useful to define a generic variable in the settings of the default portal container; the value of this variable will change according to the current portal container. See an example below:

my-generic-var=value of the portal container "${name}"

If this variable is defined at the default portal container level, its value for a portal container called "foo" will be: value of the portal container "foo".

It is possible to use component-plugin elements in order to dynamically change a PortalContainerDefinition. In the example below, we add the dependency foo to the default portal container and to the portal containers called foo1 and foo2:

<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Change PortalContainer Definitions</name>
    <!-- The name of the method to call on the PortalContainerConfig in order to register the changes on the PortalContainerDefinitions -->
    <set-method>registerChangePlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionChangePlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionChangePlugin</type>
    <init-params>
      <value-param>
        <name>apply.default</name>
        <value>true</value>
      </value-param>
      <values-param>
        <name>apply.specific</name>
        <value>foo1</value>
        <value>foo2</value>
      </values-param>  
      <object-param>
        <name>change</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependencies">
          <!-- The list of name of the dependencies to add -->
          <field name="dependencies">
            <collection type="java.util.ArrayList">
              <value>
                <string>foo</string>
              </value>
            </collection>
          </field>
        </object>
      </object-param>     
    </init-params>
  </component-plugin>
</external-component-plugins>

Note

All the values of the parameters marked with a (*) can be defined through System properties, like any value in the configuration files, but also through variables loaded by the PropertyConfigurator. For example, in GateIn by default, these would be all the variables defined in the file configuration.properties.

To identify the portal containers to which the changes have to be applied, we use the following algorithm:

  1. If the parameter apply.all has been set to true, the corresponding changes will be applied to all the portal containers, and the other parameters will be ignored.

  2. If the parameter apply.default has been set to true and the parameter apply.specific is null, the corresponding changes will be applied to the default portal container only.

  3. If the parameter apply.default has been set to true and the parameter apply.specific is not null, the corresponding changes will be applied to the default portal container and the given list of specific portal containers.

  4. If the parameter apply.default has been set to false or has not been set, and the parameter apply.specific is null, the corresponding changes will be applied to the default portal container only.

  5. If the parameter apply.default has been set to false or has not been set, and the parameter apply.specific is not null, the corresponding changes will be applied to the given list of specific portal containers.
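
The selection algorithm above can be sketched as follows (names such as resolve and defaultName are illustrative, not the actual kernel API):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ChangeScope {
    // Resolves the set of portal containers a change plugin applies to,
    // following the five rules above. defaultName is the name of the
    // default portal container.
    static Set<String> resolve(boolean applyAll, boolean applyDefault,
                               List<String> applySpecific,
                               List<String> allContainers, String defaultName) {
        if (applyAll) {
            return new LinkedHashSet<>(allContainers); // rule 1
        }
        Set<String> result = new LinkedHashSet<>();
        if (applySpecific == null) {
            result.add(defaultName);                   // rules 2 and 4: default only
        } else {
            if (applyDefault) {
                result.add(defaultName);               // rule 3: default included
            }
            result.addAll(applySpecific);              // rules 3 and 5: specific list
        }
        return result;
    }
}
```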

Each modification applied to a PortalContainerDefinition must be a class of type PortalContainerDefinitionChange. The product proposes several implementations out of the box, which are described in the next sub-sections.

This modification adds a list of dependencies at the end of the list of dependencies defined in the PortalContainerDefinition. Its fully qualified name is org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependencies.


See the example below, which adds foo at the end of the dependency list of the default portal container:

<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Change PortalContainer Definitions</name>
    <!-- The name of the method to call on the PortalContainerConfig in order to register the changes on the PortalContainerDefinitions -->
    <set-method>registerChangePlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionChangePlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionChangePlugin</type>
    <init-params>
      <value-param>
        <name>apply.default</name>
        <value>true</value>
      </value-param>
      <object-param>
        <name>change</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependencies">
          <!-- The list of name of the dependencies to add -->
          <field name="dependencies">
            <collection type="java.util.ArrayList">
              <value>
                <string>foo</string>
              </value>
            </collection>
          </field>
        </object>
      </object-param>     
    </init-params>
  </component-plugin>
</external-component-plugins>

This modification adds a list of dependencies before a given target dependency defined in the list of dependencies of the PortalContainerDefinition. Its fully qualified name is org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependenciesBefore.


See the example below, which adds foo before foo2 in the dependency list of the default portal container:

<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Change PortalContainer Definitions</name>
    <!-- The name of the method to call on the PortalContainerConfig in order to register the changes on the PortalContainerDefinitions -->
    <set-method>registerChangePlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionChangePlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionChangePlugin</type>
    <init-params>
      <value-param>
        <name>apply.default</name>
        <value>true</value>
      </value-param>
      <object-param>
        <name>change</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependenciesBefore">
          <!-- The list of name of the dependencies to add -->
          <field name="dependencies">
            <collection type="java.util.ArrayList">
              <value>
                <string>foo</string>
              </value>
            </collection>
          </field>
          <!-- The name of the target dependency -->
          <field name="target">
            <string>foo2</string>
          </field>
        </object>
      </object-param>     
    </init-params>
  </component-plugin>
</external-component-plugins>

This modification adds a list of dependencies after a given target dependency defined in the list of dependencies of the PortalContainerDefinition. Its fully qualified name is org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependenciesAfter.


See the example below, which adds foo after foo2 in the dependency list of the default portal container:

<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Change PortalContainer Definitions</name>
    <!-- The name of the method to call on the PortalContainerConfig in order to register the changes on the PortalContainerDefinitions -->
    <set-method>registerChangePlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionChangePlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionChangePlugin</type>
    <init-params>
      <value-param>
        <name>apply.default</name>
        <value>true</value>
      </value-param>
      <object-param>
        <name>change</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinitionChange$AddDependenciesAfter">
          <!-- The list of name of the dependencies to add -->
          <field name="dependencies">
            <collection type="java.util.ArrayList">
              <value>
                <string>foo</string>
              </value>
            </collection>
          </field>
          <!-- The name of the target dependency -->
          <field name="target">
            <string>foo2</string>
          </field>
        </object>
      </object-param>     
    </init-params>
  </component-plugin>
</external-component-plugins>

This modification adds new settings to a PortalContainerDefinition. Its fully qualified name is org.exoplatform.container.definition.PortalContainerDefinitionChange$AddSettings.


See the example below, which adds the settings string and stringX to the settings of the default portal container:

<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Change PortalContainer Definitions</name>
    <!-- The name of the method to call on the PortalContainerConfig in order to register the changes on the PortalContainerDefinitions -->
    <set-method>registerChangePlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionChangePlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionChangePlugin</type>
    <init-params>
      <value-param>
        <name>apply.default</name>
        <value>true</value>
      </value-param>
      <object-param>
        <name>change</name>
        <object type="org.exoplatform.container.definition.PortalContainerDefinitionChange$AddSettings">
          <!-- The settings to add to the portal containers -->
          <field name="settings">
            <map type="java.util.HashMap">
              <entry>
                <key>
                  <string>string</string>
                </key>
                <value>
                  <string>value1</string>
                </value>
              </entry>
              <entry>
                <key>
                  <string>stringX</string>
                </key>
                <value>
                  <string>value1</string>
                </value>
              </entry>
            </map>
          </field>
        </object>
      </object-param>     
    </init-params>
  </component-plugin>
</external-component-plugins>

It is possible to use component-plugin elements in order to dynamically disable one or several portal containers. In the example below, we disable the portal container named foo:

<external-component-plugins>
  <!-- The full qualified name of the PortalContainerConfig -->
  <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
  <component-plugin>
    <!-- The name of the plugin -->
    <name>Disable a PortalContainer</name>
    <!-- The name of the method to call on the PortalContainerConfig in order to register the changes on the PortalContainerDefinitions -->
    <set-method>registerDisablePlugin</set-method>
    <!-- The full qualified name of the PortalContainerDefinitionDisablePlugin -->
    <type>org.exoplatform.container.definition.PortalContainerDefinitionDisablePlugin</type>
    <init-params>
      <!-- The list of the name of the portal containers to disable -->
      <values-param>
        <name>names</name>
        <value>foo</value>
      </values-param>
    </init-params>
  </component-plugin>
</external-component-plugins>

Note

All the values of the parameters marked with a (*) can be defined through System properties, like any value in the configuration files, but also through variables loaded by the PropertyConfigurator. For example, in GateIn by default, these would be all the variables defined in the file configuration.properties.

To prevent any access to a web application corresponding to a PortalContainer that has been disabled, make sure that the following HTTP filter (or a subclass of it) has been added to your web.xml in first position, as below:

<filter>
  <filter-name>PortalContainerFilter</filter-name>
  <filter-class>org.exoplatform.container.web.PortalContainerFilter</filter-class>
</filter>  

<filter-mapping>
  <filter-name>PortalContainerFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Note

It is only possible to disable a portal container when at least one PortalContainerDefinition has been registered.

A new property configurator service has been developed to take care of configuring system properties from the inline kernel configuration or from specified property files.

The service is scoped at the root container level because it is used by all the services in the different portal containers of the application runtime.

The kernel configuration is able to handle configuration profiles at runtime (as opposed to packaging time).

Profiles are configured in the configuration files of the eXo kernel.

A configuration element is profiles-capable when it carries a profiles element.

The container package is responsible for building a hierarchy of containers. Each service is then registered in one container or another according to the XML configuration file it is defined in. It is important to understand that there can be several PortalContainer instances that are all children of the RootContainer.

The behavior of the hierarchy is similar to that of a class loader: when you look up a service that depends on another one, the container looks for it in the current container and, if it cannot be found there, in the parent container. That way, you can load all the reusable business logic components in the same container (here the RootContainer) and differentiate the service implementation from one portal instance to another by just loading different service implementations in two sibling PortalContainers.
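
This class-loader-like lookup can be sketched as follows (SimpleContainer is an illustration, not the actual kernel container API):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the container hierarchy: a lookup first checks the
// current container, then delegates to the parent, like a class loader.
class SimpleContainer {
    private final SimpleContainer parent;
    private final Map<Class<?>, Object> services = new HashMap<>();

    SimpleContainer(SimpleContainer parent) {
        this.parent = parent;
    }

    <T> void register(Class<T> key, T implementation) {
        services.put(key, implementation);
    }

    Object lookup(Class<?> key) {
        Object service = services.get(key);
        if (service == null && parent != null) {
            return parent.lookup(key); // fall back to the parent container
        }
        return service;
    }
}
```

Two sibling portal containers sharing a root can then register different implementations of the same service interface, and lookups in each sibling resolve independently.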

Therefore, if you look at the PortalContainer as a service repository for all the business logic in a portal instance, you understand why several PortalContainers allow you to manage several portals (each one deployed as a single war) in the same server by just changing XML configuration files.

The default configuration XML files are packaged in the service jar. There are three configuration.xml files, one for each container type. In that XML file, we define the list of services and their init parameters that will be loaded in the corresponding container.

After deployment, you will find the configuration.xml file in webapps/portal/WEB-INF/conf. Use component registration tags: the key tag defines the interface and the type tag defines the implementation. Note that the key tag is not mandatory, but it improves performance.

<!-- Portlet container hooks -->
  <component>
    <key>org.exoplatform.services.portletcontainer.persistence.PortletPreferencesPersister</key>
    <type>org.exoplatform.services.portal.impl.PortletPreferencesPersisterImpl</type>
  </component>

Register plugins that can act as listeners, or as external plugins that bundle plugin classes in other jar modules. The usual example is the Hibernate service, to which we can add hbm mapping files even if they are deployed in another Maven artifact.

<external-component-plugins>
  <target-component>org.exoplatform.services.database.HibernateService</target-component>
  <component-plugin> 
    <name>add.hibernate.mapping</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.database.impl.AddHibernateMappingPlugin</type>
    <init-params>
      <values-param>
        <name>hibernate.mapping</name>
        <value>org/exoplatform/services/portal/impl/PortalConfigData.hbm.xml</value>
        <value>org/exoplatform/services/portal/impl/PageData.hbm.xml</value>
        <value>org/exoplatform/services/portal/impl/NodeNavigationData.hbm.xml</value>
      </values-param>        
    </init-params>
  </component-plugin>
</external-component-plugins>

In that sample, we target the HibernateService and call its addPlugin() method with an argument of type AddHibernateMappingPlugin. That object will first have been filled with the init parameters.

Therefore, it is possible to define services that will be able to receive plugins without implementing any framework interface.

Another example is the case of listeners, as in the following code where a listener is added to the OrganizationService and called each time a new user is created:

<external-component-plugins>
  <target-component>org.exoplatform.services.organization.OrganizationService</target-component>
  <component-plugin>
    <name>portal.new.user.event.listener</name>
    <set-method>addListenerPlugin</set-method>
    <type>org.exoplatform.services.portal.impl.PortalUserEventListenerImpl</type>
    <description>this listener create the portal configuration for the new user</description>
    <init-params>
      <object-param>
        <name>configuration</name>
        <description>description</description>
        <object type="org.exoplatform.services.portal.impl.NewPortalConfig">
          <field  name="predefinedUser">
            <collection type="java.util.HashSet">
              <value><string>admin</string></value>
              <value><string>exo</string></value>
              <value><string>company</string></value>
              <value><string>community</string></value>
              <value><string>portal</string></value>
              <value><string>exotest</string></value>
            </collection>
          </field>
          <field  name="templateUser"><string>template</string></field>
          <field  name="templateLocation"><string>war:/conf/users</string></field>
        </object>
      </object-param>
    </init-params>
  </component-plugin>
...

In the previous XML configuration, we refer to the organization service and call its addListenerPlugin method with an object of type PortalUserEventListenerImpl. Each time a new user is created (apart from the predefined ones in the list above), the methods of the PortalUserEventListenerImpl will be called by the service.

As you can see, there are several types of init parameters, from a simple value param which binds a key with a value to a more complex object mapping that fills a JavaBean with the info defined in the XML.

Many other examples exist such as for the Scheduler Service where you can add a job with a simple XML configuration or the JCR Service where you can add a NodeType from your own configuration.xml file.

When the RootContainer is starting, the configuration retrieval looks for configuration files in each jar available from the classpath at the path /conf/portal/configuration.xml, and in each war at the path /WEB-INF/conf/configuration.xml. These configurations are added to a set. If a component was configured in a previous jar and the current jar contains a new configuration of that component, the latest one (from the current jar) replaces the previous configuration.

After processing all the configurations available in the system, the container initializes and starts each component in the order determined by dependency injection (DI).

So, in general, the user/developer should be careful when configuring the same component in different configuration files. It is recommended to configure a service in its own jar only or, in the case of a portal configuration, to strictly reconfigure the component in the portal files.

But there are components that can (or should) be configured more than once. This depends on the business logic of the component. A component may initialize the same resource (shared with other players) or may add a particular object to a set of objects (also shared with other players). In the first case, it is critical who comes last, i.e. whose configuration will be used. In the second case, it doesn't matter who is first and who is last (if the parameter objects are independent).

In case of problems with the configuration of a component, it is important to know which jar/war it comes from. For that purpose, the user/developer can set the JVM system property org.exoplatform.container.configuration.debug on the command line:

java -Dorg.exoplatform.container.configuration.debug ...

With that property set, the container configuration manager reports the configuration-adding process to the standard output (System.out).

   ......
   Add configuration jar:file:/D:/Projects/eXo/dev/exo-working/exo-tomcat/lib/exo.kernel.container-trunk.jar!/conf/portal/configuration.xml
   Add configuration jar:file:/D:/Projects/eXo/dev/exo-working/exo-tomcat/lib/exo.kernel.component.cache-trunk.jar!/conf/portal/configuration.xml
   Add configuration jndi:/localhost/portal/WEB-INF/conf/configuration.xml
        import jndi:/localhost/portal/WEB-INF/conf/common/common-configuration.xml
        import jndi:/localhost/portal/WEB-INF/conf/database/database-configuration.xml
        import jndi:/localhost/portal/WEB-INF/conf/ecm/jcr-component-plugins-configuration.xml
        import jndi:/localhost/portal/WEB-INF/conf/jcr/jcr-configuration.xml 
   ......

Since kernel version 2.0.6, it is possible to set up the loading order of ComponentPlugins. Use the priority tag to define a plugin's load priority. By default, all plugins get priority 0 and are loaded in the container's natural order. If you want one plugin to be loaded later than the others, just set its priority higher than zero.

Here is a simple example of a configuration.xml fragment:

...
<component>
  <type>org.exoplatform.services.Component1</type>
</component>

<external-component-plugins>
  <target-component>org.exoplatform.services.Component1</target-component>

  <component-plugin>
    <name>Plugin1</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.plugins.Plugin1</type>
    <description>description</description>
    <priority>1</priority>
  </component-plugin>

  <component-plugin>
    <name>Plugin2</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.plugins.Plugin2</type>
    <description>description</description>
    <priority>2</priority>
  </component-plugin>

</external-component-plugins>

<external-component-plugins>
  <target-component>org.exoplatform.services.Component1</target-component>
  <component-plugin>
    <name>Plugin3</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.plugins.Plugin3</type>
    <description>description</description>
  </component-plugin>
</external-component-plugins>
...

In the above example, the plugin Plugin3 will be loaded first because it has the default priority 0. Then Plugin1 will be loaded, and finally Plugin2.
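
The resulting load order can be sketched as follows (Plugin and loadOrder are illustrative stand-ins, not the actual kernel classes):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PluginOrder {
    // A minimal stand-in for a ComponentPlugin with a load priority.
    static final class Plugin {
        final String name;
        final int priority; // defaults to 0 when no <priority> tag is set

        Plugin(String name, int priority) {
            this.name = name;
            this.priority = priority;
        }
    }

    // Plugins with a higher priority are loaded later; the sort is stable,
    // so plugins with equal priority keep their registration order.
    static List<String> loadOrder(List<Plugin> plugins) {
        List<Plugin> sorted = new ArrayList<>(plugins);
        sorted.sort(Comparator.comparingInt(p -> p.priority));
        List<String> names = new ArrayList<>();
        for (Plugin p : sorted) {
            names.add(p.name);
        }
        return names;
    }
}
```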

This section first describes how the ListenerService works, and then shows you how to configure it.


Listeners must be subclasses of org.exoplatform.services.listener.Listener, registered with the ListenerService.

To trigger an event, an application can call one of the broadcast() methods of ListenerService.

/**
 * This method is used to broadcast an event. This method should: 1. Check if
 * there is a list of listener that listen to the event name. 2. If there is a
 * list of listener, create the event object with the given name , source and
 * data 3. For each listener in the listener list, invoke the method
 * onEvent(Event)
 * 
 * @param <S> The type of the source that broadcast the event
 * @param <D> The type of the data that the source object is working on
 * @param name The name of the event
 * @param source The source object instance
 * @param data The data object instance
 * @throws Exception 
 */
public <S, D> void broadcast(String name, S source, D data) throws Exception {
   ...
}

/**
 * This method is used when a developer want to implement his own event object
 * and broadcast the event. The method should: 1. Check if there is a list of
 * listener that listen to the event name. 2. If there is a list of the
 * listener, For each listener in the listener list, invoke the method
 * onEvent(Event)
 * 
 * @param <T> The type of the event object, the type of the event object has
 *          to be extended from the Event type
 * @param event The event instance
 * @throws Exception
 */
public <T extends Event> void broadcast(T event) throws Exception {
   ...
}

The broadcast() methods retrieve the name of the event, find the listeners registered under the same name, and call the method onEvent() on each listener found.

Each listener is a class that extends org.exoplatform.services.listener.Listener, as you can see below:

public abstract class Listener<S, D> extends BaseComponentPlugin {

   /**
    * This method should be invoked when an event with the same name is
    * broadcasted
    */
   public abstract void onEvent(Event<S, D> event) throws Exception;
}

Each listener is also a ComponentPlugin with a name and a description; in other words, the name of the listener will be the name given in the configuration file (for more details, see the next section).

public interface ComponentPlugin {
   public String getName();

   public void setName(String name);

   public String getDescription();

   public void setDescription(String description);
}

The org.exoplatform.services.security.ConversationRegistry uses the ListenerService to notify that a user has just signed in or just left the application. For example, when a new user signs in, the following code is called:

listenerService.broadcast("exo.core.security.ConversationRegistry.register", this, state);

This code creates a new Event whose name is "exo.core.security.ConversationRegistry.register", whose source is the current instance of ConversationRegistry and whose data is the given state. The ListenerService then calls the method onEvent(Event<ConversationRegistry, ConversationState> event) on all the listeners registered under the name "exo.core.security.ConversationRegistry.register".
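
The broadcast mechanism can be sketched as follows (MiniListenerService is a simplified stand-in for the actual ListenerService, whose listeners extend the Listener class shown above):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal sketch of the broadcast mechanism: listeners are registered
// under an event name, and broadcast() invokes every listener whose
// registration name matches the event name.
public class MiniListenerService {
    interface Listener {
        void onEvent(String name, Object source, Object data) throws Exception;
    }

    private final Map<String, List<Listener>> listeners = new HashMap<>();

    // Register a listener under the event name it observes.
    void addListener(String name, Listener listener) {
        listeners.computeIfAbsent(name, k -> new ArrayList<>()).add(listener);
    }

    // Find the listeners registered under this event name and invoke each one.
    void broadcast(String name, Object source, Object data) throws Exception {
        for (Listener l : listeners.getOrDefault(name, List.of())) {
            l.onEvent(name, source, data);
        }
    }
}
```

Listeners registered under a different name are simply not invoked, which is why the listener name in the configuration must match the broadcast event name exactly.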

In the example below, we define a Listener that will listen for the event "exo.core.security.ConversationRegistry.register".

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration>
...
  <external-component-plugins>
    <!-- The full qualified name of the ListenerService --> 
    <target-component>org.exoplatform.services.listener.ListenerService</target-component>

    <component-plugin>
      <!-- The name of the listener that is also the name of the target event -->
      <name>exo.core.security.ConversationRegistry.register</name>
      <!-- The name of the method to call on the ListenerService in order to register the Listener -->
      <set-method>addListener</set-method>
      <!-- The full qualified name of the Listener -->
      <type>org.exoplatform.forum.service.AuthenticationLoginListener</type>
    </component-plugin>

  </external-component-plugins>
</configuration>
...

Job scheduler defines a job to execute a given number of times during a given period. It is a service in charge of unattended background executions, commonly known for historical reasons as batch processing. It is used to create and run jobs automatically and continuously, and to schedule event-driven jobs and reports.

Jobs are scheduled to run when a given Trigger occurs. Triggers can be created with nearly any combination of the following directives:

  • at a certain time of day (to the millisecond)

  • on certain days of the week, of the month or of the year

  • not on certain days listed within a registered Calendar (such as business holidays)

  • repeated a specific number of times

  • repeated until a specific time/date

  • repeated indefinitely

  • repeated with a delay interval

Jobs are given names by their creator and can also be organized into named groups. Triggers may also be given names and placed into groups, in order to easily organize them within the scheduler. Jobs can be added to the scheduler once, but registered with multiple Triggers. Within a J2EE environment, Jobs can perform their work as part of a distributed (XA) transaction.

(Source: quartz-scheduler.org)

Kernel leverages Quartz for its scheduler service and wraps org.quartz.Scheduler in org.exoplatform.services.scheduler.impl.QuartzSheduler for easier service wiring and configuration, like any other service. To work with Quartz in Kernel, you will mostly use org.exoplatform.services.scheduler.JobSchedulerService (implemented by org.exoplatform.services.scheduler.impl.JobSchedulerServiceImpl).

To use the JobSchedulerService, configure it as a component in configuration.xml. Because JobSchedulerService requires QuartzSheduler and QueueTasks, you also have to configure these two components.

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">

  <component>
    <type>org.exoplatform.services.scheduler.impl.QuartzSheduler</type>
  </component>

  <component>
    <type>org.exoplatform.services.scheduler.QueueTasks</type>
  </component>

  <component>
    <key>org.exoplatform.services.scheduler.JobSchedulerService</key>
    <type>org.exoplatform.services.scheduler.impl.JobSchedulerServiceImpl</type>
  </component>

</configuration>

To see how JobSchedulerService works, create a sample project and use GateIn-3.1.0-GA for testing.

First, create a project using the Maven archetype plugin:

mvn archetype:generate
  • For project type: select maven-archetype-quickstart

  • For groupId: enter org.exoplatform.samples

  • For artifactId: enter exo.samples.scheduler

  • For version: enter 1.0.0-SNAPSHOT

  • For package: enter org.exoplatform.samples.scheduler

Edit the pom.xml as follows:

<project
  xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <parent>
    <artifactId>exo.portal.parent</artifactId>
    <groupId>org.exoplatform.portal</groupId>
    <version>3.1.0-GA</version>
  </parent>

  <groupId>org.exoplatform.samples</groupId>
  <artifactId>exo.samples.scheduler</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <name>eXo Samples For Scheduler</name>
  <description>eXo Samples Code For Scheduler</description>
</project>    

Generate an Eclipse project using the Maven Eclipse plugin, then import it into Eclipse:

mvn eclipse:eclipse

eXo Kernel makes it easy to work with the job scheduler service. All you need to do is define your "job" class by implementing the org.quartz.Job interface, and add the configuration for it.
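For instance, the DumbJob class referenced by the configurations in this section could be sketched as follows; the Job interface comes from Quartz, and the log message is purely illustrative:

```java
package org.exoplatform.samples.scheduler.jobs;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

/**
 * A minimal job sketch: Quartz calls execute() each time one of the
 * triggers associated with this job fires.
 */
public class DumbJob implements Job {

  public void execute(JobExecutionContext context) throws JobExecutionException {
    // Illustrative payload: real jobs would do their background work here.
    System.out.println("DumbJob is executing...");
  }
}
```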

After defining the job, the next step is to configure it using an external-component-plugins configuration targeting org.exoplatform.services.scheduler.JobSchedulerService. The following methods can be used to register component plugins:

public void addPeriodJob(ComponentPlugin plugin) throws Exception;

The component plugin for this method must be of type org.exoplatform.services.scheduler.PeriodJob. This type of job performs actions repeatedly over a period of time. You define when the job starts, when it ends, when it performs its first action, how many times it is executed, and the period between executions. See the configuration sample below:

<external-component-plugins>
   <target-component>org.exoplatform.services.scheduler.JobSchedulerService</target-component>
    <component-plugin>
      <name>PeriodJob Plugin</name>
      <set-method>addPeriodJob</set-method>
      <type>org.exoplatform.services.scheduler.PeriodJob</type>
      <description>period job configuration</description>
      <init-params>
        <properties-param>
          <name>job.info</name>
          <description>dumb job executed  periodically</description>
          <property name="jobName" value="DumbJob"/>
          <property name="groupName" value="DumbJobGroup"/>
          <property name="job" value="org.exoplatform.samples.scheduler.jobs.DumbJob"/>
          <property name="repeatCount" value="0"/>
          <property name="period" value="60000"/>
          <property name="startTime" value="+45"/>
          <property name="endTime" value=""/>
        </properties-param>
      </init-params>
    </component-plugin>
 </external-component-plugins>
public void addCronJob(ComponentPlugin plugin) throws Exception;

The component plugin for this method must be of type org.exoplatform.services.scheduler.CronJob. This type of job performs actions at specified times defined with Unix cron-like expressions. The plugin uses the "expression" field to specify when the job executes; this makes it the most powerful and flexible way to define a schedule. For example, "0 0 12 * * ?" fires at 12pm every day, and "0 15 10 ? * MON-FRI" fires at 10:15am every Monday, Tuesday, Wednesday, Thursday and Friday. To learn more about cron expressions, please refer to this article:

CRON expression.

See the configuration sample below to understand more clearly:

<external-component-plugins>
    <target-component>org.exoplatform.services.scheduler.JobSchedulerService</target-component>
    <component-plugin>
      <name>CronJob Plugin</name>
      <set-method>addCronJob</set-method>
      <type>org.exoplatform.services.scheduler.CronJob</type>
      <description>cron job configuration</description>
      <init-params>
        <properties-param>
          <name>cronjob.info</name>
          <description>dumb job executed by cron expression</description>
          <property name="jobName" value="DumbJob"/>
          <property name="groupName" value="DumbJobGroup"/>
          <property name="job" value="org.exoplatform.samples.scheduler.jobs.DumbJob"/>
          <!-- The job will be performed at 10:15am every day -->
          <property name="expression" value="0 15 10 * * ?"/> 
        </properties-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
public void addGlobalJobListener(ComponentPlugin plugin) throws Exception;
public void addJobListener(ComponentPlugin plugin) throws Exception;

The component plugin for the two methods above must be of type org.quartz.JobListener. A job listener is notified when an org.quartz.JobDetail executes.

public void addGlobalTriggerListener(ComponentPlugin plugin) throws Exception;
public void addTriggerListener(ComponentPlugin plugin) throws Exception;

The component plugin for the two methods above must be of type org.quartz.TriggerListener. A trigger listener is notified when an org.quartz.Trigger fires.

Create the conf.portal package in your sample project and add a configuration.xml file with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">

  <component>
    <type>org.exoplatform.services.scheduler.impl.QuartzSheduler</type>
  </component>
  <component>
    <type>org.exoplatform.services.scheduler.QueueTasks</type>
  </component>
  <component>
    <key>org.exoplatform.services.scheduler.JobSchedulerService</key>
    <type>org.exoplatform.services.scheduler.impl.JobSchedulerServiceImpl</type>
  </component>

  <external-component-plugins>
    <target-component>org.exoplatform.services.scheduler.JobSchedulerService</target-component>
    <component-plugin>
      <name>PeriodJob Plugin</name>
      <set-method>addPeriodJob</set-method>
      <type>org.exoplatform.services.scheduler.PeriodJob</type>
      <description>period job configuration</description>
      <init-params>
        <properties-param>
          <name>job.info</name>
          <description>dumb job executed periodically</description>
          <property name="jobName" value="DumbJob"/>
          <property name="groupName" value="DumbJobGroup"/>
          <property name="job" value="org.exoplatform.samples.scheduler.jobs.DumbJob"/>
          <property name="repeatCount" value="0"/>
          <property name="period" value="60000"/>
          <property name="startTime" value="+45"/>
          <property name="endTime" value=""/>
        </properties-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

Run mvn clean install on the project, then copy the resulting .jar file to the lib directory of the Tomcat bundled with GateIn-3.1.0-GA. Run bin/gatein.sh and watch the terminal: DumbJob is executed once the portal containers are initialized, and its log message appears there.

From now on, you can easily create any job to be executed in GateIn's portal by defining your job and configuring it.

This section provides the basic knowledge about eXo Cache, from basic concepts to advanced concepts, sample code, and more.

All applications on top of eXo JCR that need a cache can rely on an org.exoplatform.services.cache.ExoCache instance managed by the org.exoplatform.services.cache.CacheService. The main implementation of this service is org.exoplatform.services.cache.impl.CacheServiceImpl, which depends on org.exoplatform.services.cache.ExoCacheConfig to create new ExoCache instances. See the example of an org.exoplatform.services.cache.CacheService definition below:

  <component>
    <key>org.exoplatform.services.cache.CacheService</key>
    <jmx-name>cache:type=CacheService</jmx-name>
    <type>org.exoplatform.services.cache.impl.CacheServiceImpl</type>
    <init-params>
      <object-param>
        <name>cache.config.default</name>
        <description>The default cache configuration</description>
        <object type="org.exoplatform.services.cache.ExoCacheConfig">
          <field name="name"><string>default</string></field>
          <field name="maxSize"><int>300</int></field>
          <field name="liveTime"><long>600</long></field>
          <field name="distributed"><boolean>false</boolean></field>
          <field name="implementation"><string>org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache</string></field> 
        </object>
      </object-param>
    </init-params>
  </component>
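With this service definition in place, a component can obtain a cache by name through CacheService.getCacheInstance. The sketch below assumes the CacheService is injected by the container and that the kernel version in use exposes the generic ExoCache API; the cache name "mycache" is hypothetical:

```java
import java.io.Serializable;

import org.exoplatform.services.cache.CacheService;
import org.exoplatform.services.cache.ExoCache;

public class CacheUser {

  private final ExoCache<Serializable, Object> cache;

  public CacheUser(CacheService cacheService) {
    // Returns the cache matching a registered ExoCacheConfig by name,
    // falling back to the default configuration otherwise.
    this.cache = cacheService.getCacheInstance("mycache");
  }

  public void use() {
    cache.put("key", "value");
    Object value = cache.get("key"); // may be null once evicted or expired
  }
}
```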

See the example below showing how to define a new ExoCacheConfig thanks to an external-component-plugins configuration:

  <external-component-plugins>
    <target-component>org.exoplatform.services.cache.CacheService</target-component>
    <component-plugin>
      <name>addExoCacheConfig</name>
      <set-method>addExoCacheConfig</set-method>
      <type>org.exoplatform.services.cache.ExoCacheConfigPlugin</type>
      <description>Configures the cache for query service</description>
      <init-params>
        <object-param>
          <name>cache.config.wcm.composer</name>
          <description>The default cache configuration</description>
          <object type="org.exoplatform.services.cache.ExoCacheConfig">
            <field name="name"><string>wcm.composer</string></field>
            <field name="maxSize"><int>300</int></field>
            <field name="liveTime"><long>600</long></field>
            <field name="distributed"><boolean>false</boolean></field>
            <field name="implementation"><string>org.exoplatform.services.cache.concurrent.ConcurrentFIFOExoCache</string></field> 
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>

If you have big values or non-serializable values and you need a replicated cache to at least invalidate the data when needed, you can use the invalidation mode, which works on top of any replicated cache implementation. This is possible thanks to the class InvalidationExoCache, which is actually a decorator that replicates the hash code of the value in order to decide whether the local data needs to be invalidated: if the new hash code of the value is the same as the old one, we assume it is the same value, so we don't invalidate the old value. This is required to avoid the following infinite loop that we would otherwise face with the invalidation mode proposed out of the box by JBoss Cache, for example:

In the use case above, thanks to InvalidationExoCache, the value loaded at step #3 has the same hash code as the value loaded at step #1, so step #4 won't invalidate the data on cluster node #1.
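The decision logic can be sketched in plain Java; the class and method names below are hypothetical, not the actual InvalidationExoCache code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Simplified sketch of the InvalidationExoCache idea (hypothetical names):
 * only the hash code of each value is replicated, and the local entry is
 * invalidated only when the incoming hash differs from the stored one.
 */
class HashInvalidationCheck {

  private final Map<String, Integer> knownHashes = new HashMap<String, Integer>();

  /** Returns true if the local entry for this key must be invalidated. */
  boolean onReplicatedPut(String key, Object newValue) {
    Integer previous = knownHashes.put(key, newValue.hashCode());
    // No previous entry: nothing to invalidate.
    // Same hash: assume the same value, skip invalidation (breaks the loop).
    return previous != null && previous != newValue.hashCode();
  }
}
```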

There are two ways to use the invalidation mode:

In previous versions of eXo Kernel, implementing your own ExoCache was quite complex because the API was not open enough. Since kernel 2.0.8, you can easily integrate your favorite cache provider into eXo products.

You just need to implement your own ExoCacheFactory and register it in an eXo container, as described below:

package org.exoplatform.services.cache;
...
public interface ExoCacheFactory {
  
  /**
   * Creates a new instance of {@link org.exoplatform.services.cache.ExoCache}
   * @param config the cache to create
   * @return the new instance of {@link org.exoplatform.services.cache.ExoCache}
   * @exception ExoCacheInitException if an exception happens while initializing the cache
   */
  public ExoCache createCache(ExoCacheConfig config) throws ExoCacheInitException;  
}

As you can see, there is only one method to implement, which can be seen as a converter of an ExoCacheConfig into an ExoCache instance. Once you have created your own implementation, you can register your factory by adding a file conf/portal/configuration.xml with content of the following type:

<configuration>
  <component>
    <key>org.exoplatform.services.cache.ExoCacheFactory</key>
    <type>org.exoplatform.tutorial.MyExoCacheFactoryImpl</type>
    ...
  </component>   
</configuration>

The factory for Infinispan delegates cache creation to an ExoCacheCreator, defined as below:

package org.exoplatform.services.cache.impl.infinispan;
...
public interface ExoCacheCreator {

   /**
    * Creates an eXo cache according to the given configuration {@link org.exoplatform.services.cache.ExoCacheConfig}
    * @param config the configuration of the cache to apply
    * @param confBuilder the configuration builder of the infinispan cache
    * @param cacheGetter a {@link Callable} instance from which we can get the cache
    * @exception ExoCacheInitException if an exception happens while initializing the cache
    */
   public ExoCache<Serializable, Object> create(ExoCacheConfig config, ConfigurationBuilder confBuilder, 
            Callable<Cache<Serializable, Object>> cacheGetter) throws ExoCacheInitException;

   /**
    * Returns the type of {@link org.exoplatform.services.cache.ExoCacheConfig} expected by the creator  
    * @return the expected type
    */
   public Class<? extends ExoCacheConfig> getExpectedConfigType();

   /**
    * Returns a set of all the implementations expected by the creator. This is mainly used to be backward compatible
    * @return the expected by the creator
    */
   public Set<String> getExpectedImplementations();
}

The ExoCacheCreator allows you to define any kind of infinispan cache instance that you would like to have. It has been designed to give you the ability to have your own type of configuration and to always be backward compatible.

In an ExoCacheCreator, you need to implement the three methods shown above: create, getExpectedConfigType and getExpectedImplementations.

By default, no cache creators are defined, so you need to define them yourself in your configuration files.

This is the generic cache creator that allows you to use any of the eviction strategies defined by default in Infinispan.

..
<object-param>
  <name>GENERIC</name>
  <description>The generic cache creator</description>
  <object type="org.exoplatform.services.cache.impl.infinispan.generic.GenericExoCacheCreator">
    <field name="implementations">
      <collection type="java.util.HashSet">
         <value>
            <string>NONE</string>
         </value>
         <value>
            <string>FIFO</string>
         </value>
         <value>
            <string>LRU</string>
         </value>
         <value>
            <string>UNORDERED</string>
         </value>
         <value>
            <string>LIRS</string>
         </value>
      </collection>        
    </field>
    <field name="defaultStrategy"><string>${my-value}</string></field>
    <field name="defaultMaxIdle"><long>${my-value}</long></field>
    <field name="defaultWakeUpInterval"><long>${my-value}</long></field>
  </object>
</object-param>
...

All the eviction strategies proposed by default in infinispan rely on the generic cache creator.

...
       <object-param>
        <name>myCache</name>
        <description>My cache configuration</description>
        <object type="org.exoplatform.services.cache.impl.infinispan.generic.GenericExoCacheConfig">
          <field name="name"><string>myCacheName</string></field>
          <field name="strategy"><string>${my-value}</string></field>
          <field name="maxEntries"><long>${my-value}</long></field>
          <field name="lifespan"><long>${my-value}</long></field>
          <field name="maxIdle"><long>${my-value}</long></field>
          <field name="wakeUpInterval"><long>${my-value}</long></field>
        </object>
      </object-param> 
...

  • Old configuration

...
      <object-param>
        <name>myCache-with-old-config</name>
        <description>My cache configuration</description>
        <object type="org.exoplatform.services.cache.ExoCacheConfig">
          <field name="name"><string>myCacheName-with-old-config</string></field>
          <field name="maxSize"><int>${my-value}</int></field>
          <field name="liveTime"><long>${my-value}</long></field>
          <field name="implementation"><string>${my-value}</string></field>
        </object>
      </object-param> 
...

Note

For the fields maxIdle and wakeUpInterval needed by infinispan, we will use the default values provided by the creator.

In order to be able to use infinispan in distributed mode with the ability to launch external JVM instances that will manage a part of the cache, we need to configure the DistributedCacheManager. In the next sections, we will show how to configure the component and how to launch external JVM instances.

The DistributedCacheManager is the component that manages all the cache instances expected to be distributed. It must be unique within the whole JVM, which means that it must be declared at the RootContainer level in portal mode, or at the StandaloneContainer level in standalone mode. See an example of configuration below.

<component>
  <type>org.exoplatform.services.ispn.DistributedCacheManager</type>
  <init-params>
    <value-param>
      <name>infinispan-configuration</name>
      <value>jar:/conf/distributed-cache-configuration.xml</value>
    </value-param>
    <properties-param>
      <name>parameters</name>
      <description>The parameters of the configuration</description>
      <property name="configurationFile" value="${gatein.jcr.jgroups.config}"></property>
      <property name="invalidationThreshold" value="0"></property>
      <property name="numOwners" value="3"></property>
      <property name="numSegments" value="60"></property>
    </properties-param>     
  </init-params>
</component>

As described above, the Infinispan configuration must define each cache explicitly using a namedCache block; no dynamic cache configuration is supported. Indeed, to ensure that the whole cluster is consistent in terms of defined caches, you must configure all the caches that you will need and register each one under its future name.

For now, there are two supported cache names: JCRCache and eXoCache. JCRCache is the cache used to store JCR data in a distributed cache, while eXoCache is the cache used to store the data of some eXo Cache instances in a distributed cache.

See below an example of infinispan configuration with both eXoCache and JCRCache defined:

<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:5.2 http://www.infinispan.org/schemas/infinispan-config-5.2.xsd"
  xmlns="urn:infinispan:config:5.2">
   <global>
      <globalJmxStatistics jmxDomain="exo" enabled="true" allowDuplicateDomains="true"/>
      <transport transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" clusterName="JCR-cluster" distributedSyncTimeout="20000">
        <properties>
          <property name="configurationFile" value="${configurationFile}"/>
        </properties>
      </transport>
      <shutdown hookBehavior="DEFAULT"/>
   </global>
   <namedCache name="JCRCache">
      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="120000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="true" />
      <transaction transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" eagerLockSingleNode="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
        <l1 enabled="true" invalidationThreshold="${invalidationThreshold}"/>
         <hash numOwners="${numOwners}" numSegments="${numSegments}">
           <groups enabled="true"/>
         </hash>
         <sync replTimeout="180000"/>
      </clustering>
   </namedCache>
   <namedCache name="eXoCache">
      <locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="120000" writeSkewCheck="false" concurrencyLevel="500" useLockStriping="true" />
      <transaction transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup" syncRollbackPhase="true" syncCommitPhase="true" eagerLockSingleNode="true" transactionMode="TRANSACTIONAL"/>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <l1 enabled="true" invalidationThreshold="${invalidationThreshold}"/>
         <hash numOwners="${numOwners}" numSegments="${numSegments}"/>
         <sync replTimeout="180000"/>
      </clustering>
   </namedCache>
</infinispan>

If you intend to use the distributed mode, you can launch external JVMs in standalone mode to provide more memory to your cache. To do so, you will need the file exo.jcr.component.core.impl.infinispan.v5-binary.zip, in which you will find scripts to launch your cache servers. These scripts accept optional arguments, described below:

help|?|<configuration-file-path>|udp|tcp <initial-hosts>


Note

If you intend to use the CacheServer in order to manage some of your eXo Cache instances, don't forget to add the jar files that define both the keys and the values to the lib directory of the CacheServer distribution, and restart your CacheServer instances; otherwise, unmarshalling will fail with a java.lang.ClassNotFoundException.

When you add the eXo library to your classpath, the eXo service container will use the default configuration provided in the library itself, but you can still redefine the configuration if you wish, as you can with any component.

The default configuration of the factory is:

<component>
   <key>org.exoplatform.services.cache.ExoCacheFactory</key>
   <type>org.exoplatform.services.cache.impl.memcached.ExoCacheFactoryImpl</type>
   <init-params>
      <value-param>
         <name>memcached.locations</name>
         <value>${memcached.locations:127.0.0.1:11211}</value>
      </value-param>
   </init-params>
</component>

Table 2.17. Fields description

memcached.locations: Defines the locations of all the memcached servers to access. The value of this parameter is a String containing whitespace- or comma-separated host or IP addresses and port numbers, of the form "host:port host2:port" or "host:port, host2:port". By default, it will try to access a local server on the default memcached port, which is 11211. This value can be redefined thanks to the system property memcached.locations.

default.expiration.timeout: Defines the default expiration timeout of a cache entry. This value is in milliseconds and is set by default to 15 minutes.

connection.factory.creator: Defines the ConnectionFactoryCreator to use. The ConnectionFactoryCreator is responsible for creating the ConnectionFactory that will be used by spymemcached. This is an object parameter; the expected value must be an object of type ConnectionFactoryCreator. By default, it will use the BinaryConnectionFactoryCreator, which creates a BinaryConnectionFactory with the default queue length, buffer size and hashing algorithm. If you would like to change the default parameters of the BinaryConnectionFactoryCreator, you can set as the value of this parameter an object of type org.exoplatform.services.cache.impl.memcached.BinaryConnectionFactoryCreator and then set the values of its fields: queueLength for the length of the queue, bufferSize for the buffer size, and hash for the hashing algorithm, which must be the name of one of the algorithms defined in the enumeration DefaultHashAlgorithm.
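Putting the three parameters together, a configuration overriding the defaults could look like the following sketch; the field names are taken from the description above, and all values (server addresses, sizes, algorithm) are illustrative assumptions:

```xml
<component>
   <key>org.exoplatform.services.cache.ExoCacheFactory</key>
   <type>org.exoplatform.services.cache.impl.memcached.ExoCacheFactoryImpl</type>
   <init-params>
      <value-param>
         <name>memcached.locations</name>
         <value>192.168.0.1:11211, 192.168.0.2:11211</value>
      </value-param>
      <value-param>
         <name>default.expiration.timeout</name>
         <!-- 10 minutes, expressed in milliseconds -->
         <value>600000</value>
      </value-param>
      <object-param>
         <name>connection.factory.creator</name>
         <object type="org.exoplatform.services.cache.impl.memcached.BinaryConnectionFactoryCreator">
            <field name="queueLength"><int>16384</int></field>
            <field name="bufferSize"><int>16384</int></field>
            <field name="hash"><string>NATIVE_HASH</string></field>
         </object>
      </object-param>
   </init-params>
</component>
```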

Note

You can define your own ConnectionFactoryCreator if needed, but be aware that the ASCII protocol doesn't support some operations used by the eXoCache implementation based on spymemcached.


The implementation of an eXo Cache based on spymemcached requires the binary protocol; indeed, the ASCII protocol doesn't support all the required operations.

Some operations, such as getCachedObjects(), removeCachedObjects() and select(CachedObjectSelector), cannot be supported because Memcached is a distributed cache that doesn't allow them: they would cause a scalability issue, since all the cache entries could potentially not fit into the memory of the JVM, which would cause an OutOfMemoryError. Moreover, they could also consume a lot of IO, which could overload the network.

The method putMap(Map) cannot be executed within a transaction, since Memcached is not a transactional resource. This means that if one put fails while others succeed, the successful puts won't be reverted.

The method getCacheSize() gives only an approximate count of the cache entries that have been created locally. Indeed:

For the same reasons that prevent providing the exact cache size, a cache listener won't be notified when a cache entry is evicted, nor of any modifications of the cache made from other cluster nodes.

Finally, it is not possible to set a maximum size for the cache; the only parameter on which you can rely is the expiration timeout, so you need to set it properly.

The TransactionService provides access to the TransactionManager and the UserTransaction (see the JTA specification for details).


eXo JCR proposes several implementations out of the box; they all extend the abstract class org.exoplatform.services.transaction.impl.AbstractTransactionService, which implements most of the methods proposed by the TransactionService. For each sub-class of AbstractTransactionService, you can set the transaction timeout by configuration, using the value parameter timeout expressed in seconds.
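For example, the timeout could be set to 5 minutes as in the sketch below; the implementation class name is an assumption (use the sub-class actually shipped with your setup):

```xml
<component>
   <key>org.exoplatform.services.transaction.TransactionService</key>
   <!-- The implementation class is an assumption; any sub-class of
        AbstractTransactionService accepts the same timeout parameter -->
   <type>org.exoplatform.services.transaction.impl.jotm.TransactionServiceJotmImpl</type>
   <init-params>
      <value-param>
         <name>timeout</name>
         <!-- Transaction timeout, expressed in seconds -->
         <value>300</value>
      </value-param>
   </init-params>
</component>
```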

The DataSourceProvider is a service used to give access to a data source in a uniform manner, in order to be able to support data sources that are managed by the application server.


The configuration of the DataSourceProvider should be defined only if you use managed data sources since, by default, all data sources are considered unmanaged. See the default configuration below:

<configuration>
....  
   <component>
      <key>org.exoplatform.services.jdbc.DataSourceProvider</key>
      <type>org.exoplatform.services.jdbc.impl.DataSourceProviderImpl</type>
      <init-params>
         <!--  Indicates that the data source needs to check if a tx is active
              to decide if the provided connection needs to be managed or not.
              If it is set to false, the data source will provide only
              managed connections if the data source itself is managed.  -->
         <!--value-param>
            <name>check-tx-active</name>
            <value>true</value>
         </value-param-->
         <!-- Indicates that all the data sources are managed 
              If set to true the parameter never-managed and 
              managed-data-sources will be ignored -->
         <!--value-param>
            <name>always-managed</name>
            <value>true</value>
         </value-param-->
         <!-- Indicates the list of all the data sources that are 
              managed, each value tag can contain a list of
              data source names separated by a comma, in the
              example below we will register ds-foo1, ds-foo2 
              and ds-foo3 as managed data source. If always-managed
              and/or never-managed is set true this parameter is ignored -->
         <!--values-param>
            <name>managed-data-sources</name>
            <value>ds-foo1, ds-foo2</value>
            <value>ds-foo3</value>
         </values-param-->
      </init-params>
   </component>  
...
</configuration>

This section provides basic knowledge about JNDI naming: what it is, how it works and how it is used.

Make sure you understand the Java Naming and Directory Interface (JNDI) concepts before using this service.

The InitialContextInitializer configuration example:

  <component>
    <type>org.exoplatform.services.naming.InitialContextInitializer</type>
    <init-params>
      <value-param>
        <name>bindings-store-path</name>
        <value>bind-references.xml</value>
      </value-param>
      <value-param> 
        <name>overload-context-factory</name> 
        <value>true</value> 
      </value-param>
      <properties-param>
        <name>default-properties</name>
        <description>Default initial context properties</description>
        <property name="java.naming.factory.initial" value="org.exoplatform.services.naming.SimpleContextFactory"/>
      </properties-param>
      <properties-param>
        <name>mandatory-properties</name>
        <description>Mandatory initial context properties</description>
        <property name="java.naming.provider.url" value="rmi://localhost:9999"/>
      </properties-param>
    </init-params>
  </component>

where

bindings-store-path is the path of the file that stores the bound references (such as datasources) at runtime.

overload-context-factory allows you to replace the default initial context factory with a context factory that is ExoContainer-aware and able to delegate to the original initial context factory when it detects that it is not in the eXo scope. By default, the feature is disabled, since it is only required on application servers that don't share objects by default, such as Tomcat (unlike JBoss AS).

The BindReferencePlugin component plugin configuration example (for JDBC datasource):

  <component-plugins> 
    <component-plugin> 
      <name>bind.datasource</name>
      <set-method>addPlugin</set-method>
      <type>org.exoplatform.services.naming.BindReferencePlugin</type>
      <init-params>
        <value-param>
          <name>bind-name</name>
          <value>jdbcjcr</value>
        </value-param>
        <value-param>
          <name>class-name</name>
          <value>javax.sql.DataSource</value>
        </value-param>  
        <value-param>
          <name>factory</name>
          <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
        </value-param>
        <properties-param>
          <name>ref-addresses</name>
          <description>ref-addresses</description>
          <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
          <property name="url" value="jdbc:hsqldb:file:target/temp/data/portal"/>
          <property name="username" value="sa"/>
          <property name="password" value=""/>
        </properties-param>     
      </init-params>    
    </component-plugin>
  </component-plugins>
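Once bound, the datasource can be retrieved through a standard JNDI lookup. The sketch below assumes the initial context configured by InitialContextInitializer is active, so it won't run outside a configured container; "jdbcjcr" is the bind-name declared in the plugin configuration:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {

  public static DataSource lookupJcrDataSource() throws NamingException {
    // Uses the initial context set up by InitialContextInitializer;
    // the name matches the bind-name value of the BindReferencePlugin.
    InitialContext ctx = new InitialContext();
    return (DataSource) ctx.lookup("jdbcjcr");
  }
}
```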

In order to accommodate the different target runtimes where it can be deployed, eXo is capable of leveraging several logging systems. eXo lets you choose the underlying logging engine to use and even configure that engine (as a quick alternative to doing it directly in your runtime environment).

The currently supported logging engines are:

Log4J is a very popular and flexible logging system. It is a good option for JBoss.

  <component>
    <type>org.exoplatform.services.log.LogConfigurationInitializer</type>
    <init-params>
      <value-param>
        <name>configurator</name>
        <value>org.exoplatform.services.log.impl.Log4JConfigurator</value>
      </value-param>
      <properties-param>
        <name>properties</name>
        <description>Log4J properties</description>
        <property name="log4j.rootLogger" value="DEBUG, stdout, file"/>
        <property name="log4j.appender.stdout" value="org.apache.log4j.ConsoleAppender"/>
        <property name="log4j.appender.stdout.layout" value="org.apache.log4j.PatternLayout"/>
        <property name="log4j.appender.stdout.layout.ConversionPattern" value="%d{dd.MM.yyyy HH:mm:ss} %c{1}: %m (%F, line %L) %n"/>
        <property name="log4j.appender.file" value="org.apache.log4j.FileAppender"/>
        <property name="log4j.appender.file.File" value="jcr.log"/>
        <property name="log4j.appender.file.layout" value="org.apache.log4j.PatternLayout"/>
        <property name="log4j.appender.file.layout.ConversionPattern" value="%d{dd.MM.yyyy HH:mm:ss} %m (%F, line %L) %n"/>
      </properties-param>
    </init-params>
  </component>

The kernel has a framework for exposing a management view of the various subsystems of the platform. "Management view" is a loose term for how we can access relevant information about the system and how we can apply management operations. JMX is the de facto standard for exposing a management view on the Java Platform, but we also take other kinds of views into consideration, such as REST web services. Therefore, the framework is not tied to JMX, yet it provides a JMX part to define details specific to the JMX management view.

The management framework defines an API for exposing a management view of objects. The API is targeted for internal use and is not a public API. The framework leverages Java 5 annotations to describe the management view of an object.

The cache service delegates most of the management work to the CacheServiceManaged class via the @ManagedBy annotation. At runtime, when a new cache is created, the cache service notifies the CacheServiceManaged object so that it can register the cache.

@ManagedBy(CacheServiceManaged.class)
public class CacheServiceImpl implements CacheService {

  CacheServiceManaged managed;
  ...
  synchronized private ExoCache createCacheInstance(String region) throws Exception {
    ...
    if (managed != null) {
      managed.registerCache(simple);
    }
    ...
  }
}

The ExoCache interface is annotated to define its management view. The @NameTemplate annotation is used to produce object name values when ExoCache instances are registered.

@Managed
@NameTemplate({@Property(key="service", value="cache"), @Property(key="name", value="{Name}")})
@ManagedDescription("Exo Cache")
public interface ExoCache {

  @Managed
  @ManagedName("Name")
  @ManagedDescription("The cache name")
  public String getName();

  @Managed
  @ManagedName("Capacity")
  @ManagedDescription("The maximum capacity")
  public int getMaxSize();

  @Managed
  @ManagedDescription("Evict all entries of the cache")
  public void clearCache() throws Exception;

  ...
}
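For instance, with the template above, registering a cache whose getName() returns "mycache" would produce a JMX object name of roughly the following shape (the "exo" domain used here is an assumption for illustration, not confirmed by the text):

```java
import javax.management.ObjectName;

public class NameTemplateDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical object name built from the @NameTemplate properties above,
        // with {Name} resolved against getName(); the "exo" domain is assumed
        ObjectName name = new ObjectName("exo:service=cache,name=mycache");
        System.out.println(name.getKeyProperty("service")); // cache
        System.out.println(name.getKeyProperty("name"));    // mycache
    }
}
```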

The CacheServiceManaged class is the glue code between the CacheService and the management view. The main reason it exists is that only eXo services are registered automatically against the management view; any other managed bean must, for now, be registered manually. Therefore, the class needs to know about the management layer via the management context. The management context allows an object implementing the ManagementAware interface to receive a context with which to perform further registration of managed objects.

@Managed
public class CacheServiceManaged implements ManagementAware {

  /** . */
  private ManagementContext context;

  /** . */
  private CacheServiceImpl cacheService;

  public CacheServiceManaged(CacheServiceImpl cacheService) {
    this.cacheService = cacheService;

    //
    cacheService.managed = this;
  }

  public void setContext(ManagementContext context) {
    this.context = context;
  }

  void registerCache(ExoCache cache) {
    if (context != null) {
      context.register(cache);
    }
  }
}

The RPCService is only needed in a cluster environment, where it is used to communicate with the other cluster nodes. It allows executing a command on all the cluster nodes or on the coordinator, i.e. the oldest node in the cluster. The RPCService has been designed to rely on JGroups capabilities and should not be used for heavy loads. It can be used, for example, to notify other nodes that something happened or to collect some information from the other nodes.

The RPCService relies on 3 main interfaces which are:

The arguments given to a RemoteCommand must be Serializable, and so must its return type, in order to prevent any issue due to serialization. To avoid executing a RemoteCommand that could be malicious, and to allow the use of non-Serializable commands, you need to register the command before using it. Since the service keeps only one instance of RemoteCommand per command id, the implementation of the RemoteCommand must be thread safe.

To be usable, all the RemoteCommands must be registered on all the cluster nodes before being used, which means that the command registration must be done in the constructor of your component, in other words before the RPCService is started. If you try to launch a registered command while the RPCService is not yet launched, you will get an RPCException due to an illegal state. As a consequence, you will be able to execute a command only once your component has been started.

See an example below:

public class MyService implements Startable
{
   private RPCService rpcService;
   private RemoteCommand sayHelloCommand;
   
   public MyService(RPCService rpcService)
   {
      this.rpcService = rpcService;
      // Register the command before that the RPCService is started
      sayHelloCommand = rpcService.registerCommand(new RemoteCommand()
      {
         public Serializable execute(Serializable[] args) throws Throwable
         {
            System.out.println("Hello !");
            return null;
         }

         public String getId()
         {
            return "hello-world-command";
         }
      });
   }

   public void start()
   {
      // Since the RPCService is a dependency of MyService, it will be started before,
      // so I can execute my command
      try
      {
         // This will make all the nodes say "Hello !"
         rpcService.executeCommandOnAllNodes(sayHelloCommand, false);
      }
      catch (SecurityException e)
      {
         e.printStackTrace();
      }
      catch (RPCException e)
      {
         e.printStackTrace();
      }
   }

   public void stop()
   {
   }
}

In the previous example, we register the command sayHelloCommand in the constructor of MyService and we execute this command in the start method.

The configuration of the RPCService should be added only in a cluster environment. See below an example of configuration:

<configuration>
....  
  <component>
    <key>org.exoplatform.services.rpc.RPCService</key>
    <type>org.exoplatform.services.rpc.jgv3.RPCServiceImpl</type>
    <init-params>
      <value-param>
        <name>jgroups-configuration</name>
        <value>classpath:/udp.xml</value>
      </value-param>
      <value-param>
        <name>jgroups-cluster-name</name>
        <value>RPCService-Cluster</value>
      </value-param>
      <value-param>
        <name>jgroups-default-timeout</name>
        <value>0</value>
      </value-param>
      <value-param>
        <name>allow-failover</name>
        <value>true</value>
      </value-param>
      <value-param>
        <name>retry-timeout</name>
        <value>20000</value>
      </value-param>
    </init-params>
  </component>   
...
</configuration>

In previous versions of eXo Kernel, it was hard to make it evolve, mainly because we directly depended on a very old library, picocontainer 1.1. The kernel has been totally reviewed to remove the dependency on picocontainer, but for backward compatibility reasons we had to keep some picocontainer interfaces such as Startable and Disposable.

In previous versions, we relied on a huge hierarchy of classes: PortalContainer, RootContainer and StandaloneContainer were subclasses of ExoContainer. ExoContainer itself extended ManageableContainer to give the ability to bind and unbind components to an MBeanServer. ManageableContainer extended CachingContainer to store and retrieve components from a cache. CachingContainer extended MCIntegrationContainer to enable AOP thanks to MicroContainer. Finally, MCIntegrationContainer extended ConcurrentPicoContainer, which is a thread-safe implementation of MutablePicoContainer.

In other words, any time we wanted to add a feature to the kernel, we injected a new container somewhere in this hierarchy of classes, which is of course really intrusive and not very flexible.

To make the kernel easily extensible, we replaced this hierarchy of classes with a chain of implementations of the Interceptor interface. Each implementation of an Interceptor needs to implement all the methods that define a Container. To focus only on the methods most relevant to the purpose of the interceptor, it is also possible to simply extend AbstractInterceptor, which delegates everything to the next Interceptor, also called the successor.
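As an illustration of the pattern only (the real Interceptor SPI defines all the methods of a Container, not just one), a chain where a base class delegates everything to its successor could be sketched like this:

```java
import java.util.HashMap;
import java.util.Map;

public class InterceptorChainSketch {
    // Simplified stand-in for the eXo Interceptor SPI: a single lookup method
    interface Interceptor {
        Object getComponentInstanceOfType(Class<?> type);
    }

    // Mirrors the role of AbstractInterceptor: delegate everything to the successor
    static class AbstractInterceptor implements Interceptor {
        protected final Interceptor successor;
        AbstractInterceptor(Interceptor successor) { this.successor = successor; }
        public Object getComponentInstanceOfType(Class<?> type) {
            return successor.getComponentInstanceOfType(type);
        }
    }

    // Tail of the chain: resolves components from a simple registry
    static class ConcurrentContainerLike implements Interceptor {
        final Map<Class<?>, Object> registry = new HashMap<>();
        public Object getComponentInstanceOfType(Class<?> type) {
            return registry.get(type);
        }
    }

    // An interceptor that only overrides the one method it cares about
    static class LoggingInterceptor extends AbstractInterceptor {
        LoggingInterceptor(Interceptor successor) { super(successor); }
        @Override public Object getComponentInstanceOfType(Class<?> type) {
            System.out.println("lookup: " + type.getSimpleName());
            return super.getComponentInstanceOfType(type);
        }
    }

    public static void main(String[] args) {
        ConcurrentContainerLike tail = new ConcurrentContainerLike();
        tail.registry.put(String.class, "my-component");
        Interceptor head = new LoggingInterceptor(tail);
        System.out.println(head.getComponentInstanceOfType(String.class));
    }
}
```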

The kernel leverages SPI (Service Provider Interface) to dynamically define the chain of interceptors, eXo kernel proposes several ways to do so:

First, you can define the factory that will be responsible for creating the chain of interceptors. To do so, you will need to implement the interface InterceptorChainFactory, then create a file META-INF/services/org.exoplatform.container.spi.InterceptorChainFactory that contains the FQN of your implementation, and finally deploy the jar file that contains those files. The first factory found in the classpath will be used; by default it will be the DefaultInterceptorChainFactory, which is enough for most use cases. The only case where it makes sense to define a new factory is when you want to redefine one or several static Interceptors defined by the DefaultInterceptorChainFactory.

The DefaultInterceptorChainFactory has dynamic and static Interceptors. The static interceptors are ManageableContainer, CachingContainer and ConcurrentContainer (previously called ConcurrentPicoContainer); they represent the smallest possible chain of Interceptors as they are mandatory. The order is first ManageableContainer, then CachingContainer and finally ConcurrentContainer.

The DefaultInterceptorChainFactory leverages SPI to dynamically inject Interceptors into the chain of static interceptors. So if you want to add your own interceptor, you will first need to implement the interface Interceptor as mentioned above, then create a file META-INF/services/org.exoplatform.container.spi.Interceptor that contains the FQN of your implementation, and finally deploy the jar file that contains those files. By default, your Interceptor will be the head of the chain of interceptors, but you can also use the annotations After and Before to inject your interceptor after or before a given Interceptor. Those annotations must be defined at class declaration level and expect as value the id of the interceptor after or before which you would like to inject yours, knowing that the id of ManageableContainer is Management, the id of CachingContainer is Cache and the id of ConcurrentContainer is ConcurrentContainer.

In the next example, the interceptor MyInterceptor will be injected before CachingContainer such that the new chain will be ManageableContainer -> MyInterceptor -> CachingContainer -> ConcurrentContainer.

@Before("Cache")
public static class MyInterceptor extends AbstractInterceptor
{
...
}

In case you implement several interceptors, please note that they can all be defined within the same file META-INF/services/org.exoplatform.container.spi.Interceptor; to do so, simply list the FQNs of all your interceptors, one FQN per line.

In developing mode (when the system property exo.product.developing is set to true), the standard output stream will show the FQN of the factory that the kernel uses and, if the factory is the DefaultInterceptorChainFactory, the effective chain of interceptors used for each ExoContainer.

It is now possible to use the Inject annotation to inject the dependencies of a given component, as described in JSR 330.

You can inject values thanks to a constructor:

@Inject
public JSR330_C(Provider<P> p, @Named("n2") N n, @N1 N n2)
{
   this.p = p;
   this.n = n;
   this.n2 = n2;
}

In the example above, the kernel will inject through the constructor an object of type N named n2, another object of type N annotated with the qualifier N1, and a provider allowing lazy access to an object of type P.

You can inject values to fields:

@Inject
@Named("n2")
private N n;

@Inject
@N1
private N n2;

@Inject
private Provider<P> p;

In the example above, the kernel will inject an object of type N named n2, another object of type N annotated with the qualifier N1, and a provider allowing lazy access to an object of type P.

You can inject values through methods, usually setters, but you can also make the kernel call methods for initialization purposes:

@Inject
void setN(@Named("n2") N n)
{
   this.n = n;
}

@Inject
void setN2(@N1 N n2)
{
   this.n2 = n2;
}

@Inject
public void init()
{
...
}

In the example above, the kernel will inject an object of type N named n2 through the method setN, another object of type N annotated with the qualifier N1 through the method setN2, and will finally call the init method.

The implementation can inject dependencies using private, package-private, protected and public constructors, methods and fields. Injection on final and/or static fields is not supported, nor is injection on static methods. Only one constructor can be annotated with the Inject annotation.

As the old way to create a component creates only singletons, and according to the specification only components explicitly annotated with the Singleton annotation must be considered as singletons, the kernel decides which mode to use based on the constructors. Indeed, if no constructor is annotated with the Inject annotation and the class does not have exactly one constructor that is public with no arguments, the kernel considers that the component expects the old behavior, as it is not considered JSR 330 compliant. This also helps to limit the overhead of dependency injection: it will only be done on compliant components.
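The compliance rule above can be sketched as follows; the @Inject annotation is replaced by a local stand-in so the snippet stays self-contained, and the sample classes are purely illustrative:

```java
import java.lang.annotation.*;
import java.lang.reflect.Constructor;
import java.lang.reflect.Modifier;

public class ComplianceCheck {
    // Stand-in for javax.inject.Inject, just for this sketch
    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.CONSTRUCTOR)
    @interface Inject {}

    static class Legacy { public Legacy(String dep) {} }  // old-style component
    static class Modern { @Inject Modern(String dep) {} } // JSR 330 component
    static class Simple { public Simple() {} }            // single public no-arg ctor

    // Assumed rule from the text: compliant if some constructor is annotated with
    // @Inject, or the class has exactly one constructor, public with no arguments
    static boolean isJsr330Compliant(Class<?> c) {
        Constructor<?>[] ctors = c.getDeclaredConstructors();
        for (Constructor<?> ctor : ctors)
            if (ctor.isAnnotationPresent(Inject.class)) return true;
        return ctors.length == 1
            && Modifier.isPublic(ctors[0].getModifiers())
            && ctors[0].getParameterCount() == 0;
    }

    public static void main(String[] args) {
        System.out.println(isJsr330Compliant(Legacy.class)); // false: old behavior
        System.out.println(isJsr330Compliant(Modern.class)); // true
        System.out.println(isJsr330Compliant(Simple.class)); // true
    }
}
```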

You can only use one qualifier, and the only attribute supported for a qualifier is the value of the annotation Named. You can bind only one class to a given qualifier, so make sure that the type is compatible with the object that must be injected by the kernel, otherwise you will get a ClassCastException. These limitations are for the sake of simplicity.

The names are unique within the context of a given container so to prevent any naming collision don't hesitate to add a prefix to your names.

A part of JSR 346, also known as CDI 1.1 (Contexts and Dependency Injection for Java EE), has been implemented in order to be able to set a scope on a component. For now, the implementation is limited: it is only possible to set a scope at class and/or interface definition level, since producers are not supported, and it is not possible to set a scope at field declaration level either.

The pseudo-scope Singleton and the normal scope ApplicationScoped are both considered as singletons, such that they are managed the exact same way by the kernel. The only difference is that the scope ApplicationScoped can be inherited, whereas the scope Singleton cannot, by definition.

How does the kernel get the scope?

If you intend to use a passivating scope, you will have to make sure that the component with this scope is Serializable otherwise the container will prevent the instantiation of your component.

Any time the kernel needs to create an instance of a component that has a normal scope (except ApplicationScoped), it automatically creates a proxy of your component implementation class, as defined in the specification. So if you intend to use a normal scope, you will need to deploy the latest version of javassist, otherwise you will get a ClassNotFoundException. Moreover, you need to make sure that any code accessing a component with a normal scope only accesses methods of this component and never its fields directly, otherwise you will get unexpected issues. The best way to prevent issues is to make all the fields of a component with a normal scope private.
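The real proxying is done with javassist inside the container; as a minimal, self-contained sketch, the JDK's dynamic proxies show why only method calls can be routed to the current scoped instance (all names here are illustrative):

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicReference;

public class NormalScopeProxyDemo {
    interface Greeter { String greet(); }

    // Builds a proxy that resolves the current contextual instance on every call;
    // a real container would pick the instance from the active scope instead
    static Greeter proxyFor(AtomicReference<Greeter> current) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[]{Greeter.class},
            (proxy, method, args) -> method.invoke(current.get(), args));
    }

    public static void main(String[] args) {
        AtomicReference<Greeter> current = new AtomicReference<>(() -> "instance 1");
        Greeter proxy = proxyFor(current);
        System.out.println(proxy.greet()); // routed through the proxy to instance 1
        current.set(() -> "instance 2");   // the active scope now holds another instance
        System.out.println(proxy.greet()); // same proxy reference, new target
        // A direct field read would bypass this routing entirely, hence the advice
        // to keep all fields of a normal-scoped component private
    }
}
```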

Out of the box, eXo kernel can manage the scopes Singleton, ApplicationScoped and Dependent, even if the context manager has not been defined. If you define the context manager, you will be able to define and use the scopes SessionScoped and RequestScoped.

The next code snippet shows how the context manager can be configured to allow you to use the scopes SessionScoped and RequestScoped:

<component>
 <key>org.exoplatform.container.context.ContextManager</key>
 <type>org.exoplatform.container.context.ContextManagerImpl</type>
 <component-plugins>
  <component-plugin>
   <name>main-scopes</name>
   <set-method>addContexts</set-method>
   <type>org.exoplatform.container.context.ContextPlugin</type>
   <init-params>
    <object-param>
     <name>request-scope</name>
     <object type="org.exoplatform.container.context.RequestContext"/>
    </object-param>
    <object-param>
     <name>session-scope</name>
     <object type="org.exoplatform.container.context.SessionContext"/>
    </object-param>
   </init-params>
  </component-plugin>
 </component-plugins>
</component>

You can find a simple example that shows you how to use the different supported scopes in eXo JCR in the project applications/exo.jcr.applications.examples available in the source code of eXo JCR. Read the file readme.txt for more details.

How to define your own scope?

Thanks to the extensibility of the kernel, it is now possible to interact with other containers. Indeed, you can inject components of an external container into a component of eXo kernel and you can also inject components of eXo kernel into a component of an external container.

To allow eXo kernel to interact with a given container, you will need to deploy the artifact of the extension corresponding to this particular container and its related dependencies. At startup, eXo kernel will detect automatically the new interceptor thanks to SPI, and will add it to the head of the interceptor chain.

When you try to access a component by its type or key, the interceptor will first try to get it from the eXo kernel; if it can be found there, it will be returned, otherwise the interceptor will try to get it from the external container. If you try to access a list of components by their type, it will first get the components from the eXo kernel, then add those found in the external container.
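That delegation order can be sketched with hypothetical map-backed stand-ins for the two containers:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DelegatingLookupSketch {
    // Single-component lookup: the eXo kernel is consulted first,
    // then the external container
    static Object getComponent(Class<?> type, Map<Class<?>, Object> kernel,
                               Map<Class<?>, Object> external) {
        Object c = kernel.get(type);
        return c != null ? c : external.get(type);
    }

    // List lookup by type: kernel components first, then those
    // found in the external container
    static List<Object> getComponents(Class<?> type,
                                      Map<Class<?>, List<Object>> kernel,
                                      Map<Class<?>, List<Object>> external) {
        List<Object> result = new ArrayList<>(kernel.getOrDefault(type, List.of()));
        result.addAll(external.getOrDefault(type, List.of()));
        return result;
    }

    public static void main(String[] args) {
        Map<Class<?>, Object> kernel = Map.of(String.class, "from-kernel");
        Map<Class<?>, Object> external = Map.of(String.class, "from-guice",
                                                Integer.class, 42);
        System.out.println(getComponent(String.class, kernel, external));  // from-kernel
        System.out.println(getComponent(Integer.class, kernel, external)); // 42
    }
}
```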

As we don't want to create and initialize a container for each ExoContainer, to limit the impact of such a process you will need to explicitly configure a specific component in the configuration of every ExoContainer for which you would like to enable the interceptor. By default the interceptor is disabled, so no interaction between eXo kernel and the container is possible, and no container is created or initialized. This component is specific to each extension; it is also used to provide anything the interceptor could need in order to be enabled. We rely on this component to indicate which components of the container should be shared with eXo kernel, knowing that the shared components can either be injected into components of eXo kernel, or components of eXo kernel can be injected into those shared components.

All the components of eXo kernel that are shared with the container are the components defined before the startup of the ExoContainer and configured in the configuration of this particular ExoContainer. In other words, the components of the parent ExoContainer won't be shared.

The next subsections describe the different integrations that are already available.

In the case of Google Guice 3, the artifacts to deploy are exo.kernel.container.ext.provider.impl.guice.v3 and its dependencies. This artifact defines the interceptor called org.exoplatform.container.guice.GuiceContainer. To enable the GuiceContainer, you will need to explicitly configure, in the configuration of the ExoContainer for which you want to enable it, a component of type org.exoplatform.container.guice.ModuleProvider. This component is needed to enable the GuiceContainer, but also to provide the Google Guice Module in which you define all the components that you would like to share with eXo Kernel. You will need to implement this interface explicitly and configure it.

Here is an example of the configuration of such a component. Please note that you need to use org.exoplatform.container.guice.ModuleProvider explicitly as the key in the component definition, otherwise the GuiceContainer won't be enabled.

   <component>
      <key>org.exoplatform.container.guice.ModuleProvider</key>
      <type>org.exoplatform.container.guice.TestGuiceContainer$MyModuleProvider</type>
   </component>

Here is an example of a ModuleProvider

   public static class MyModuleProvider implements ModuleProvider
   {
      public Module getModule()
      {
         return new AbstractModule()
         {
            @Override
            protected void configure()
            {
               bind(B.class);
               bind(C.class);
               bind(F.class);
               bind(G.class);
            }
         };
      }
   }

When you define a component in the configuration of the corresponding ExoContainer, this component is also defined in Google Guice, allowing Guice to inject it into its own components if needed. When the key used in the component definition is a class or an interface that is not an annotation, this class/interface is used to bind the component. In case the key is a string, the component implementation class and all its super classes and interfaces are used to bind the component, and the binding is annotated with Named and the value of the key. If the key is an annotation, the component implementation class and all its super classes and interfaces are used to bind the component, and the binding is annotated with this annotation.

In the case of Spring 3, the artifacts to deploy are exo.kernel.container.ext.provider.impl.spring.v3 and its dependencies. This artifact defines the interceptor called org.exoplatform.container.spring.SpringContainer. To enable the SpringContainer, you will need to explicitly configure, in the configuration of the ExoContainer for which you want to enable it, a component of type org.exoplatform.container.spring.ApplicationContextProvider. This component is needed to enable the SpringContainer, but also to create and provide the ApplicationContext in which you define all the components that you would like to share with eXo Kernel.

We propose out of the box two implementations of the ApplicationContextProvider, which are org.exoplatform.container.spring.FileSystemXmlApplicationContextProvider and org.exoplatform.container.spring.AnnotationConfigApplicationContextProvider. When you configure one of them, don't forget to use org.exoplatform.container.spring.ApplicationContextProvider as the key in the component definition, otherwise the SpringContainer won't be enabled.

When you define a component in the configuration of the corresponding ExoContainer, this component is also defined in Spring, allowing Spring to inject it into its own beans if needed. When the key used in the component definition is a class or an interface, the bean is registered with the FQN of this class/interface as the bean name. In case the key is a string, the bean is registered with the value of the key as the bean name. In case the key is a class or an interface that is not an annotation, the corresponding bean is defined as the primary autowire candidate; otherwise we only register the related qualifier, which is Named and its value in case the key is a string, or directly the key if the key is an annotation.

When you try to access a component instance or a component adapter by key, it will first check whether there is an existing bean whose name is the FQN of the class (in case the key is a class) or the value of the key (if the key is a String). If no bean can be found, it will then get the bean that matches the bind type and the key, if the key is an annotation class or a string. In case the key is an annotation class, it will get the first bean that has been bound with this annotation. In case the key is a string, it will get the first bean that has been bound with Named and the value of the key.

When you try to access a component instance or a component adapter by type, it will first check whether there is an existing bean whose name is the FQN of the type. If no bean can be found, it will then get the first bean that matches the type.

In the case of Weld 1 or 2, the artifacts to deploy are exo.kernel.container.ext.provider.impl.weld.v1 or exo.kernel.container.ext.provider.impl.weld.v2 and their dependencies. This artifact defines the interceptor called org.exoplatform.container.weld.WeldContainer. To enable the WeldContainer, you will need to explicitly configure, in the configuration of the ExoContainer for which you want to enable it, a component of type org.exoplatform.container.weld.WeldContainerHelper.

This component is needed to enable the WeldContainer, but also to provide the Weld extensions that you would like to register and the method isIncluded(Class<?> clazz), which is used to identify the classes of the components that should be managed by Weld; otherwise, by default, Weld will manage everything, including the classes of the components managed by eXo kernel.

All the components of Weld are automatically included in the scope of Weld, so we won't rely on the WeldContainerHelper to know whether they need to be included in the scope of Weld or not.

We propose out of the box one implementation of the WeldContainerHelper, which is org.exoplatform.container.weld.BasicWeldContainerHelper. When you configure it, don't forget to use org.exoplatform.container.weld.WeldContainerHelper as the key in the component definition, otherwise the WeldContainer won't be enabled.

The BasicWeldContainerHelper is really a basic implementation, as it doesn't provide any Weld extensions; if you don't want to rely on the SPI approach proposed out of the box by Weld to register your extensions, you will need to provide your own implementation of the WeldContainerHelper.

The BasicWeldContainerHelper allows defining the scope of Weld thanks to two values-param elements, include and exclude. They both expect as values a set of class name prefixes: the first one can be used to define a set of name prefixes of the classes to include, and the second one a set of name prefixes of the classes to exclude.

See below a configuration example:

   <component>
      <key>org.exoplatform.container.weld.WeldContainerHelper</key>
      <type>org.exoplatform.container.weld.BasicWeldContainerHelper</type>
      <init-params>
         <values-param>
            <name>include</name>
            <value>org.exoplatform.container.weld.TestWeldContainer$</value>
         </values-param>
         <values-param>
            <name>exclude</name>
            <value>org.exoplatform.container.weld.TestWeldContainer$A</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$B</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$C</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$D</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$E</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$F</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$G</value>
            <value>org.exoplatform.container.weld.TestWeldContainer$Marker</value>
         </values-param>
      </init-params>
   </component>

In this example we include all the inner classes of org.exoplatform.container.weld.TestWeldContainer but we exclude the inner classes A, B, C, D, E, F, G and Marker.
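The prefix matching described above can be sketched as follows (the exact semantics are assumed here: a class is in the Weld scope when its name starts with an include prefix and with no exclude prefix):

```java
import java.util.List;

public class WeldScopeFilterSketch {
    // Assumed semantics of the include/exclude values-param described above
    static boolean inWeldScope(String className, List<String> includes,
                               List<String> excludes) {
        for (String prefix : excludes)
            if (className.startsWith(prefix)) return false;
        for (String prefix : includes)
            if (className.startsWith(prefix)) return true;
        return false;
    }

    public static void main(String[] args) {
        List<String> includes = List.of("org.exoplatform.container.weld.TestWeldContainer$");
        List<String> excludes = List.of("org.exoplatform.container.weld.TestWeldContainer$A",
                                        "org.exoplatform.container.weld.TestWeldContainer$Marker");
        // Included: matches the include prefix and no exclude prefix
        System.out.println(inWeldScope("org.exoplatform.container.weld.TestWeldContainer$H",
                                       includes, excludes)); // true
        // Excluded: matches an exclude prefix
        System.out.println(inWeldScope("org.exoplatform.container.weld.TestWeldContainer$A",
                                       includes, excludes)); // false
    }
}
```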

When you define a component in the configuration of the corresponding ExoContainer, this component is also defined in Weld, allowing Weld to inject it into its own beans if needed. In case the key is a class or an interface that is not an annotation, the corresponding bean will be qualified with the annotations Default and Any; otherwise it will be qualified with Named and its value in case the key is a string, or directly with the key if the key is an annotation.

If the key is a class or an interface that is not an annotation, the bean will be bound to the key and the component implementation class otherwise the bean will be bound to the component implementation class and all its super classes and interfaces.

When you try to access a component instance or a component adapter by key and several candidates are found, the first one is returned, except if the key is a class or an interface that is not an annotation, in which case the first one annotated with Default is returned.

When you try to access a component instance or a component adapter by type and several candidates are found, the first one annotated with Default is returned.

It is now possible to rely on a set of annotations to make the kernel register your components lazily, which allows you to reduce the size of your configuration files. If your component is Startable and the container has already been started at the time your component is lazily created, it will be automatically started too.

You will be able to define the default implementation class to use in case you are looking for a component:

Definition by type:

To define a default implementation class when you try to find a component by type (thanks to getComponentInstanceOfType(Class<T> componentType) or getComponentAdapterOfType(Class<T> componentType)), you can use the annotation DefinitionByType that has to be set at the class or the interface definition level of the target component type (which is also the value of the parameter componentType). This annotation has two parameters which are:

Thanks to this annotation, you will be able to remove the next configuration snippet:

<component>
  <key>org.mycompany.myproject.MyService</key>
  <type>org.mycompany.myproject.MyServiceImpl</type>
</component>

And replace it with the next annotation (assuming that the previous configuration was part of the configuration of the PortalContainer and/or the StandaloneContainer):

@DefinitionByType(type = MyServiceImpl.class)
public interface MyService
{
...

Definition by qualifier:

To define the default implementation class when you try to find a component by qualifier (thanks to getComponentInstance(Object componentKey, Class<T> bindType) and getComponentAdapter(Object componentKey, Class<T> bindType) with a qualifier as componentKey), you can use the annotation DefinitionByQualifier that has to be set at the class or the interface definition level of the bind type (which is also the value of the parameter bindType). This annotation has three parameters which are:

Thanks to this annotation, you can remove the following configuration snippet:

<component>
  <key>org.mycompany.myproject.MyQualifier</key>
  <type>org.mycompany.myproject.MyServiceImpl</type>
</component>

And replace it with the following annotation (assuming that the previous configuration was part of the configuration of the PortalContainer and/or the StandaloneContainer, and that the bind type is org.mycompany.myproject.MyService):

@DefinitionByQualifier(qualifier = MyQualifier.class, type = MyServiceImpl.class)
public interface MyService
{
...

Definition by name:

To define the default implementation class to be used when a component is looked up by name (via getComponentInstance(Object componentKey, Class&lt;T&gt; bindType) and getComponentAdapter(Object componentKey, Class&lt;T&gt; bindType) with a String as componentKey), you can use the DefinitionByName annotation, which must be set on the class or interface definition of the bind type (which is also the value of the bindType parameter). This annotation has three parameters, which are:

Thanks to this annotation, you can remove the following configuration snippet:

<component>
  <key>MyName</key>
  <type>org.mycompany.myproject.MyServiceImpl</type>
</component>

And replace it with the following annotation (assuming that the previous configuration was part of the configuration of the PortalContainer and/or the StandaloneContainer, and that the bind type is org.mycompany.myproject.MyService):

@DefinitionByName(named = "MyName", type = MyServiceImpl.class)
public interface MyService
{
...
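The resolution mechanism behind these annotations can be sketched in plain Java. The annotation and resolver below are simplified, illustrative stand-ins (LazyResolver is not a kernel class); they only show how a default implementation declared on the bind type can be discovered by reflection when no component has been registered:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Simplified stand-in for the kernel's DefinitionByType annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface DefinitionByType
{
   Class<?> type();
}

@DefinitionByType(type = MyServiceImpl.class)
interface MyService
{
   String hello();
}

class MyServiceImpl implements MyService
{
   public String hello()
   {
      return "hello";
   }
}

public class LazyResolver
{
   // When no component is registered for the requested type, fall back to
   // the implementation class declared by the annotation on the bind type.
   public static <T> T resolveDefault(Class<T> componentType)
   {
      DefinitionByType def = componentType.getAnnotation(DefinitionByType.class);
      if (def == null)
      {
         return null;
      }
      try
      {
         return componentType.cast(def.type().getDeclaredConstructor().newInstance());
      }
      catch (ReflectiveOperationException e)
      {
         throw new IllegalStateException(e);
      }
   }
}
```

A lookup such as resolveDefault(MyService.class) then returns a lazily created MyServiceImpl instance, mirroring what the container does when it finds the annotation.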

An optional project called exo.kernel.container.mt allows you to switch to a multi-threaded kernel. By default, if only the project exo.kernel.container is deployed with your application, the kernel uses a single thread to create, initialize and start your components.

To reduce the boot time of your application when many components can be created, initialized and started in parallel, you can deploy the exo.kernel.container.mt artifact at the same location as the exo.kernel.container artifact (make sure that the versions match). At startup, the kernel will automatically detect the exo.kernel.container.mt artifact and switch to a multi-threaded kernel. To go back to the old kernel, simply remove the exo.kernel.container.mt artifact.

The multi-threaded kernel is pre-configured to suit most use cases, but you can still change the default behavior with the following system properties:

Table 2.24. System Properties of the Multi-threaded kernel

  • org.exoplatform.container.mt.enabled: Enables or disables the multi-threaded mode; it is mostly used for debugging purposes. By default it is set to true. It is disabled automatically if you do not have more than one processor available.

  • org.exoplatform.container.dmtosc.enabled: When the multi-threaded mode is enabled, this parameter enables a special mode that tells the kernel to disable multi-threading once all the containers have been fully started, which frees the resources allocated for the kernel's threads. As the multi-threaded kernel is mainly needed for startup, it is enabled by default, so that once started the kernel uses only one thread as before.

  • org.exoplatform.container.as.enabled: Enables or disables the "auto solve dependency issues" mode. If enabled, the kernel automatically detects explicit calls to getComponentInstanceOfType and/or getComponentInstance in constructors or initializers and, if the developing mode is enabled, prints a stack trace in the log file so that you can identify the location of the incorrect code to fix. By default it is set to true.

  • org.exoplatform.container.mt.tps: Explicitly defines the total number of threads to allocate to the kernel. By default it is set to twice the number of available processors, capped at 30 to avoid consuming too many resources. If the default value does not suit you, set this parameter to a higher or lower value; when set explicitly, it can be greater than 30.
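For example, to keep the multi-threaded mode after startup and cap the kernel thread pool at 16 threads, the corresponding -D flags can be appended to your usual server start command (the values chosen here are purely illustrative):

```
-Dorg.exoplatform.container.mt.enabled=true
-Dorg.exoplatform.container.dmtosc.enabled=false
-Dorg.exoplatform.container.mt.tps=16
```

Disabling dmtosc keeps the kernel threads alive after all containers have started, while mt.tps overrides the default cap of 30 threads with an explicit value.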

Note

To be able to launch tasks in parallel, the kernel needs to know the dependencies clearly. So if you get stack traces when you enable the multi-threaded kernel, it is probably because you have some implicit dependencies that you need to define explicitly, to make sure that those implicit dependencies are started before your component.

Note

In case the kernel detects cyclic "create dependencies" (dependencies that you define in the constructor) between two or several components, you will get a CyclicDependencyException. To fix it, you will need to move the dependency of at least one component affected by this cyclic dependency from the constructor to an initializer.

Note

Sometimes the kernel has to deal with cyclic dependencies of the form: A has B as a "create dependency" and B has A as an "init dependency" (a dependency that you define in the initializers, which can be fields or methods annotated with Inject, or component plugins). In that case, it will always make sure that B is started before A. In other words, the "create dependencies" of a component are always started before the component itself.
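The cycle-breaking pattern described above can be sketched in plain Java, without the kernel's injection machinery (the classes and the wire() helper are illustrative): A keeps B as a constructor dependency, while B receives A through an initializer instead of its constructor, so no constructor cycle remains:

```java
class A
{
   private final B b;

   // "create dependency": B must exist before A can be constructed
   A(B b)
   {
      this.b = b;
   }

   B getB() { return b; }
}

class B
{
   private A a;

   // "init dependency": A is injected after construction, which breaks
   // the constructor cycle A -> B -> A
   void setA(A a)
   {
      this.a = a;
   }

   A getA() { return a; }
}

public class CycleExample
{
   public static A wire()
   {
      B b = new B();   // B is created (and would be started) first
      A a = new A(b);  // A is created with its constructor dependency satisfied
      b.setA(a);       // the back-reference is set in the initializer phase
      return a;
   }
}
```

This mirrors the ordering guarantee stated above: the "create dependency" B is fully created before A, and the back-reference from B to A is only filled in afterwards.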

By default, eXo JCR uses the Apache DBCP connection pool. eXo JCR also offers the possibility to use HikariCP as a Java database connection pool (JDBC pool). If you intend to use the HikariCP connection pool, you will have to configure the object factory parameter of the component plugin org.exoplatform.services.naming.BindReferencePlugin as org.exoplatform.services.hikari.HikariDataSourceFactory, and set the HikariCP properties.

A configuration example:

<external-component-plugins>
  <target-component>org.exoplatform.services.naming.InitialContextInitializer</target-component>
  <component-plugin>
    <name>bind.datasource</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.naming.BindReferencePlugin</type>
    <init-params>
      <value-param>
        <name>bind-name</name>
        <value>jdbcexo</value>
      </value-param>
      <value-param>
        <name>class-name</name>
        <value>javax.sql.DataSource</value>
      </value-param>
      <value-param>
        <name>factory</name>
        <value>org.exoplatform.services.hikari.HikariDataSourceFactory</value>
      </value-param>
      <properties-param>
        <name>ref-addresses</name>
        <description>ref-addresses</description>
        <property name="dataSourceClassName" value="com.mysql.jdbc.jdbc2.optional.MysqlDataSource" />
        <property name="dataSource.url" value="jdbc:mysql://localhost/portal" />
        <property name="dataSource.user" value="root" />
        <property name="dataSource.password" value="admin" />
        <property name="maximumPoolSize" value="600" />
        <property name="minimumPoolSize" value="5" />
        <property name="dataSource.cachePrepStmts" value="true" />
        <property name="dataSource.prepStmtCacheSize" value="250" />
        <property name="dataSource.prepStmtCacheSqlLimit" value="2048" />
        <property name="dataSource.useServerPrepStmts" value="true" />
      </properties-param>
    </init-params>
  </component-plugin>
</external-component-plugins>

The eXo Core is a set of common services, such as Authentication and Security, Organization, Database, Logging, JNDI, LDAP, Document reader, and other services, that are used by eXo products and modules. It can also be used in business logic.

The database creator DBCreator is responsible for executing a DDL script at runtime. A DDL script may contain templates for the database name, user name and password, which will be replaced by real values at execution time.

Three templates are supported:

The service's configuration:

   <component>
      <key>org.exoplatform.services.database.creator.DBCreator</key>
      <type>org.exoplatform.services.database.creator.DBCreator</type>
      <init-params>
      <properties-param>
            <name>db-connection</name>
            <description>database connection properties</description>
            <property name="driverClassName" value="com.mysql.jdbc.Driver" />
            <property name="url" value="jdbc:mysql://localhost/" />
            <property name="username" value="root" />
            <property name="password" value="admin" />
            <property name="additional_property" value="value" />
            ...
            <property name="additional_property_n" value="value" />
         </properties-param>
         <properties-param>
            <name>db-creation</name>
            <description>database creation properties</description>
            <property name="scriptPath" value="script.sql" />
            <property name="username" value="testuser" />
            <property name="password" value="testpwd" />
         </properties-param>
      </init-params>
   </component>

The db-connection properties section contains the parameters needed to connect to the database server.

There are four reserved and mandatory properties: driverClassName, url, username and password. The db-connection section may also contain additional properties.

For example, the following additional properties allow reconnecting to a MySQL database when the connection was refused:

         <properties-param>
            <name>db-connection</name>
            ...
            <property name="validationQuery" value="select 1"/>
            <property name="testOnReturn" value="true"/>
            ...
         </properties-param>

The db-creation properties section contains parameters for database creation using the DDL script:

The db-connection properties sections specific to the different databases are shown below.

MySQL:

<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost/" />
<property name="username" value="root" />
<property name="password" value="admin" />

PostgreSQL:

<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost/" />
<property name="username" value="root" />
<property name="password" value="admin" />

PostgrePlus:

<property name="driverClassName" value="com.edb.Driver" />
<property name="url" value="jdbc:edb://localhost/" />
<property name="username" value="root" />
<property name="password" value="admin" />

MSSQL:

<property name="driverClassName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
<property name="url" value="jdbc:sqlserver://localhost:1433;"/>
<property name="username" value="root"/>
<property name="password" value="admin"/>

Sybase:

<property name="driverClassName" value="com.sybase.jdbc3.jdbc.SybDriver" />
<property name="url" value="jdbc:sybase:Tds:localhost:5000/"/>
<property name="username" value="root"/>
<property name="password" value="admin"/>

Oracle:

<property name="driverClassName" value="oracle.jdbc.OracleDriver" />
<property name="url" value="jdbc:oracle:thin:@db2.exoua-int:1521:orclvm" />
<property name="username" value="root" />
<property name="password" value="admin" />

The purpose is to provide a simple, unified way for authentication and for storing/propagating user sessions through all the eXo components and J2EE containers. JAAS is supposed to be the primary login mechanism, but the Security Service framework should not prevent other (custom or standard) mechanisms from being used. You can learn more about JAAS in the Java Tutorial.

The central point of this framework is the ConversationState object which stores all information about the state of the current user (very similar to the Session concept). The same ConversationState also stores acquired attributes of an Identity which is a set of principals to identify a user.

The ConversationState has a definite lifetime. This object should be created when the user's identity becomes known to eXo (login procedure) and destroyed when the user leaves an eXo-based application (logout procedure). With JAAS, this should happen in the LoginModule's login() and logout() methods respectively.

An Authenticator is responsible for Identity creation; its main methods are:

public interface Authenticator
{
   /**
    * Authenticate user and return userId which can be different to username.
    * 
    * @param credentials - list of users credentials (such as name/password, X509
    *          certificate etc)
    * @return userId the user's identifier.
    * @throws LoginException in case the authentication fails
    * @throws Exception if any exception occurs
    */
   String validateUser(Credential[] credentials) throws LoginException, Exception;

   /**
    * @param userId the user's identifier
    * @return returns the Identity representing the user
    * @throws Exception if any exception occurs
    */
   Identity createIdentity(String userId) throws Exception;

   /**
    * Gives the last exception that occurs while calling {@link #validateUser(Credential[])}. This
    * allows applications outside JAAS like UI to be able to know which exception occurs
    * while calling {@link #validateUser(Credential[])}.
    * @return the original Exception that occurs while calling {@link #validateUser(Credential[])} 
    * for the very last time if an exception occurred, <code>null</code> otherwise.
    */
   Exception getLastExceptionOnValidateUser();
}

It is up to the application developer (and deployer) whether to use the Authenticator component(s) and how many implementations of this component should be deployed in the eXo container. The developer is free to create an Identity object in a different way, but the Authenticator component is the highly recommended way from an architectural standpoint.

The typical functionality of the validateUser(Credential[] credentials) method is to compare the incoming credentials (username/password, digest, etc.) with the credentials stored in an implementation-specific database. validateUser(Credential[] credentials) then returns the userId, or throws a LoginException in case of wrong credentials.
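This typical flow can be sketched with a self-contained example. The credential classes and the in-memory store below are simplified stand-ins for illustration only (the real method throws a LoginException and queries an implementation-specific database, and the real classes live in the eXo security packages):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for the eXo credential classes
class UsernameCredential
{
   final String username;
   UsernameCredential(String username) { this.username = username; }
}

class PasswordCredential
{
   final String password;
   PasswordCredential(String password) { this.password = password; }
}

public class InMemoryAuthenticator
{
   // hypothetical in-memory credential store standing in for the real database
   private final Map<String, String> passwords = new HashMap<String, String>();

   public InMemoryAuthenticator()
   {
      passwords.put("root", "exo");
   }

   /** Compares incoming credentials with the stored ones; returns the userId on success. */
   public String validateUser(Object[] credentials)
   {
      String username = null;
      String password = null;
      for (Object c : credentials)
      {
         if (c instanceof UsernameCredential)
         {
            username = ((UsernameCredential)c).username;
         }
         else if (c instanceof PasswordCredential)
         {
            password = ((PasswordCredential)c).password;
         }
      }
      if (username == null || !passwords.containsKey(username)
         || !passwords.get(username).equals(password))
      {
         // the real implementation throws a LoginException here
         throw new IllegalArgumentException("Wrong credentials");
      }
      // note: the returned userId may differ from the username
      return username;
   }
}
```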

The default Authenticator implementation is org.exoplatform.services.organization.auth.OrganizationAuthenticatorImpl, which compares the incoming username/password credentials with the ones stored in OrganizationService. Configuration example:

<component>
  <key>org.exoplatform.services.security.Authenticator</key> 
  <type>org.exoplatform.services.organization.auth.OrganizationAuthenticatorImpl</type>
</component>

The framework described is not coupled to any authentication mechanism, but the most logical one, implemented by default, is the JAAS login module. The typical sequence looks as follows (see org.exoplatform.services.security.jaas.DefaultLoginModule):

Authenticator authenticator = (Authenticator) container()
          .getComponentInstanceOfType(Authenticator.class); 
// RolesExtractor can be null     
RolesExtractor rolesExtractor = (RolesExtractor) container()
          .getComponentInstanceOfType(RolesExtractor.class);


Credential[] credentials = new Credential[] {new UsernameCredential(username), new PasswordCredential(password) };
String userId = authenticator.validateUser(credentials);
identity = authenticator.createIdentity(userId);

When initializing the login module, you can set the option parameter "singleLogin". With this option, you can disallow the same Identity from logging in a second time.

By default, singleLogin is disabled, so the same identity can be registered more than once. The parameter can be passed in the form singleLogin=yes or singleLogin=true.

IdentityRegistry identityRegistry = (IdentityRegistry) getContainer().getComponentInstanceOfType(IdentityRegistry.class);
      
if (singleLogin && identityRegistry.getIdentity(identity.getUserId()) != null) 
  throw new LoginException("User " + identity.getUserId() + " already logined.");

identity.setSubject(subject);
identityRegistry.register(identity);

When using several LoginModules, JAAS allows placing the login() and commit() methods in different REQUIRED modules.

After that, the web application must use SetCurrentIdentityFilter. This filter obtains the ConversationRegistry object and tries to get the ConversationState by sessionId (HttpSession). If there is no ConversationState, SetCurrentIdentityFilter creates a new one, registers it and sets it as the current one using ConversationState.setCurrent(state).
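Such a filter is declared in the application's web.xml; a sketch follows (the class name org.exoplatform.services.security.web.SetCurrentIdentityFilter and the url-pattern reflect the usual eXo packaging, so verify them against your distribution):

```xml
<filter>
  <filter-name>SetCurrentIdentityFilter</filter-name>
  <filter-class>org.exoplatform.services.security.web.SetCurrentIdentityFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>SetCurrentIdentityFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```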

This listener must be configured in web.xml. The sessionDestroyed(HttpSessionEvent) method is called by the servlet container. This method removes the ConversationState from the ConversationRegistry (ConversationRegistry.unregister(sessionId)) and calls the LoginModule.logout() method.

ConversationRegistry conversationRegistry = (ConversationRegistry) getContainer().getComponentInstanceOfType(ConversationRegistry.class);

ConversationState conversationState = conversationRegistry.unregister(sessionId);

if (conversationState != null) {
  log.info("Remove conversation state " + sessionId);
  if (conversationState.getAttribute(ConversationState.SUBJECT) != null) {
    Subject subject = (Subject) conversationState.getAttribute(ConversationState.SUBJECT);
    LoginContext ctx = new LoginContext("exo-domain", subject);
    ctx.logout();
  } else {
    log.warn("Subject was not found in ConversationState attributes.");
  }
}

OrganizationService is the service that provides access to the Organization model. This model is composed of:

It is the basis of eXo personalization and authorizations and is used all over the platform. The model is abstract and does not rely on any specific storage. Multiple implementations exist in eXo:

To create a custom organization service, you need to implement several interfaces and extend some classes, which are listed below.

First of all, you need to create classes implementing the following interfaces (each of which represents a basic unit of the organization service):

Note

After each set method is called, the developer must call the UserHandler.saveUser (GroupHandler.saveGroup, MembershipHandler.saveMembership, etc.) method to persist the changes.

You can find examples of the implementations mentioned above on GitHub:

After you have created the basic organization service unit instances, you need to create classes to handle them, e.g. to persist changes, add listeners, etc. For that purpose, you need to implement the corresponding interfaces:

You can find examples of the implementations mentioned above on GitHub:

Finally, you need to create your main custom organization service class. It must extend org.exoplatform.services.organization.BaseOrganizationService. The BaseOrganizationService class contains the organization service unit handlers as protected fields, so you can initialize them according to your purposes. It also implements the methods of the org.exoplatform.services.organization.OrganizationService interface. This is the class you need to mention in the configuration file if you want to use your custom organization service.

You can find an example of such a class on GitHub: JCROrganizationServiceImpl.

Make sure that your custom organization service implementation is fully compliant with the Organization Service TCK tests. The tests are available as a maven artifact:

groupId - org.exoplatform.core

artifactId - exo.core.component.organization.tests

You can find TCK tests package source code here

Note

In order to be able to run unit tests you may need to configure the following maven plugins:

Check the pom.xml file to find one way to configure the maven project object model. A more detailed description can be found in the dedicated section called "Organization Service TCK tests configuration".

Use the Organization Service Initializer to create default users, groups and membership types.

<external-component-plugins>
    <target-component>org.exoplatform.services.organization.OrganizationService</target-component>
    <component-plugin>
      <name>init.service.listener</name>
      <set-method>addListenerPlugin</set-method>
      <type>org.exoplatform.services.organization.OrganizationDatabaseInitializer</type>
      <description>this listener populate organization data for the first launch</description>
      <init-params>
        <value-param>
          <name>checkDatabaseAlgorithm</name>
          <description>check database</description>
          <value>entry</value>
        </value-param>
        <value-param>
          <name>printInformation</name>
          <description>Print information init database</description>
          <value>false</value>
        </value-param>
        <object-param>
          <name>configuration</name>
          <description>description</description>
          <object type="org.exoplatform.services.organization.OrganizationConfig">
            <field name="membershipType">
              <collection type="java.util.ArrayList">
                <value>
                  <object type="org.exoplatform.services.organization.OrganizationConfig$MembershipType">
                    <field name="type">
                      <string>manager</string>
                    </field>
                    <field name="description">
                      <string>manager membership type</string>
                    </field>
                  </object>
                </value>
              </collection>
            </field>
            
            <field name="group">
              <collection type="java.util.ArrayList">
                <value>
                  <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                    <field name="name">
                      <string>platform</string>
                    </field>
                    <field name="parentId">
                      <string></string>
                    </field>
                    <field name="description">
                      <string>the /platform group</string>
                    </field>
                    <field name="label">
                      <string>Platform</string>
                    </field>
                  </object>
                </value>
                <value>
                  <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                    <field name="name">
                      <string>administrators</string>
                    </field>
                    <field name="parentId">
                      <string>/platform</string>
                    </field>
                    <field name="description">
                      <string>the /platform/administrators group</string>
                    </field>
                    <field name="label">
                      <string>Administrators</string>
                    </field>
                  </object>
                </value>
               </collection>
            </field>
            
            <field name="user">
              <collection type="java.util.ArrayList">
                <value>
                  <object type="org.exoplatform.services.organization.OrganizationConfig$User">
                    <field name="userName">
                      <string>root</string>
                    </field>
                    <field name="password">
                      <string>exo</string>
                    </field>
                    <field name="firstName">
                      <string>Root</string>
                    </field>
                    <field name="lastName">
                      <string>Root</string>
                    </field>
                    <field name="email">
                      <string>root@localhost</string>
                    </field>
                    <field name="displayName">
                      <string>Root</string>
                    </field>
                    <field name="groups">
                      <string>
                        manager:/platform/administrators
                      </string>
                    </field>
                  </object>
                </value>
              </collection>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>

Params for membership type:

Params for group:

Params for user:

The Organization Service provides a mechanism to receive notifications when:

  • A User is created, deleted, modified, enabled or disabled.

  • A Group is created, deleted or modified.

  • A Membership is created or removed.

This mechanism is very useful to cascade some actions when the organization model is modified. For example, it is currently used to:

  • Initialize the personal portal pages.

  • Initialize the personal calendars, address books and mail accounts in CS.

  • Create drives and personal areas in ECM.

To implement your own listener, you just need to extend some existing listener classes. These classes define hooks that are invoked before or after operations are performed on the organization model.
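As an illustration, here is a minimal listener sketch. The base class below is a simplified stand-in that mimics the pre/post hook style of the real listener classes in org.exoplatform.services.organization; it is not the actual eXo API:

```java
// Simplified stand-in mimicking the pre/post hook style of the
// organization listener base classes (not the real eXo API).
class UserEventListener
{
   // invoked before a user is persisted; isNew distinguishes creation from update
   public void preSave(String userName, boolean isNew)
   {
   }

   // invoked after the user has been persisted
   public void postSave(String userName, boolean isNew)
   {
   }
}

public class MyUserListener extends UserEventListener
{
   final StringBuilder log = new StringBuilder();

   @Override
   public void preSave(String userName, boolean isNew)
   {
      log.append("pre:").append(userName).append(';');
   }

   @Override
   public void postSave(String userName, boolean isNew)
   {
      // cascade an action here, e.g. initialize the user's personal area
      log.append("post:").append(userName).append(';');
   }
}
```

The handler invokes the pre-hook before persisting the entity and the post-hook afterwards, which is where cascading actions such as initializing personal pages belong.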

Registering the listeners is then achieved by using the ExoContainer plugin mechanism. Learn more about it on the Service Configuration for Beginners article.

To effectively register organization service listeners, you simply need to use the addListenerPlugin setter injector.

So, the easiest way to register your listeners is to pack them into a .jar and create a configuration file in it under mylisteners.jar!/conf/portal/configuration.xml:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration>
 <external-component-plugins>
  <target-component>org.exoplatform.services.organization.OrganizationService</target-component>
   <component-plugin>
    <name>myuserplugin</name>
    <set-method>addListenerPlugin</set-method>
    <type>org.example.MyUserListener</type>
    <description></description>      
   </component-plugin>
   <component-plugin>
    <name>mygroupplugin</name>
    <set-method>addListenerPlugin</set-method>
    <type>org.example.MyGroupListener</type>
    <description></description>      
   </component-plugin>
   <component-plugin>
    <name>mymembershipplugin</name>
    <set-method>addListenerPlugin</set-method>
    <type>org.example.MyMembershipListener</type>
    <description></description>      
   </component-plugin>
  </external-component-plugins>
</configuration>

Now, simply deploy the jar under $TOMCAT_HOME/lib and your listeners are ready!

Note

Be aware that you need to set the proper RuntimePermission to be able to add or remove listeners. To do that, grant the following permission to your code:

permission java.lang.RuntimePermission "manageListeners"

As usual, it is quite simple to use our XML configuration syntax to configure and parametrize different databases, for the eXo tables but also for your own use.

The default DB configuration uses HSQLDB, a Java database that is quite useful for demonstrations.

<component> 
   <key>org.exoplatform.services.database.HibernateService</key>
   <jmx-name>exo-service:type=HibernateService</jmx-name>
   <type>org.exoplatform.services.database.impl.HibernateServiceImpl</type>
   <init-params>
      <properties-param>
         <name>hibernate.properties</name>
         <description>Default Hibernate Service</description>
         <property name="hibernate.show_sql" value="false"/>
         <property name="hibernate.cglib.use_reflection_optimizer" value="true"/>
         <property name="hibernate.connection.url" value="jdbc:hsqldb:file:../temp/data/portal"/>
         <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/>
         <property name="hibernate.connection.autocommit" value="true"/>
         <property name="hibernate.connection.username" value="sa"/>
         <property name="hibernate.connection.password" value=""/>
         <property name="hibernate.cache.region.factory_class" value="org.exoplatform.services.database.impl.ExoCacheRegionFactory"/>
         <property name="hibernate.cache.use_second_level_cache" value="true"/>
         <property name="hibernate.cache.use_query_cache" value="true"/>
         <property name="hibernate.hbm2ddl.auto" value="update"/>
         <property name="hibernate.c3p0.min_size" value="5"/>
         <property name="hibernate.c3p0.max_size" value="20"/>
         <property name="hibernate.c3p0.timeout" value="1800"/>
         <property name="hibernate.c3p0.max_statements" value="50"/>
      </properties-param>
   </init-params>
</component>

In the init parameter section, we define the default hibernate properties including the DB URL, the driver and the credentials in use.

That configuration can be overridden for any portal, depending on the needs of your environment.

Several databases have been tested and can be used in production, which is not the case for HSQLDB: HSQLDB should only be used for development environments and demonstrations.

For MySQL

<component> 
   <key>org.exoplatform.services.database.HibernateService</key>
   <jmx-name>database:type=HibernateService</jmx-name>
   <type>org.exoplatform.services.database.impl.HibernateServiceImpl</type>
   <init-params>
      <properties-param>
         <name>hibernate.properties</name>
         <description>Default Hibernate Service</description>
         <property name="hibernate.show_sql" value="false"/>
         <property name="hibernate.cglib.use_reflection_optimizer" value="true"/>
         <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/exodb?relaxAutoCommit=true&amp;amp;autoReconnect=true&amp;amp;useUnicode=true&amp;amp;characterEncoding=utf8"/>
         <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
         <property name="hibernate.connection.autocommit" value="true"/>
         <property name="hibernate.connection.username" value="sa"/>
         <property name="hibernate.connection.password" value=""/>
         <property name="hibernate.cache.region.factory_class" value="org.exoplatform.services.database.impl.ExoCacheRegionFactory"/>
         <property name="hibernate.cache.use_second_level_cache" value="true"/>
         <property name="hibernate.cache.use_query_cache" value="true"/>
         <property name="hibernate.hbm2ddl.auto" value="update"/>
         <property name="hibernate.c3p0.min_size" value="5"/>
         <property name="hibernate.c3p0.max_size" value="20"/>
         <property name="hibernate.c3p0.timeout" value="1800"/>
         <property name="hibernate.c3p0.max_statements" value="50"/>
       </properties-param>
   </init-params>
</component>

It is possible to use the eXo Hibernate service and register your annotated classes or Hibernate hbm.xml files to leverage some add-on features of the service, such as automatic table creation and caching of the Hibernate session in a ThreadLocal object during the whole request lifecycle. To do so, you just have to add a plugin and indicate the location of your files.

Registering custom XML files can be done in this way:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration>
  <external-component-plugins>
    <target-component>org.exoplatform.services.database.HibernateService</target-component>
    <component-plugin> 
      <name>add.hibernate.mapping</name>
      <set-method>addPlugin</set-method>
      <type>org.exoplatform.services.database.impl.AddHibernateMappingPlugin</type>
      <init-params>
        <values-param>
          <name>hibernate.mapping</name>
          <value>org/exoplatform/services/organization/impl/UserImpl.hbm.xml</value>
          <value>org/exoplatform/services/organization/impl/MembershipImpl.hbm.xml</value>
          <value>org/exoplatform/services/organization/impl/GroupImpl.hbm.xml</value>
          <value>org/exoplatform/services/organization/impl/MembershipTypeImpl.hbm.xml</value>
          <value>org/exoplatform/services/organization/impl/UserProfileData.hbm.xml</value>
        </values-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>  
</configuration>

Registering custom annotated classes can be done in this way:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration>
  <external-component-plugins>
    <target-component>org.exoplatform.services.database.HibernateService</target-component>
    <component-plugin> 
      <name>add.hibernate.annotations</name>
      <set-method>addPlugin</set-method>
      <type>org.exoplatform.services.database.impl.AddHibernateMappingPlugin</type>
      <init-params>
        <values-param>
          <name>hibernate.annotations</name>
          <value>org.exoplatform.services.organization.impl.UserProfileData</value>
          <value>org.exoplatform.services.organization.impl.MembershipImpl</value>
          <value>org.exoplatform.services.organization.impl.GroupImpl</value>
          <value>org.exoplatform.services.organization.impl.MembershipTypeImpl</value>
        </values-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>  
</configuration>

You may decide to map eXo users to an existing directory. eXo provides a flexible implementation of its OrganizationService on top of LDAP. It can be used with any LDAP-compliant directory and even Active Directory. This page will guide you through configuring eXo Platform to work with your directory.

If you just want to see how eXo works with LDAP, eXo comes with a predefined LDAP configuration. You only need to activate it and eXo will create everything it needs at startup.

You need to have a working LDAP server and a user with write permissions.

eXo starts and automatically creates its organization model in your directory tree, following the structure of the default LDAP schema.

That's it! Now eXo uses your LDAP directory as its organization model storage. Users, groups and memberships are now stored in and retrieved from there. We suggest that you perform some basic operations with the eXo user management portlet and observe what changes in your directory tree.

If you have an existing LDAP server, the eXo predefined settings will likely not match your directory structure. The eXo LDAP organization service implementation was written with flexibility in mind and can certainly be configured to meet your requirements.

The configuration is done in the ldap-configuration.xml file, and this section explains the numerous parameters it contains.

First, start with the connection settings, which tell eXo how to connect to your directory server. These settings are very close to JNDI API context parameters. This configuration is activated by the ldap.config init-param of the LDAPServiceImpl service.

<component>
  <key>org.exoplatform.services.ldap.LDAPService</key>
  <type>org.exoplatform.services.ldap.impl.LDAPServiceImpl</type>
  <init-params>
    <object-param>
      <name>ldap.config</name>
      <description>Default ldap config</description>
      <object type="org.exoplatform.services.ldap.impl.LDAPConnectionConfig">
        <field name="providerURL"><string>ldap://127.0.0.1:389,10.0.0.1:389</string></field>
        <field name="rootdn"><string>CN=Manager,DC=exoplatform,DC=org</string></field>
        <field name="password"><string>secret</string></field>
        <!-- field  name="authenticationType"><string>simple</string></field-->           
        <field name="version"><string>3</string></field>
        <field  name="referralMode"><string>follow</string></field>            
        <!-- field  name="serverName"><string>active.directory</string></field-->
        <field name="minConnection"><int>5</int></field>
        <field name="maxConnection"><int>10</int></field>
        <field name="timeout"><int>50000</int></field>
      </object>
    </object-param>
  </init-params>
</component>
  • providerURL: the LDAP server URL (see PROVIDER_URL). For multiple LDAP servers, use a comma-separated list of host:port (Ex. ldap://127.0.0.1:389,10.0.0.1:389).

  • rootdn: the DN of the user that the service will use to authenticate on the server (see SECURITY_PRINCIPAL).

  • password: password for user rootdn (see SECURITY_CREDENTIALS).

  • authenticationType: type of authentication to be used (see SECURITY_AUTHENTICATION). Use one of none, simple, strong. Default is simple.

  • version: LDAP protocol version (see java.naming.ldap.version). Set to 3 if your server supports LDAP V3.

  • referralMode: one of follow, ignore, throw (see REFERRAL).

  • serverName: set this to active.directory in order to work with Active Directory servers. Any other value will be ignored and the service will behave as with a standard LDAP server.

  • maxConnection: the maximum number of connections per connection identity that can be maintained concurrently.

  • minConnection: the number of connections per connection identity to create when initially creating a connection for the identity.

  • timeout: the number of milliseconds that an idle connection may remain in the pool without being closed and removed from the pool.
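Taken together, these settings map naturally onto a JNDI environment. The following is a minimal, hypothetical sketch of that mapping (the class name and wiring are illustrative, not eXo's actual implementation), using only standard `javax.naming` keys:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Illustrative sketch only: shows how the LDAPConnectionConfig fields above
// would translate into standard JNDI environment properties.
public class LdapEnvSketch {

    public static Hashtable<String, String> buildEnv(String providerURL,
            String rootdn, String password, String version, String referralMode) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, providerURL);          // providerURL
        env.put(Context.SECURITY_AUTHENTICATION, "simple");  // default authenticationType
        env.put(Context.SECURITY_PRINCIPAL, rootdn);         // rootdn
        env.put(Context.SECURITY_CREDENTIALS, password);     // password
        env.put("java.naming.ldap.version", version);        // version (no Context constant)
        env.put(Context.REFERRAL, referralMode);             // referralMode
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = buildEnv(
                "ldap://127.0.0.1:389,10.0.0.1:389",
                "CN=Manager,DC=exoplatform,DC=org", "secret", "3", "follow");
        System.out.println(env.get(Context.PROVIDER_URL));
    }
}
```

Such an environment is what a JNDI `InitialDirContext` would be created from; the pooling parameters (minConnection, maxConnection, timeout) are handled separately by the connection pool.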

Next, you need to configure the eXo OrganizationService to tell it how the directory is structured and how to interact with it. This is managed by a couple of init-params, ldap.userDN.key and ldap.attribute.mapping, in the ldap-configuration.xml file (by default located at portal.war/WEB-INF/conf/organization).

<component>
  <key>org.exoplatform.services.organization.OrganizationService</key>
  <type>org.exoplatform.services.organization.ldap.OrganizationServiceImpl</type>
  [...]
  <init-params>
    <value-param>
      <name>ldap.userDN.key</name>
      <description>The key used to compose user DN</description>
      <value>cn</value>
    </value-param>
    <object-param>
      <name>ldap.attribute.mapping</name>
      <description>ldap attribute mapping</description>
      <object type="org.exoplatform.services.organization.ldap.LDAPAttributeMapping">
      [...]
    </object-param>
  </init-params>
  [...]
</component>

ldap.attribute.mapping maps your LDAP attributes to eXo. First, there are two main parameters to configure in it:

<field name="baseURL"><string>dc=exoplatform,dc=org</string></field>
<field name="ldapDescriptionAttr"><string>description</string></field>

Other parameters are discussed in the following sections.

Here are the main parameters to map eXo users to your directory:

<field name="userURL"><string>ou=users,ou=portal,dc=exoplatform,dc=org</string></field>
<field name="userObjectClassFilter"><string>objectClass=person</string></field>
<field name="userLDAPClasses"><string>top,person,organizationalPerson,inetOrgPerson</string></field>

Example:

uid=john,cn=People,o=MyCompany,c=com

However, if users exist deeper in the tree under userURL, eXo will still be able to retrieve them.

Example:

uid=tom,ou=France,ou=EMEA,cn=People,o=MyCompany,c=com

Example: john and tom will be recognized as valid eXo users, but the EMEA and France entries will be ignored in the following subtree:

uid=john,cn=People,o=MyCompany,c=com
  objectClass: person
  …
ou=EMEA,cn=People,o=MyCompany,c=com
  objectClass: organizationalUnit
  …
    ou=France,ou=EMEA,cn=People,o=MyCompany,c=com
      objectClass: organizationalUnit
      …
        uid=tom,ou=EMEA,cn=People,o=MyCompany,c=com
          objectClass: person
          …

When creating a new user, an entry will be created with the given objectClass attributes. The classes must at least define cn and any attribute referenced in the user mapping.

Example: Adding the user Marry Simons could produce:

uid=marry,cn=users,ou=portal,dc=exoplatform,dc=org
  objectclass: top
  objectClass: person
  objectClass: organizationalPerson
  objectClass: inetOrgPerson
  …
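The DN of such a new entry is composed from ldap.userDN.key and the userURL base. A hypothetical sketch of that composition (the class is illustrative and omits the DN-character escaping a real implementation needs):

```java
// Hypothetical sketch of how a user DN is composed from ldap.userDN.key
// and the userURL base. Special DN characters are not escaped here.
public class UserDnSketch {

    public static String composeUserDn(String userDnKey, String username, String userBase) {
        return userDnKey + "=" + username + "," + userBase;
    }

    public static void main(String[] args) {
        // With ldap.userDN.key = "cn" and the userURL configured above:
        System.out.println(composeUserDn("cn", "marry",
                "ou=users,ou=portal,dc=exoplatform,dc=org"));
        // -> cn=marry,ou=users,ou=portal,dc=exoplatform,dc=org
    }
}
```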

eXo groups can be mapped to organizational or applicative groups defined in your directory.

<field name="groupsURL"><string>ou=groups,ou=portal,dc=exoplatform,dc=org</string></field>
<field name="groupLDAPClasses"><string>top,organizationalUnit</string></field>
<field name="groupObjectClassFilter"><string>objectClass=organizationalUnit</string></field>

Groups can be structured hierarchically under groupsURL.

Example: Groups communication, communication/marketing and communication/press would map to:

ou=communication,ou=groups,ou=portal,dc=exoplatform,dc=org
…
  ou=marketing,ou=communication,ou=groups,ou=portal,dc=exoplatform,dc=org
  …            
  ou=press,ou=communication,ou=groups,ou=portal,dc=exoplatform,dc=org                          
  …
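The hierarchy above suggests a simple rule: each segment of the eXo group path becomes an RDN under groupsURL, deepest segment first. A hypothetical sketch of that mapping (illustrative only; it assumes "ou" naming and omits DN escaping):

```java
// Hypothetical sketch of mapping an eXo group path such as
// "communication/marketing" to the DN of its entry under groupsURL.
public class GroupDnSketch {

    public static String groupDn(String groupPath, String groupsBase) {
        String[] parts = groupPath.split("/");
        StringBuilder dn = new StringBuilder();
        // The deepest path segment becomes the leftmost RDN of the DN.
        for (int i = parts.length - 1; i >= 0; i--) {
            dn.append("ou=").append(parts[i]).append(",");
        }
        return dn.append(groupsBase).toString();
    }

    public static void main(String[] args) {
        System.out.println(groupDn("communication/marketing",
                "ou=groups,ou=portal,dc=exoplatform,dc=org"));
        // -> ou=marketing,ou=communication,ou=groups,ou=portal,dc=exoplatform,dc=org
    }
}
```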

When creating a new group, an entry will be created with the given objectClass attributes. The classes must define at least the required attributes: ou, description and l.

Example: Adding the group human-resources could produce:

ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org
  objectclass: top
  objectClass: organizationalunit
  ou: human-resources
  description: The human resources department
  l: Human Resources
  …

Example: The groups WebDesign, WebDesign/Graphists and Sales could be retrieved in:

l=Paris,dc=sites,dc=mycompany,dc=com
  …
  ou=WebDesign,l=Paris,dc=sites,dc=mycompany,dc=com
  …
    ou=Graphists,ou=WebDesign,l=Paris,dc=sites,dc=mycompany,dc=com
    …
l=London,dc=sites,dc=mycompany,dc=com
  …
  ou=Sales,l=London,dc=sites,dc=mycompany,dc=com
  …

Memberships are used to assign a role within a group. They are entries that are placed under the group entry of their scope group. Users in this role are defined as attributes of the membership entry.

Example: To designate tom as the manager of the group human-resources:

ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org
  …
  cn=manager,ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org
    member: uid=tom,ou=users,ou=portal,dc=exoplatform,dc=org
    …

The parameters to configure memberships are:

<field name="membershipLDAPClasses"><string>top,groupOfNames</string></field>
<field name="membershipTypeMemberValue"><string>member</string></field>                              
<field name="membershipTypeRoleNameAttr"><string>cn</string></field>
<field name="membershipTypeObjectClassFilter"><string>objectClass=organizationalRole</string></field>

When creating a new membership, an entry will be created with the given objectClass attributes. The classes must at least define the attribute designated by membershipTypeMemberValue.

Example: Adding the membership validator would produce:

cn=validator,ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org
  objectclass: top
  objectClass: groupOfNames
  …


The values of the member attribute should be user DNs.

Example: james and root having the admin role within the group human-resources would give:

cn=admin,ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org
  member: cn=james,ou=users,ou=portal,dc=exoplatform,dc=org
  member: cn=root,ou=users,ou=portal,dc=exoplatform,dc=org
  …

Example: In the following membership entry:

cn=manager,ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org

the 'cn' attribute is used to designate the 'manager' membership type. In other words, the name of the role is given by the 'cn' attribute.
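Putting the pieces together, a membership entry's DN is the role name (named by membershipTypeRoleNameAttr) placed under the group's DN. A hypothetical sketch (illustrative only, no DN escaping):

```java
// Hypothetical sketch: a membership entry's DN is the role name, named by
// membershipTypeRoleNameAttr (here "cn"), placed under the group's DN.
public class MembershipDnSketch {

    public static String membershipDn(String roleNameAttr, String role, String groupDn) {
        return roleNameAttr + "=" + role + "," + groupDn;
    }

    public static void main(String[] args) {
        System.out.println(membershipDn("cn", "manager",
                "ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org"));
        // -> cn=manager,ou=human-resources,ou=groups,ou=portal,dc=exoplatform,dc=org
    }
}
```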

You can use rather complex filters.

Example: Here is a filter we used for a customer that needed to trigger a dynlist overlay on OpenLDAP.

(&amp;(objectClass=ExoMembership)(membershipURL=*)) 

Note: Pay attention to the XML escaping of the '&' (and) operator.
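The filter is stored XML-escaped in the configuration file; the value the service actually uses is the unescaped form. A small illustrative sketch of that relationship (class and helper names are hypothetical, not part of eXo):

```java
// Shows the relationship between the XML-escaped filter as written in the
// configuration file and the plain LDAP filter actually used.
public class FilterEscapeSketch {

    public static String xmlUnescape(String s) {
        // Minimal unescaping for the entities relevant to LDAP filters.
        return s.replace("&amp;", "&").replace("&lt;", "<").replace("&gt;", ">");
    }

    public static void main(String[] args) {
        String configured = "(&amp;(objectClass=ExoMembership)(membershipURL=*))";
        System.out.println(xmlUnescape(configured));
        // -> (&(objectClass=ExoMembership)(membershipURL=*))
    }
}
```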

Here is an alternative configuration for Active Directory, which you can find in activedirectory-configuration.xml.

Here is how to use the LDAPS protocol with Active Directory:

1. Set up AD to use SSL:

    * add the Active Directory Certificate Services role
    * install the right certificate for the DC machine

2. Enable the Java VM to use the certificate from AD (note that this step is not AD-specific; it is applicable to any LDAP server when you want to enable the SSL protocol):

    * import the root CA used in AD into the keystore, something like:

      keytool -importcert -file 2008.cer -keypass changeit -keystore /home/user/java/jdk1.6/jre/lib/security/cacerts

    * set the Java options:

      JAVA_OPTS="${JAVA_OPTS} -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.trustStore=/home/user/java/jdk1.6/jre/lib/security/cacerts"
[...]
  <component>
  <key>org.exoplatform.services.ldap.LDAPService</key>
[..]
       <object type="org.exoplatform.services.ldap.impl.LDAPConnectionConfig">         
         <!-- for multiple ldap servers, use a comma-separated list of host:port (Ex. ldap://127.0.0.1:389,10.0.0.1:389) -->
         <!-- to enable ssl, ensure that the javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword properties are set -->
         <!-- the ldap service checks the protocol: if the protocol is ldaps, ssl is enabled (Ex. ssl enabled: ldaps://10.0.0.3:636 ; ssl disabled: ldap://10.0.0.3:389) -->
         <!-- when ssl is enabled, ensure the server name is set accordingly (Ex. active.directory) -->
         <field  name="providerURL"><string>ldaps://10.0.0.3:636</string></field>
         <field  name="rootdn"><string>CN=Administrator,CN=Users, DC=exoplatform,DC=org</string></field>
         <field  name="password"><string>site</string></field>      
         <field  name="version"><string>3</string></field>             
         <field  name="referralMode"><string>ignore</string></field>                      
         <field  name="serverName"><string>active.directory</string></field>                  
       </object>
[..]
  <component>
    <key>org.exoplatform.services.organization.OrganizationService</key>
    [...]
        <object type="org.exoplatform.services.organization.ldap.LDAPAttributeMapping">                
          [...]
          <field  name="userLDAPClasses"><string>top,person,organizationalPerson,user</string></field>
          <field  name="userObjectClassFilter"><string>objectClass=user</string></field>
          <field  name="userAuthenticationAttr"><string>mail</string></field>
          <field  name="userUsernameAttr"><string>sAMAccountName</string></field>
          <field  name="userPassword"><string>unicodePwd</string></field> 
          <field  name="userLastNameAttr"><string>sn</string></field>
          <field  name="userDisplayNameAttr"><string>displayName</string></field>
          <field  name="userMailAttr"><string>mail</string></field>
          [..]
          <field  name="membershipTypeLDAPClasses"><string>top,group</string></field>
          <field  name="membershipTypeObjectClassFilter"><string>objectClass=group</string></field>
          [..]
          <field  name="membershipLDAPClasses"><string>top,group</string></field>
          <field  name="membershipObjectClassFilter"><string>objectClass=group</string></field>
        </object>
        [...]  
</component>  

If you use OpenLDAP, you may want to use overlays. Here is how you can use the dynlist overlay to have memberships populated dynamically.

The main idea is to have your memberships populated dynamically by an LDAP query. Thus, you no longer have to maintain the roles on users manually.

To configure the dynlist, add the following to your slapd.conf:

dynlist-attrset         ExoMembership membershipURL member

This snippet means: on entries that have the ExoMembership class, use the URL defined in the value of the membershipURL attribute as a query, and populate the results under the multi-valued member attribute.

Now let's declare the corresponding schema (replace XXXXX with your own IANA code):

attributeType ( 1.3.6.1.4.1.XXXXX.1.59 NAME 'membershipURL' SUP memberURL )

membershipURL inherits from memberURL.

objectClass ( 1.3.6.1.4.1.XXXXX.2.12  NAME 'ExoMembership' SUP top MUST ( cn ) MAY (membershipURL $ member $ description ) )

ExoMembership must define cn and can have the attributes:

  • membershipURL: the trigger for the dynlist

  • member: the attribute populated by the dynlist

  • description: used by eXo for display

For example, consider the following group entry:

# the TestGroup group
dn: ou=testgroup,ou=groups,ou=portal,o=MyCompany,c=com
objectClass: top
objectClass: organizationalUnit
ou: testgroup
l: TestGroup
description: the Test Group

On this group, we can bind an eXo membership where the overlay will occur:

# the manager membership on group TestGroup
dn: cn=manager, ou=TestGroup,ou=groups,ou=portal,o=MyCompany,c=com
objectClass: top
objectClass: ExoMembership
membershipURL: ldap:///ou=users,ou=portal,o=MyCompany,c=com??sub?(uid=*)
cn: manager

This dynlist assigns the role manager:/testgroup to any user.
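The membershipURL value is a standard RFC 4516 LDAP URL (ldap://host/dn?attributes?scope?filter). A hypothetical sketch of splitting such a URL into its components (minimal, assumes well-formed input and no percent-encoding):

```java
// Hypothetical sketch of splitting an RFC 4516 LDAP URL, such as the
// membershipURL value above, into [dn, attributes, scope, filter].
public class LdapUrlSketch {

    public static String[] parse(String url) {
        String rest = url.substring(url.indexOf("//") + 2);      // host/dn?...
        String afterHost = rest.substring(rest.indexOf('/') + 1); // dn?attrs?scope?filter
        return afterHost.split("\\?", -1);                        // keep empty fields
    }

    public static void main(String[] args) {
        String[] p = parse("ldap:///ou=users,ou=portal,o=MyCompany,c=com??sub?(uid=*)");
        System.out.println(p[0] + " | scope=" + p[2] + " | filter=" + p[3]);
    }
}
```

Here the base DN is ou=users,ou=portal,o=MyCompany,c=com, the attribute list is empty, the scope is sub (whole subtree) and the filter (uid=*) matches every user, which is why the manager role is assigned to all of them.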

To be able to disable or enable a user account, which is supported since JCR 1.16, you will need to perform additional steps depending on your LDAP server.

In the case of AD, there is nothing to do: the feature is enabled automatically, as the ability to enable/disable a user account is natively supported in AD. We rely on the special attribute userAccountControl. In the LDAPAttributeMapping, make sure that the field userLDAPClasses is configured to top,person,organizationalPerson,user or any object class that includes the attribute userAccountControl.

For all other LDAP servers, you will need to define the attribute userAccountControl. To do so, you can either (if allowed by your LDAP server) modify the object class inetOrgPerson to add the attribute userAccountControl, or define a new object class on top of inetOrgPerson that includes it.

Below, in schema format, are the definition of the attribute userAccountControl, which is needed in both cases, and the definition of the object class user, which is needed only in the second case.

# ldap-disabled-accounts.schema -- Disabled Account Schema
#   This schema is an extension required in case we would like
#   to be able to support enabled and disabled account

attributetype ( 1.2.840.113556.1.4.8
 NAME 'userAccountControl'
 DESC 'Flags that control the behavior of the user account'
 EQUALITY integerMatch
 SYNTAX '1.3.6.1.4.1.1466.115.121.1.27'
 SINGLE-VALUE )

objectclass ( 1.2.840.113556.1.5.9
 NAME 'user'
 SUP inetOrgPerson
 STRUCTURAL 
 MAY (userAccountControl) )

If you need an LDIF instead of a schema, it would be something like this:

dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.2.840.113556.1.4.8 
 NAME 'userAccountControl' 
 DESC 'Flags that control the behavior of the user account' 
 EQUALITY integerMatch 
 SYNTAX '1.3.6.1.4.1.1466.115.121.1.27' 
 SINGLE-VALUE )

dn: cn=schema
changetype: modify
add: objectClasses
objectClasses: ( 1.2.840.113556.1.5.9 
 NAME 'user' 
 SUP inetOrgPerson 
 STRUCTURAL 
 MAY (userAccountControl) )

Once you have defined the new attribute userAccountControl in your LDAP server, you will need to configure it in the LDAPAttributeMapping to enable the ability to enable/disable a user, by adding a new field as follows:

<field name="userAccountControlAttr"><string>userAccountControl</string></field>

If you also decided to define the object class user, you will need to modify the value of the field userLDAPClasses as follows:

<field name="userLDAPClasses"><string>top,person,organizationalPerson,inetOrgPerson,user</string></field>

Once these steps are done, you will be able to enable/disable a user account thanks to the method UserHandler.setEnabled. In case the old structure is detected, it will automatically try to migrate to the new user object class.
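In AD, userAccountControl is a bit field in which the ACCOUNTDISABLE bit (0x2) marks a disabled account and NORMAL_ACCOUNT is 0x200. The following sketch illustrates that flag logic only; eXo's internal handling may differ:

```java
// Sketch of the semantics of the AD userAccountControl bit flags.
// Illustrative only; not eXo's actual implementation.
public class UserAccountControlSketch {

    static final int ACCOUNTDISABLE = 0x0002; // account is disabled
    static final int NORMAL_ACCOUNT = 0x0200; // default account type

    public static boolean isEnabled(int userAccountControl) {
        return (userAccountControl & ACCOUNTDISABLE) == 0;
    }

    public static int setEnabled(int userAccountControl, boolean enabled) {
        return enabled ? (userAccountControl & ~ACCOUNTDISABLE)
                       : (userAccountControl | ACCOUNTDISABLE);
    }

    public static void main(String[] args) {
        int uac = NORMAL_ACCOUNT;                              // 0x200: enabled
        System.out.println(isEnabled(uac));                    // true
        System.out.println(isEnabled(setEnabled(uac, false))); // false
    }
}
```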

You may decide to map eXo users on top of JCR. eXo provides an implementation of its OrganizationService on top of JCR.

This is an implementation of the exo.core.component.organization.api API. The information will be stored in the exo:organization root node of the workspace. The workspace name has to be configured in the configuration file (see below). To activate this implementation, replace:

<import>war:/conf/organization/idm-configuration.xml</import>

With

<import>war:/conf/organization/exo/jcr-configuration.xml</import>

eXo starts and automatically creates its organization model under the /exo:organization node.

That's it! Now eXo uses your JCR node as its organization model storage. Users, groups and memberships are now stored and retrieved from there.

Since eXo JCR 1.11, you can add two new params:

<value-param>
  <name>repository</name>
  <description>The name of repository where organization storage will be created</description>
  <value>db1</value>
</value-param>
<value-param>
  <name>storage-path</name>
  <description>The relative path where organization storage will be created</description>
  <value>/exo:organization</value>
</value-param>
      

where repository is the name of the repository in which the organization storage will be created, and storage-path is the relative path to the stored data.

Register the JCR Organization Service namespaces and node types via the RepositoryService plugins:

<component>
<key>org.exoplatform.services.jcr.RepositoryService</key>
<type>org.exoplatform.services.jcr.impl.RepositoryServiceImpl</type>
<component-plugins>
  <component-plugin>
    <name>add.namespaces</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.jcr.impl.AddNamespacesPlugin</type>
    <init-params>
      <properties-param>
        <name>namespaces</name>
        <property name="jos" value="http://www.exoplatform.com/jcr-services/organization-service/1.0/"/>
      </properties-param>
    </init-params>
  </component-plugin>
  <component-plugin>
    <name>add.nodeType</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.jcr.impl.AddNodeTypePlugin</type>
    <init-params>
      <values-param>
        <name>autoCreatedInNewRepository</name>
        <description>Node types configuration file</description>
        <value>jar:/conf/organization-nodetypes.xml</value>
      </values-param>
    </init-params>
  </component-plugin>
</component-plugins>
</component>
        

The process of launching the Organization Service TCK tests against your Organization Service is quite easy. For instance, you may add the TCK tests to your maven project and launch them during the unit testing phase. To do that, you need to complete the following steps:

The Organization Service TCK tests are available as a separate maven artifact, so the first thing you need to do is to add this artifact as a dependency to your pom.xml file:

      <dependency>
        <groupId>org.exoplatform.core</groupId>
        <artifactId>exo.core.component.organization.tests</artifactId>
        <version>2.4.3-GA</version>
        <classifier>sources</classifier>
        <scope>test</scope>
      </dependency>

You will also need to unpack the tests, as they are archived within the jar file. For this purpose, you may use the maven-dependency-plugin:

     <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-dependency-plugin</artifactId>
         <executions>
            <execution>
               <id>unpack</id>
               <phase>generate-test-sources</phase>
               <goals>
                  <goal>unpack</goal>
               </goals>
               <configuration>
                  <artifactItems>
                     <artifactItem>
                        <groupId>org.exoplatform.core</groupId>
                        <artifactId>exo.core.component.organization.tests</artifactId>
                        <classifier>sources</classifier>
                        <type>jar</type>
                        <overWrite>false</overWrite>
                     </artifactItem>
                  </artifactItems>
                  <outputDirectory>${project.build.directory}/org-service-tck-tests</outputDirectory>
               </configuration>
            </execution>
         </executions>
      </plugin>

Note

Remember the value of the outputDirectory parameter, as you will need it later.

After you have unpacked the tests, you need to add the test sources and resources; use the build-helper-maven-plugin:

      <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>build-helper-maven-plugin</artifactId>
         <version>1.3</version>
         <executions>
            <execution>
               <id>add-test-resource</id>
               <phase>generate-test-sources</phase>
               <goals>
                  <goal>add-test-resource</goal>
               </goals>
               <configuration>
                  <resources>
                     <resource>
                        <directory>${project.build.directory}/org-service-tck-tests</directory>
                     </resource>
                  </resources>
               </configuration>
            </execution> 
            <execution>
               <id>add-test-source</id>
               <phase>generate-test-sources</phase>
               <goals>
                  <goal>add-test-source</goal>
               </goals>
               <configuration>
                  <sources>
                     <source>${project.build.directory}/org-service-tck-tests</source>
                  </sources>
               </configuration>
            </execution>
         </executions>
      </plugin> 

Note

The directory and source parameters should point to the location you specified in the outputDirectory parameter just above.

You also need to include all the TCK tests using the maven-surefire-plugin:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          ...
          <includes>
              <include>org/exoplatform/services/tck/organization/Test*.java</include>
          </includes>                   
          ...
        </configuration>
      </plugin>

As a result, the TCK tests should be launched during your next mvn clean install. An example of a configured pom.xml file can be found on the Git server.

TCK tests use the standalone container, so to launch them properly you will also need to add the Organization Service as a standalone component. For that purpose, use the configuration file located by default at 'src/test/java/conf/standalone/test-configuration.xml'; its location can be changed by the system property orgservice.test.configuration.file. Add your Organization Service configuration with all needed components there.

In addition, you need to populate your Organization Service with organization data (the TCK tests are designed to use this data):

      <external-component-plugins>
        <target-component>org.exoplatform.services.organization.OrganizationService</target-component>
        <component-plugin>
          <name>init.service.listener</name>
          <set-method>addListenerPlugin</set-method>
          <type>org.exoplatform.services.organization.OrganizationDatabaseInitializer</type>
          <description>this listener populate organization data for the first launch</description>
          <init-params>      
            <value-param>
              <name>checkDatabaseAlgorithm</name>
              <description>check database</description>
              <value>entry</value>
            </value-param>      
            <value-param>
              <name>printInformation</name>
              <description>Print information init database</description>
              <value>false</value>
            </value-param> 
            <object-param>
              <name>configuration</name>
              <description>description</description>
              <object type="org.exoplatform.services.organization.OrganizationConfig">
                <field  name="membershipType">
                  <collection type="java.util.ArrayList">
                  	<value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$MembershipType">
                        <field  name="type"><string>manager</string></field>
                        <field  name="description"><string>manager membership type</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$MembershipType">
                        <field  name="type"><string>member</string></field>
                        <field  name="description"><string>member membership type</string></field>
                      </object>
                    </value>                
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$MembershipType">
                        <field  name="type"><string>validator</string></field>
                        <field  name="description"><string>validator membership type</string></field>
                      </object>
                    </value>
                  </collection>
                </field>

                <field  name="group">
                  <collection type="java.util.ArrayList">             
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>platform</string></field>
                        <field  name="parentId"><string></string></field>
                        <field  name="description"><string>the /platform group</string></field>
                        <field  name="label"><string>Platform</string></field>                    
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>administrators</string></field>
                        <field  name="parentId"><string>/platform</string></field>
                        <field  name="description"><string>the /platform/administrators group</string></field>
                        <field  name="label"><string>Administrators</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>users</string></field>
                        <field  name="parentId"><string>/platform</string></field>
                        <field  name="description"><string>the /platform/users group</string></field>
                        <field  name="label"><string>Users</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>guests</string></field>
                        <field  name="parentId"><string>/platform</string></field>
                        <field  name="description"><string>the /platform/guests group</string></field>
                        <field  name="label"><string>Guests</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>organization</string></field>
                        <field  name="parentId"><string></string></field>
                        <field  name="description"><string>the organization group</string></field>
                        <field  name="label"><string>Organization</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>management</string></field>
                        <field  name="parentId"><string>/organization</string></field>
                        <field  name="description"><string>the /organization/management group</string></field>
                        <field  name="label"><string>Management</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>executive-board</string></field>
                        <field  name="parentId"><string>/organization/management</string></field>
                        <field  name="description"><string>the /organization/management/executive-board group</string></field>
                        <field  name="label"><string>Executive Board</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>human-resources</string></field>
                        <field  name="parentId"><string>/organization/management</string></field>
                        <field  name="description"><string>the /organization/management/human-resource group</string></field>
                        <field  name="label"><string>Human Resources</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>communication</string></field>
                        <field  name="parentId"><string>/organization</string></field>
                        <field  name="description"><string>the /organization/communication group</string></field>
                        <field  name="label"><string>Communication</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>marketing</string></field>
                        <field  name="parentId"><string>/organization/communication</string></field>
                        <field  name="description"><string>the /organization/communication/marketing group</string></field>
                        <field  name="label"><string>Marketing</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>press-and-media</string></field>
                        <field  name="parentId"><string>/organization/communication</string></field>
                        <field  name="description"><string>the /organization/communication/press-and-media group</string></field>
                        <field  name="label"><string>Press and Media</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>operations</string></field>
                        <field  name="parentId"><string>/organization</string></field>
                        <field  name="description"><string>the /organization/operations and media group</string></field>
                        <field  name="label"><string>Operations</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>sales</string></field>
                        <field  name="parentId"><string>/organization/operations</string></field>
                        <field  name="description"><string>the /organization/operations/sales group</string></field>
                        <field  name="label"><string>Sales</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>finances</string></field>
                        <field  name="parentId"><string>/organization/operations</string></field>
                        <field  name="description"><string>the /organization/operations/finances group</string></field>
                        <field  name="label"><string>Finances</string></field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>customers</string></field>
                        <field  name="parentId"><string></string></field>
                        <field  name="description"><string>the /customers group</string></field>
                        <field  name="label"><string>Customers</string></field>
                      </object>
                    </value>                
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$Group">
                        <field  name="name"><string>partners</string></field>
                        <field  name="parentId"><string></string></field>
                        <field  name="description"><string>the /partners group</string></field>
                        <field  name="label"><string>Partners</string></field>
                      </object>
                    </value>                
                  </collection>
                </field>

                <field  name="user">
                  <collection type="java.util.ArrayList">
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$User">
                        <field  name="userName"><string>root</string></field>
                        <field  name="password"><string>exo</string></field>
                        <field  name="firstName"><string>Root</string></field>
                        <field  name="lastName"><string>Root</string></field>
                        <field  name="email"><string>root@localhost</string></field>
                        <field  name="groups">
                          <string>
                          	manager:/platform/administrators,member:/platform/users,
                          	member:/organization/management/executive-board
                          </string>
                        </field>
                      </object>
                    </value>
                    
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$User">
                        <field  name="userName"><string>john</string></field>
                        <field  name="password"><string>exo</string></field>
                        <field  name="firstName"><string>John</string></field>
                        <field  name="lastName"><string>Anthony</string></field>
                        <field  name="email"><string>john@localhost</string></field>
                        <field  name="groups">
                          <string>
                          	member:/platform/administrators,member:/platform/users,
                          	manager:/organization/management/executive-board
                          </string>
                        </field>
                      </object>
                    </value>                                                        
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$User">
                        <field  name="userName"><string>marry</string></field>
                        <field  name="password"><string>exo</string></field>
                        <field  name="firstName"><string>Marry</string></field>
                        <field  name="lastName"><string>Kelly</string></field>
                        <field  name="email"><string>marry@localhost</string></field>
                        <field  name="groups">
                          <string>member:/platform/users</string>
                        </field>
                      </object>
                    </value>
                    <value>
                      <object type="org.exoplatform.services.organization.OrganizationConfig$User">
                        <field  name="userName"><string>demo</string></field>
                        <field  name="password"><string>exo</string></field>
                        <field  name="firstName"><string>Demo</string></field>
                        <field  name="lastName"><string>exo</string></field>
                        <field  name="email"><string>demo@localhost</string></field>
                        <field  name="groups">
                          <string>member:/platform/guests,member:/platform/users</string>
                        </field>
                      </object>
                    </value>                       
                  </collection>
                </field>
              </object>
            </object-param>
          </init-params>
        </component-plugin>
      </external-component-plugins>

      <external-component-plugins>
        <target-component>org.exoplatform.services.organization.OrganizationService</target-component>
         <component-plugin>
            <name>tester.membership.type.listener</name>
            <set-method>addListenerPlugin</set-method>
            <type>org.exoplatform.services.organization.MembershipTypeEventListener</type>
            <description>Membership type listener for testing purposes</description>
         </component-plugin>
      </external-component-plugins>

Ultimately, you will have a configuration file that defines the standalone container and contains the Organization Service configuration and initialization data. A prepared test-configuration.xml file can be found in Git.

DocumentReaderService provides an API to retrieve a DocumentReader by MIME type.

DocumentReader lets the user fetch the content of a document as a String or, in the case of TikaDocumentReader, as a Reader.
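The service essentially maintains a registry that maps MIME types to reader implementations; each addDocumentReader plugin in the configuration below registers one reader. The following is a minimal, stdlib-only sketch of that dispatch pattern; the class and interface names here are hypothetical and the real eXo interfaces are richer:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Hypothetical minimal reader contract, standing in for the eXo DocumentReader.
interface SimpleDocumentReader {
    String getContentAsText(InputStream is);
}

/**
 * Sketch of the dispatch pattern behind DocumentReaderService: readers are
 * registered under MIME types, and lookups are resolved by MIME type.
 */
public class DocumentReaderRegistry {

    private final Map<String, SimpleDocumentReader> readers = new HashMap<>();

    public void addDocumentReader(String mimeType, SimpleDocumentReader reader) {
        // MIME types are matched case-insensitively.
        readers.put(mimeType.toLowerCase(Locale.ROOT), reader);
    }

    public SimpleDocumentReader getDocumentReader(String mimeType) {
        SimpleDocumentReader reader = readers.get(mimeType.toLowerCase(Locale.ROOT));
        if (reader == null) {
            throw new IllegalArgumentException("No reader registered for MIME type: " + mimeType);
        }
        return reader;
    }

    public static void main(String[] args) {
        DocumentReaderRegistry registry = new DocumentReaderRegistry();
        // A trivial "text/plain reader": decode the stream as UTF-8.
        registry.addDocumentReader("text/plain", is -> {
            try {
                return new String(is.readAllBytes(), StandardCharsets.UTF_8);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        InputStream in = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(registry.getDocumentReader("text/plain").getContentAsText(in));
    }
}
```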

The TikaDocumentReaderServiceImpl configuration looks like this:

<component>
      <key>org.exoplatform.services.document.DocumentReaderService</key>
      <type>org.exoplatform.services.document.impl.tika.TikaDocumentReaderServiceImpl</type>

      <!-- Old-style document readers -->
      <component-plugins>
         <component-plugin>
            <name>pdf.document.reader</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.PDFDocumentReader</type>
            <description>to read the pdf inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerMSWord</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.MSWordDocumentReader</type>
            <description>to read the ms word inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerMSXWord</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.MSXWordDocumentReader</type>
            <description>to read the ms word inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerMSExcel</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.MSExcelDocumentReader</type>
            <description>to read the ms excel inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerMSXExcel</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.MSXExcelDocumentReader</type>
            <description>to read the ms excel inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerMSOutlook</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.MSOutlookDocumentReader</type>
            <description>to read the ms outlook inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>PPTdocument.reader</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.PPTDocumentReader</type>
            <description>to read the ms ppt inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>MSXPPTdocument.reader</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.MSXPPTDocumentReader</type>
            <description>to read the ms pptx inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerHTML</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.HTMLDocumentReader</type>
            <description>to read the html inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>document.readerXML</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.XMLDocumentReader</type>
            <description>to read the xml inputstream</description>
         </component-plugin>

         <component-plugin>
            <name>TPdocument.reader</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.TextPlainDocumentReader</type>
            <description>to read the plain text inputstream</description>
            <init-params>
               <!--
                  values-param> <name>defaultEncoding</name> <description>description</description> <value>UTF-8</value>
                  </values-param
               -->
            </init-params>
         </component-plugin>

         <component-plugin>
            <name>document.readerOO</name>
            <set-method>addDocumentReader</set-method>
            <type>org.exoplatform.services.document.impl.OpenOfficeDocumentReader</type>
            <description>to read the OO inputstream</description>
         </component-plugin>

      </component-plugins>

      <init-params>
        <value-param>
          <name>tika-configuration</name>
          <value>jar:/conf/portal/tika-config.xml</value>
        </value-param>
      </init-params>

   </component>
</configuration>

tika-config.xml example:

<properties>

  <mimeTypeRepository magic="false"/>
  <parsers>

    <parser name="parse-dcxml" class="org.apache.tika.parser.xml.DcXMLParser">
      <mime>application/xml</mime>
      <mime>image/svg+xml</mime>
      <mime>text/xml</mime>
      <mime>application/x-google-gadget</mime>
    </parser>

    <parser name="parse-office" class="org.apache.tika.parser.microsoft.OfficeParser">
      <mime>application/excel</mime>
      <mime>application/xls</mime>
      <mime>application/msworddoc</mime>
      <mime>application/msworddot</mime>
      <mime>application/powerpoint</mime>
      <mime>application/ppt</mime>

      <mime>application/x-tika-msoffice</mime>
      <mime>application/msword</mime>
      <mime>application/vnd.ms-excel</mime>
      <mime>application/vnd.ms-excel.sheet.binary.macroenabled.12</mime>
      <mime>application/vnd.ms-powerpoint</mime>
      <mime>application/vnd.visio</mime>
      <mime>application/vnd.ms-outlook</mime>
    </parser>

    <parser name="parse-ooxml" class="org.apache.tika.parser.microsoft.ooxml.OOXMLParser">
      <mime>application/x-tika-ooxml</mime>
      <mime>application/vnd.openxmlformats-package.core-properties+xml</mime>
      <mime>application/vnd.openxmlformats-officedocument.spreadsheetml.sheet</mime>
      <mime>application/vnd.openxmlformats-officedocument.spreadsheetml.template</mime>
      <mime>application/vnd.ms-excel.sheet.macroenabled.12</mime>
      <mime>application/vnd.ms-excel.template.macroenabled.12</mime>
      <mime>application/vnd.ms-excel.addin.macroenabled.12</mime>
      <mime>application/vnd.openxmlformats-officedocument.presentationml.presentation</mime>
      <mime>application/vnd.openxmlformats-officedocument.presentationml.template</mime>
      <mime>application/vnd.openxmlformats-officedocument.presentationml.slideshow</mime>
      <mime>application/vnd.ms-powerpoint.presentation.macroenabled.12</mime>
      <mime>application/vnd.ms-powerpoint.slideshow.macroenabled.12</mime>
      <mime>application/vnd.ms-powerpoint.addin.macroenabled.12</mime>
      <mime>application/vnd.openxmlformats-officedocument.wordprocessingml.document</mime>
      <mime>application/vnd.openxmlformats-officedocument.wordprocessingml.template</mime>
      <mime>application/vnd.ms-word.document.macroenabled.12</mime>
      <mime>application/vnd.ms-word.template.macroenabled.12</mime>
    </parser>

    <parser name="parse-html" class="org.apache.tika.parser.html.HtmlParser">
      <mime>text/html</mime>
    </parser>

    <parser mame="parse-rtf" class="org.apache.tika.parser.rtf.RTFParser">
      <mime>application/rtf</mime>
    </parser>

    <parser name="parse-pdf" class="org.apache.tika.parser.pdf.PDFParser">
      <mime>application/pdf</mime>
    </parser>

    <parser name="parse-txt" class="org.apache.tika.parser.txt.TXTParser">
      <mime>text/plain</mime>
      <mime>script/groovy</mime>
      <mime>application/x-groovy</mime>
      <mime>application/x-javascript</mime>
      <mime>application/javascript</mime>
      <mime>text/javascript</mime>
    </parser>

    <parser name="parse-openoffice" class="org.apache.tika.parser.opendocument.OpenOfficeParser">

      <mime>application/vnd.oasis.opendocument.database</mime>

      <mime>application/vnd.sun.xml.writer</mime>
      <mime>application/vnd.oasis.opendocument.text</mime>
      <mime>application/vnd.oasis.opendocument.graphics</mime>
      <mime>application/vnd.oasis.opendocument.presentation</mime>
      <mime>application/vnd.oasis.opendocument.spreadsheet</mime>
      <mime>application/vnd.oasis.opendocument.chart</mime>
      <mime>application/vnd.oasis.opendocument.image</mime>
      <mime>application/vnd.oasis.opendocument.formula</mime>
      <mime>application/vnd.oasis.opendocument.text-master</mime>
      <mime>application/vnd.oasis.opendocument.text-web</mime>
      <mime>application/vnd.oasis.opendocument.text-template</mime>
      <mime>application/vnd.oasis.opendocument.graphics-template</mime>
      <mime>application/vnd.oasis.opendocument.presentation-template</mime>
      <mime>application/vnd.oasis.opendocument.spreadsheet-template</mime>
      <mime>application/vnd.oasis.opendocument.chart-template</mime>
      <mime>application/vnd.oasis.opendocument.image-template</mime>
      <mime>application/vnd.oasis.opendocument.formula-template</mime>
      <mime>application/x-vnd.oasis.opendocument.text</mime>
      <mime>application/x-vnd.oasis.opendocument.graphics</mime>
      <mime>application/x-vnd.oasis.opendocument.presentation</mime>
      <mime>application/x-vnd.oasis.opendocument.spreadsheet</mime>
      <mime>application/x-vnd.oasis.opendocument.chart</mime>
      <mime>application/x-vnd.oasis.opendocument.image</mime>
      <mime>application/x-vnd.oasis.opendocument.formula</mime>
      <mime>application/x-vnd.oasis.opendocument.text-master</mime>
      <mime>application/x-vnd.oasis.opendocument.text-web</mime>
      <mime>application/x-vnd.oasis.opendocument.text-template</mime>
      <mime>application/x-vnd.oasis.opendocument.graphics-template</mime>
      <mime>application/x-vnd.oasis.opendocument.presentation-template</mime>
      <mime>application/x-vnd.oasis.opendocument.spreadsheet-template</mime>
      <mime>application/x-vnd.oasis.opendocument.chart-template</mime>
      <mime>application/x-vnd.oasis.opendocument.image-template</mime>
      <mime>application/x-vnd.oasis.opendocument.formula-template</mime>
    </parser>

    <parser name="parse-image" class="org.apache.tika.parser.image.ImageParser">
      <mime>image/bmp</mime>
      <mime>image/gif</mime>
      <mime>image/jpeg</mime>
      <mime>image/png</mime>
      <mime>image/tiff</mime>
      <mime>image/vnd.wap.wbmp</mime>
      <mime>image/x-icon</mime>
      <mime>image/x-psd</mime>
      <mime>image/x-xcf</mime>
    </parser>

    <parser name="parse-class" class="org.apache.tika.parser.asm.ClassParser">
      <mime>application/x-tika-java-class</mime>
    </parser>

    <parser name="parse-mp3" class="org.apache.tika.parser.mp3.Mp3Parser">
      <mime>audio/mpeg</mime>
    </parser>

    <parser name="parse-midi" class="org.apache.tika.parser.audio.MidiParser">
      <mime>application/x-midi</mime>
      <mime>audio/midi</mime>
    </parser>

    <parser name="parse-audio" class="org.apache.tika.parser.audio.AudioParser">
      <mime>audio/basic</mime>
      <mime>audio/x-wav</mime>
      <mime>audio/x-aiff</mime>
    </parser>

  </parsers>

</properties>

Digest access authentication is one of the agreed-upon methods a web server can use to negotiate credentials with a web user's browser. It sends a cryptographic hash of the password over the network, which is safer than Basic access authentication, which sends the password in plaintext.

Technically, digest authentication is an application of MD5 cryptographic hashing, with nonce values used to discourage cryptanalysis and prevent replay attacks. It operates over the HTTP protocol.
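As a concrete illustration of the hashing scheme (not eXo-specific code), the qop="auth" response value defined by RFC 2617 can be computed with the JDK alone. The sample values below are taken from the worked example in RFC 2617, section 3.5:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Computes an HTTP Digest (RFC 2617, qop="auth") response value. */
public class DigestExample {

    static String md5Hex(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available in the JDK
        }
    }

    static String digestResponse(String user, String realm, String password,
                                 String method, String uri,
                                 String nonce, String nc, String cnonce) {
        String ha1 = md5Hex(user + ":" + realm + ":" + password); // credentials part
        String ha2 = md5Hex(method + ":" + uri);                  // request part
        // response = MD5(HA1 : nonce : nc : cnonce : qop : HA2)
        return md5Hex(ha1 + ":" + nonce + ":" + nc + ":" + cnonce + ":auth:" + ha2);
    }

    public static void main(String[] args) {
        // Values from the example in RFC 2617, section 3.5
        System.out.println(digestResponse("Mufasa", "testrealm@host.com", "Circle Of Life",
                "GET", "/dir/index.html",
                "dcd98b7102dd2f0e8b11d0f600bfb0c6093", "00000001", "0a4f113b"));
        // prints 6629fae49393a05397450978507c4ef1
    }
}
```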

To configure your server to use DIGEST authentication, you need to edit the server-side JAAS module configuration file.

The Web Services module allows eXo technology to integrate with external products and services.

It is an implementation of an API for RESTful Web Services with extensions, a Servlet and cross-domain AJAX web framework, and a JavaBean-to-JSON transformer.

Representational State Transfer (REST) is a style of software architecture for distributed hypermedia systems such as the World Wide Web. The term was introduced in 2000 in the doctoral dissertation of Roy Fielding, one of the principal authors of the Hypertext Transfer Protocol (HTTP) specification, and has come into widespread use in the networking community.

REST strictly refers to a collection of network architecture principles that outline how resources are defined and addressed. The term is often used in a looser sense to describe any simple interface that transmits domain-specific data over HTTP without an additional messaging layer such as SOAP or session tracking via HTTP cookies.

The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. "today's weather in Los Angeles"), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.

REST uses a resource identifier to identify the particular resource involved in an interaction between components. REST connectors provide a generic interface for accessing and manipulating the value set of a resource, regardless of how the membership function is defined or the type of software that is handling the request. URLs and URNs are examples of resource identifiers.

REST components perform actions with a resource by using a representation to capture the current or intended state of that resource and transferring that representation between components. A representation is a sequence of bytes, plus representation metadata to describe those bytes. Other commonly used but less precise names for a representation include: document, file, and HTTP message entity, instance, or variant. A representation consists of data, metadata describing the data, and, on occasion, metadata to describe the metadata (usually for the purpose of verifying message integrity). Metadata are in the form of name-value pairs, where the name corresponds to a standard that defines the value's structure and semantics. The data format of a representation is known as a media type.


REST uses various connector types to encapsulate the activities of accessing resources and transferring resource representations. The connectors present an abstract interface for component communication, enhancing simplicity by providing a complete separation of concepts and hiding the underlying implementation of resources and communication mechanisms.


The primary connector types are client and server. The essential difference between the two is that a client initiates communication by making a request, whereas a server listens for connections and responds to requests in order to supply access to its services. A component may include both client and server connectors.

An important part of RESTful architecture is a well-defined interface for communication, in particular a set of HTTP methods such as POST, GET, PUT and DELETE. These methods are often compared with the CREATE, READ, UPDATE and DELETE (CRUD) operations associated with database technologies. The following analogy can be made:

  • PUT is analogous to CREATE or PASTE OVER,

  • GET to READ or COPY,

  • POST to UPDATE or PASTE AFTER, and

  • DELETE to DELETE or CUT.

Note

RESTful architecture is not limited to those methods; a good example of an extension is the WebDAV protocol.

The CRUD (Create, Read, Update and Delete) verbs are designed to operate with atomic data within the context of a database transaction. REST is designed around the atomic transfer of a more complex state and can be viewed as a mechanism for transferring structured information from one application to another.

HTTP separates the notions of a web server and a web browser. This allows the implementation of each to vary from the other based on the client/server principle. When used RESTfully, HTTP is stateless. Each message contains all the information necessary to understand the request.

As a result, neither the client nor the server needs to remember any communication-state between messages. Any state retained by the server must be modeled as a resource.
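Statelessness means each request is self-describing: the full resource identifier, the method, and the desired representation all travel with the message, and no server-side session is assumed. A small illustration using the JDK HTTP client types, targeting the RestServicesList URL used later in this chapter:

```java
import java.net.URI;
import java.net.http.HttpRequest;

/**
 * Builds a self-contained, stateless request: the complete resource URI, the
 * HTTP method, and the requested representation are all carried in the
 * message itself, so the server needs no per-client session state.
 */
public class StatelessRequestExample {

    public static HttpRequest buildListRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/rest/")) // complete resource identifier
                .header("Accept", "text/html")                  // requested representation format
                .GET()                                          // safe, idempotent read
                .build();
    }
}
```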

This section shows you how to override the default providers in the eXo JAX-RS implementation.

The RestServicesList service provides information about the REST services deployed to the application server.

The list can be provided in two formats: HTML and JSON.

To get the list of services in HTML format, use the listHTML() method:

@GET
@Produces({MediaType.TEXT_HTML})
public byte[] listHTML()
{
   ...
}  

To do this, perform a simple GET request to the RestServicesList link.

For example, curl -u root:exo http://localhost:8080/rest/ will return HTML code such as:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" >
<html>
   <head>
      <title>eXo JAXRS Implementation</title>
   </head>
   <body>
      <h3 style="text-align:center;">Root resources</h3>
      <table   width="90%"   style="table-layout:fixed;">
         <tr>
            <th>Path</th>
            <th>Regex</th>
            <th>FQN</th>
         </tr>
         <tr>
            <td>script/groovy</td>
            <td>/script/groovy(/.*)?</td>
            <td>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader</td>
         </tr>
         <tr>
            <td>/lnkproducer/</td>
            <td>/lnkproducer(/.*)?</td>
            <td>org.exoplatform.services.jcr.webdav.lnkproducer.LnkProducer</td>
         </tr>
         <tr>
            <td>/registry/</td>
            <td>/registry(/.*)?</td>
            <td>org.exoplatform.services.jcr.ext.registry.RESTRegistryService</td>
         </tr>
         <tr>
            <td>/jcr</td>
            <td>/jcr(/.*)?</td>
            <td>org.exoplatform.services.jcr.webdav.WebDavServiceImpl</td>
         </tr>
         <tr>
            <td>/</td>
            <td>(/.*)?</td>
            <td>org.exoplatform.services.rest.ext.service.RestServicesList</td>
         </tr>
      </table>
   </body>
</html>    

If you perform the same request in your browser, you will see a table listing the deployed services.


This section describes how to use Groovy scripts as REST services and covers the following operations.

In this section, we consider a RESTful service to be one compatible with the JSR-311 specification. The last feature is currently available in version 1.11-SNAPSHOT.

There are two ways to save a script in JCR: save it at server startup time by using configuration.xml, or upload it via HTTP.

Load script at startup time

This way can be used to load prepared scripts. To use it, we must configure org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoaderPlugin. Here is a simple configuration example:

<external-component-plugins>
  <target-component>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoader</target-component>
  <component-plugin>
    <name>test</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.jcr.ext.script.groovy.GroovyScript2RestLoaderPlugin</type>
    <init-params>
      <value-param>
        <name>repository</name>
        <value>repository</value>
      </value-param>
      <value-param>
        <name>workspace</name>
        <value>production</value>
      </value-param>
      <value-param>
        <name>node</name>
        <value>/script/groovy</value>
      </value-param>
      <properties-param>
        <name>JcrGroovyTest.groovy</name>
        <property name="autoload" value="true" />
        <property name="path" value="file:/home/andrew/JcrGroovyTest.groovy" />
      </properties-param>
    </init-params>
  </component-plugin>
</external-component-plugins>

The first value-param sets the JCR repository, the second sets the workspace, and the third sets the JCR node where scripts from the plugin will be stored. If the specified node does not exist, it will be created. The list of scripts is set by properties-param elements. The name of each properties-param is used as the node name for the stored script, the autoload property indicates that the script should be deployed at startup time, and the path property sets the source of the script to be loaded. In this example, we load a single script from the local file /home/andrew/JcrGroovyTest.groovy.

Load script via HTTP

Here are sample HTTP requests. In this example, we upload a script from a file named test.groovy.

andrew@ossl:~> curl -u root:exo \
-X POST \
-H 'Content-type:script/groovy' \
--data-binary @test.groovy \
http://localhost:8080/rest/script/groovy/add/repository/production/script/groovy/test.groovy

This example imitates sending data with an HTML form ('multipart/form-data'). The autoload parameter is optional. If autoload=true, the script will be instantiated and deployed immediately.

andrew@ossl:~> curl -u root:exo \
-X POST \
-F "file=@test.groovy;name=test" \
-F "autoload=true" \
http://localhost:8080/rest/script/groovy/add/repository/production/script/groovy/test1.groovy

If GroovyScript2RestLoader is configured as described in the previous section, then all "autoload" scripts are deployed. In the first section, we added a script from the file /home/andrew/JcrGroovyTest.groovy to the JCR node /script/groovy/JcrGroovyTest.groovy in the repository named repository, workspace production. As shown in the section "Load script via HTTP", scripts can also be loaded via HTTP, and there is an opportunity to manage the life cycle of a script.

Undeploy a script which is already deployed:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/load/repository/production/script/groovy/JcrGroovyTest.groovy?state=false

Then deploy it again:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/load/repository/production/script/groovy/JcrGroovyTest.groovy?state=true

or even simpler:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/load/repository/production/script/groovy/JcrGroovyTest.groovy

Disable script autoloading; NOTE that this does not change the current state:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/autoload/repository/production/script/groovy/JcrGroovyTest.groovy?state=false

Enable it again:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/autoload/repository/production/script/groovy/JcrGroovyTest.groovy?state=true

and again the simpler variant:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/autoload/repository/production/script/groovy/JcrGroovyTest.groovy

Change script source code:

andrew@ossl:~> curl -u root:exo \
-X POST \
-H 'Content-type:script/groovy' \
--data-binary @JcrGroovyTest.groovy \
http://localhost:8080/rest/script/groovy/update/repository/production/script/groovy/JcrGroovyTest.groovy

This example imitates sending data with HTML form ('multipart/form-data').

andrew@ossl:~> curl -u root:exo \
-X POST \
-F "file=@JcrGroovyTest.groovy;name=test" \
http://localhost:8080/rest/script/groovy/update/repository/production/script/groovy/JcrGroovyTest.groovy

Remove script from JCR:

andrew@ossl:~> curl -u root:exo \
-X GET \
http://localhost:8080/rest/script/groovy/delete/repository/production/script/groovy/JcrGroovyTest.groovy

Now we are going to try a simple example of a Groovy RESTful service. There is one limitation: even though we use Groovy, we should write Java-style code and avoid dynamic types in the public API (of course, we can still use them in private methods and fields). Create the file JcrGroovyTest.groovy; in this example, it is saved in the home directory as /home/andrew/JcrGroovyTest.groovy. Then, configure GroovyScript2RestLoaderPlugin as described in the section "Load script at startup time".

import javax.jcr.Node
import javax.jcr.Session
import javax.ws.rs.GET
import javax.ws.rs.Path
import javax.ws.rs.PathParam
import org.exoplatform.services.jcr.RepositoryService
import org.exoplatform.services.jcr.ext.app.ThreadLocalSessionProviderService

@Path("groovy/test/{repository}/{workspace}")
public class JcrGroovyTest {
  private RepositoryService                 repositoryService
  private ThreadLocalSessionProviderService sessionProviderService
  
  public JcrGroovyTest(RepositoryService repositoryService,
                       ThreadLocalSessionProviderService sessionProviderService) {
    this.repositoryService = repositoryService
    this.sessionProviderService = sessionProviderService
  }
  

  @GET
  @Path("{path:.*}")
  public String nodeUUID(@PathParam("repository") String repository,
                         @PathParam("workspace") String workspace,
                         @PathParam("path") String path) {
    Session ses = null
    try {
      ses = sessionProviderService.getSessionProvider(null).getSession(workspace, repositoryService.getRepository(repository))
      Node node = (Node) ses.getItem("/" + path)
      return node.getUUID() + "\n"
    } finally {
      if (ses != null)
        ses.logout()
    }
  }
}

After the configuration is done, start the server. If the configuration is correct and the script contains no syntax errors, you should see something like the following:

In the screenshot, we can see that the service has been deployed.

First, create a folder named 'test' via WebDAV in the repository production. Now we can try to access this service. Open another console and type the command:

andrew@ossl:~> curl -u root:exo \
http://localhost:8080/rest/groovy/test/repository/production/test

When you try to execute this command, you should get an exception, because the JCR node '/test' is not referenceable and has no UUID. We can try to add the mixin mix:referenceable. To do this, add one more method to the script. Open the script from the local source /home/andrew/JcrGroovyTest.groovy, add the following code and save the file (note that the new method also requires importing javax.ws.rs.POST).

@POST
@Path("{path:.*}")
public void addReferenceableMixin(@PathParam("repository") String repository,
                                  @PathParam("workspace") String workspace,
                                  @PathParam("path") String path) {
  Session ses = null
  try {
    ses = sessionProviderService.getSessionProvider(null).getSession(workspace, repositoryService.getRepository(repository))
    Node node = (Node) ses.getItem("/" + path)
    node.addMixin("mix:referenceable")
    ses.save()
  } finally {
    if (ses != null)
      ses.logout()
  }
}

Now we can try to change the script deployed on the server without restarting the server. Type the following command in the console:

andrew@ossl:~> curl -i -v -u root:exo \
-X POST \
--data-binary @JcrGroovyTest.groovy \
-H 'Content-type:script/groovy' \
http://localhost:8080/rest/script/groovy/update/repository/production/script/groovy/JcrGroovyTest.groovy

The node '/script/groovy/JcrGroovyTest.groovy' has the property exo:autoload=true, so the script will be redeployed automatically when its source code changes.

The script has been redeployed; now try to access the newly created method.

andrew@ossl:~> curl -u root:exo \
-X POST \
http://localhost:8080/rest/groovy/test/repository/production/test

The method execution should be quiet, without output, traces, etc. Then we can try to get the node UUID again.

andrew@ossl:~> curl -u root:exo \
http://localhost:8080/rest/groovy/test/repository/production/test
1b8c88d37f0000020084433d3af4941f


We don't need this script any more, so let's remove it from JCR.

andrew@ossl:~> curl -u root:exo \
http://localhost:8080/rest/script/groovy/delete/repository/production/script/groovy/JcrGroovyTest.groovy

eXo Webservice provides a framework for cross-domain AJAX. This section shows you how to use this framework.

You can checkout the source code at https://github.com/exoplatform/ws/tree/stable/2.4.x/exo.ws.frameworks.javascript.cross-domain-ajax.

eXo Webservice supports JSON-P out of the box, which is another way to work around the same-origin policy imposed by browsers. For more details, you can visit http://www.json-p.org/ and http://en.wikipedia.org/wiki/JSONP.

For the sake of simplicity, the current implementation imposes the name of the query parameter that defines the callback method: this special parameter is named jsonp, and if it is not set, an exception will be thrown. The entity provider for JSON-P can produce content of type application/javascript, text/javascript, application/json-p or text/json-p.

So for example, assuming that we have the following code in the client side:

<script type="text/javascript"
        src="http://localhost:8080/rest/Users/1234?jsonp=parseResponse">
</script>

The resource could be something like:

@GET
@Produces("text/javascript")
@Path("/Users/{userId}")      
public User findUser(@PathParam("userId") String userId)
{
    return em.find(User.class, userId);
}

The result would look like:

parseResponse({"Name": "Foo", "Id": 1234, "Rank": 7});
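Conceptually, the JSON-P entity provider simply serializes the entity to JSON and wraps it in a call to the function named by the jsonp parameter. The class below is a minimal, self-contained sketch of that wrapping step only; it is not eXo's actual provider code.

```java
// Minimal illustration of what a JSON-P entity provider does: it wraps the
// serialized JSON entity in a call to the function named by the "jsonp"
// query parameter. This is a sketch only, NOT eXo's actual provider code.
public class JsonpWrapper {

    // Wrap a JSON payload in the given callback function name.
    static String wrap(String callback, String json) {
        return callback + "(" + json + ");";
    }

    public static void main(String[] args) {
        // With ?jsonp=parseResponse, the response body becomes:
        // parseResponse({"Name": "Foo", "Id": 1234, "Rank": 7});
        System.out.println(wrap("parseResponse",
            "{\"Name\": \"Foo\", \"Id\": 1234, \"Rank\": 7}"));
    }
}
```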

This chapter provides all FAQs related to the contents mentioned above.

It is a draft for a future FAQ on JCR usage.

So we have configured JCR in standalone mode and want to reconfigure it for a clustered environment. First of all, let's check whether all requirements are satisfied:

Now we need to configure the container. Check exo-configuration.xml to be sure that you are using the JBossTS Transaction Service and the Infinispan Transaction Manager, as shown below.

<component>
   <key>org.infinispan.transaction.lookup.TransactionManagerLookup</key>
   <type>org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup</type>
</component>
   
<component>
  <key>org.exoplatform.services.transaction.TransactionService</key>
  <type>org.exoplatform.services.transaction.infinispan.JBossTransactionsService</type>
  <init-params>
    <value-param>
      <name>timeout</name>
      <value>3000</value>
    </value-param>
  </init-params>   
</component>

The next stage is the JCR configuration itself. We need Infinispan configuration templates for the data cache, the indexer cache and the lock-manager cache. Later, they will be used to configure JCR's core components. There are pre-bundled templates in the EAR or JAR under conf/standalone/cluster. They can be used as-is or rewritten if needed. Now, reconfigure each workspace: only a few parameters need changing, namely <cache>, <query-handler> and <lock-manager>.

Those properties have the same meaning and restrictions as in the previous block. The last property, "max-volatile-time", is not mandatory but recommended: it ensures that the latest index changes become visible on every cluster node within 60 seconds at most.
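For reference, a cluster-ready <query-handler> block typically combines the Infinispan-based changes filter with max-volatile-time. The sketch below follows that shape; the index directory and the template paths are assumptions that must be adapted to your installation:

```xml
<query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
  <properties>
    <!-- Local index directory (assumed location, adapt to your installation) -->
    <property name="index-dir" value="../temp/index/repository/production" />
    <!-- Cluster-aware changes filter backed by Infinispan -->
    <property name="changesfilter-class"
              value="org.exoplatform.services.jcr.impl.core.query.ispn.ISPNIndexChangesFilter" />
    <!-- Pre-bundled Infinispan template for the indexer cache (assumed path) -->
    <property name="infinispan-configuration"
              value="jar:/conf/standalone/cluster/infinispan-indexer.xml" />
    <!-- Index changes visible on every cluster node within 60s at most -->
    <property name="max-volatile-time" value="60" />
  </properties>
</query-handler>
```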

That's all. The JCR is ready to join a cluster.

OS clients (Windows, Linux, etc.) do not set an encoding in the request. But the eXo JCR WebDAV server looks for an encoding in the Content-Type header and sets it as jcr:encoding (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, 14.17 Content-Type, e.g. Content-Type: text/html; charset=ISO-8859-4). So, if a client sets the Content-Type header, e.g. from JS code on a page, it will work for a text file as expected.

If a WebDAV request doesn't contain a content encoding, it is possible to write a dedicated action in the customer application. The action will set jcr:encoding using its own logic, e.g. based on the IP address or user preferences.
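Such an action is registered on the SessionActionCatalog component through the AddActionsPlugin. The sketch below follows that configuration shape; org.example.SetEncodingAction is a hypothetical customer-written action class, and the event types, workspace and isDeep values are assumptions to adapt:

```xml
<external-component-plugins>
  <!-- The catalog on which JCR actions are registered -->
  <target-component>org.exoplatform.services.jcr.impl.ext.action.SessionActionCatalog</target-component>
  <component-plugin>
    <name>addActions</name>
    <set-method>addPlugin</set-method>
    <type>org.exoplatform.services.jcr.impl.ext.action.AddActionsPlugin</type>
    <init-params>
      <object-param>
        <name>actions</name>
        <object type="org.exoplatform.services.jcr.impl.ext.action.AddActionsPlugin$ActionsConfig">
          <field name="actions">
            <collection type="java.util.ArrayList">
              <value>
                <object type="org.exoplatform.services.jcr.impl.ext.action.ActionConfiguration">
                  <!-- Trigger on property creation/update, e.g. WebDAV uploads -->
                  <field name="eventTypes"><string>addProperty,changeProperty</string></field>
                  <field name="workspace"><string>production</string></field>
                  <field name="isDeep"><boolean>true</boolean></field>
                  <!-- Hypothetical action that sets jcr:encoding -->
                  <field name="actionClassName"><string>org.example.SetEncodingAction</string></field>
                </object>
              </value>
            </collection>
          </field>
        </object>
      </object-param>
    </init-params>
  </component-plugin>
</external-component-plugins>
```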

Since GateIn beta 2, a set of features has been added to customize a GateIn instance without modifying the GateIn binary. This use case is called a portal extension in this documentation. Those features are also required to be able to launch several portal instances at the same time; in "eXo terminology", that means having several "portal.war" files.

To migrate an application to GateIn, the first thing we need to do is ensure that our application properly supports several portal container instances. The following section aims to help you be compatible with GateIn.

Now all your HttpServlets that need to get the current ExoContainer must extend org.exoplatform.container.web.AbstractHttpServlet. This abstract class ensures that the environment has been properly set, so you will be able to call the usual methods such as ExoContainerContext.getCurrentContainer() (if it must also be compatible with standalone mode) or PortalContainer.getInstance() (if it only has to work in a portal environment).

If you used to implement the method service(HttpServletRequest req, HttpServletResponse res), you now need to implement onService(ExoContainer container, HttpServletRequest req, HttpServletResponse res); this method directly gives you the current ExoContainer in its signature.

If your Http Filter or your HttpServlet requires a PortalContainer to initialize, you need to convert your code so that the initialization code is launched in the onAlreadyExists method of an org.exoplatform.container.RootContainer.PortalContainerInitTask.

We need to rely on init tasks in order to be sure that the portal container is in the right state when the task is executed; in other words, the task may be delayed if you try to execute it too early. Each task is linked to a web application, so when we add a new task, we first retrieve all the portal containers that depend on this web application according to the PortalContainerDefinitions, and for each container we add the task to a sorted queue whose order is in fact the order of the web application dependencies defined in the PortalContainerDefinition. If no PortalContainerDefinition can be found, we execute the task synchronously, which is in fact the old behavior (i.e. without the starter).
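The queueing and ordering described above can be sketched generically. The class below is a simplified, self-contained illustration, not the actual eXo kernel implementation; Runnable stands in for the real PortalContainerInitTask interface, and the dependency list plays the role of the PortalContainerDefinition:

```java
import java.util.*;

// Simplified illustration of the init-task queueing described above: tasks
// registered for a web application are queued per portal container, ordered
// by that container's dependency list. NOT the actual eXo kernel code.
public class InitTaskQueue {
    private final List<String> dependencyOrder;                 // from the PortalContainerDefinition
    private final SortedMap<Integer, List<Runnable>> queue = new TreeMap<>();

    public InitTaskQueue(List<String> dependencyOrder) {
        this.dependencyOrder = dependencyOrder;
    }

    // Add a task coming from the given web application; a webapp with no
    // entry in the definition runs synchronously (the old behavior).
    public void addTask(String webApp, Runnable task) {
        int priority = dependencyOrder.indexOf(webApp);
        if (priority < 0) {
            task.run();                                         // synchronous fallback
        } else {
            queue.computeIfAbsent(priority, k -> new ArrayList<>()).add(task);
        }
    }

    // Execute queued tasks in dependency order once the container is ready.
    public void executeAll() {
        for (List<Runnable> tasks : queue.values())
            for (Runnable t : tasks)
                t.run();
        queue.clear();
    }

    public static void main(String[] args) {
        InitTaskQueue q = new InitTaskQueue(Arrays.asList("eXoResources", "portal", "sample-ext"));
        // Tasks are added out of order but executed in dependency order.
        q.addTask("sample-ext", () -> System.out.println("init sample-ext"));
        q.addTask("portal", () -> System.out.println("init portal"));
        q.executeAll();   // prints "init portal" then "init sample-ext"
    }
}
```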

The supported init tasks are:

An init task is defined as below:

To add a task, you can either call:

We will take as an example the class GadgetRegister that is used to register new Google gadgets on a given portal container.

The old code was:

The new code relies on an org.exoplatform.container.RootContainer.PortalContainerPostInitTask, as you can see below.

A PortalContainerDefinition tells the platform how it must initialize and manage your portal. In a PortalContainerDefinition, you can define a set of properties, such as:

You can define and register a PortalContainerDefinition thanks to an external plugin that has to be treated at the RootContainer level. In other words, your configuration file must be a file conf/configuration.xml packaged into a jar file, or $AS_HOME/exo-conf/configuration.xml (for more details, please have a look at the article Container Configuration).

See below an example of a configuration file that defines and registers a PortalContainerDefinition:

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the PortalContainerConfig -->
    <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Add PortalContainer Definitions</name>
      <!-- The name of the method to call on the PortalContainerConfig in order to register the PortalContainerDefinitions -->
      <set-method>registerPlugin</set-method>
      <!-- The full qualified name of the PortalContainerDefinitionPlugin -->
      <type>org.exoplatform.container.definition.PortalContainerDefinitionPlugin</type>
      <init-params>
        <object-param>
          <name>portal</name>
          <object type="org.exoplatform.container.definition.PortalContainerDefinition">
            <!-- The name of the portal container -->
            <field name="name"><string>portal</string></field>
            <!-- The name of the context name of the rest web application -->
            <field name="restContextName"><string>rest</string></field>
            <!-- The name of the realm -->
            <field name="realmName"><string>exo-domain</string></field>
            <!-- All the dependencies of the portal container ordered by loading priority -->
            <field name="dependencies">
              <collection type="java.util.ArrayList">
                <value>
                  <string>eXoResources</string>
                </value>
                <value>
                  <string>portal</string>
                </value>
                <value>
                  <string>dashboard</string>
                </value>
                <value>
                  <string>exoadmin</string>
                </value>
                <value>
                  <string>eXoGadgets</string>
                </value>
                <value>
                  <string>eXoGadgetServer</string>
                </value>
                <value>
                  <string>rest</string>
                </value>
                <value>
                  <string>web</string>
                </value>
                <value>
                  <string>wsrp-producer</string>
                </value>
                <value>
                  <string>sample-ext</string>
                </value>
              </collection>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

In the previous example, we define a portal container called "portal", whose rest context name is "rest", whose realm name is "exo-domain" and whose dependencies are the web applications "eXoResources", "portal", and so on. The platform will load "eXoResources" first, then "portal", etc.

To do that, you first need to change the default values used by a PortalContainer that has not been defined by a PortalContainerDefinition. Those default values can be modified thanks to a set of init parameters of the component PortalContainerConfig.

The component PortalContainerConfig must be registered at the RootContainer level. In other words, your configuration file must be a file conf/configuration.xml packaged into a jar file, or $AS_HOME/exo-conf/configuration.xml (for more details, please have a look at the article Container Configuration).

In the example below we will rename:

  • The portal name "portal" to "myPortal".

  • The rest servlet context name "rest" to "myRest".

  • The realm name "exo-domain" to "my-exo-domain".

See below an example

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <component>
    <!-- The full qualified name of the PortalContainerConfig -->
    <type>org.exoplatform.container.definition.PortalContainerConfig</type>
    <init-params>
      <!-- The name of the default portal container -->
      <value-param>
        <name>default.portal.container</name>
        <value>myPortal</value>
      </value-param>
      <!-- The name of the default rest ServletContext -->
      <value-param>
        <name>default.rest.context</name>
        <value>myRest</value>
      </value-param>
      <!-- The name of the default realm -->
      <value-param>
        <name>default.realm.name</name>
        <value>my-exo-domain</value>
      </value-param>
    </init-params>
  </component>
</configuration>

Once your configuration is ready, you need to:

  • Update the file WEB-INF/web.xml of the file "portal.war" by changing the "display-name" (the new value is "myPortal") and the "realm-name" in the "login-config" (the new value is "my-exo-domain").

  • If you use JBoss AS: Update the file WEB-INF/jboss-web.xml of the file "portal.war" by changing the "security-domain" (the new value is "java:/jaas/my-exo-domain").

  • Rename the "portal.war" to "myPortal.war" (or "02portal.war" to "02myPortal.war")

  • Update the file WEB-INF/web.xml of the file "rest.war" by changing the "display-name" (the new value is "myRest") and the "realm-name" in the "login-config" (the new value is "my-exo-domain").

  • If you use JBoss AS: Update the file WEB-INF/jboss-web.xml of the file "rest.war" by changing the "security-domain" (the new value is "java:/jaas/my-exo-domain").

  • Rename the "rest.war" to "myRest.war"

  • If "portal.war" and "rest.war" were embedded into an ear file: Update the file META-INF/application.xml of the file "exoplatform.ear" by renaming "02portal.war" to "02myPortal.war", "portal" to "myPortal", "rest.war" to "myRest.war" and "rest" to "myRest".

The end of the process depends on your application server.

To indicate to the platform that a given web application has configuration files to provide, you need to:

Simply adding this Servlet Context Listener will add the Servlet Context of this web application to the unified Servlet Context of all the PortalContainers that depend on this web application according to their PortalContainerDefinition.

See an example of a web.xml below:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
                 "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
  <display-name>sample-ext</display-name>

  <context-param>
    <param-name>org.exoplatform.frameworks.jcr.command.web.fckeditor.digitalAssetsWorkspace</param-name>
    <param-value>collaboration</param-value>
    <description>Binary assets workspace name</description>
  </context-param>

  <context-param>
    <param-name>org.exoplatform.frameworks.jcr.command.web.fckeditor.digitalAssetsPath</param-name>
    <param-value>/Digital Assets/</param-value>
    <description>Binary assets path</description>
  </context-param>

  <context-param>
    <param-name>CurrentFolder</param-name>
    <param-value>/Digital Assets/</param-value>
    <description>Binary assets workspace name</description>
  </context-param>

  <!-- ================================================================== -->
  <!--   RESOURCE FILTER TO CACHE MERGED JAVASCRIPT AND CSS               -->
  <!-- ================================================================== -->
  <filter>
    <filter-name>ResourceRequestFilter</filter-name>
    <filter-class>org.exoplatform.portal.application.ResourceRequestFilter</filter-class>
  </filter>

  <filter-mapping>
    <filter-name>ResourceRequestFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>


  <!-- ================================================================== -->
  <!--           LISTENER                                                 -->
  <!-- ================================================================== -->
  <listener>
    <listener-class>org.exoplatform.container.web.PortalContainerConfigOwner</listener-class>
  </listener>
  <!-- ================================================================== -->
  <!--           SERVLET                                                  -->
  <!-- ================================================================== -->
  <servlet>
    <servlet-name>GateInServlet</servlet-name>
    <servlet-class>org.gatein.wci.api.GateInServlet</servlet-class>
    <load-on-startup>0</load-on-startup>
  </servlet>
  <!--  =================================================================  -->
  <servlet-mapping>
    <servlet-name>GateInServlet</servlet-name>
    <url-pattern>/gateinservlet</url-pattern>
  </servlet-mapping>
</web-app>

A portal extension is in fact a web application declared as a PortalContainerConfigOwner (see the previous section for more details about a PortalContainerConfigOwner) that has been added to the dependency list of the PortalContainerDefinition of a given portal.

See below an example of a configuration file that adds the portal extension "sample-ext" to the dependency list of the portal "portal":

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the PortalContainerConfig -->
    <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Add PortalContainer Definitions</name>
      <!-- The name of the method to call on the PortalContainerConfig in order to register the PortalContainerDefinitions -->
      <set-method>registerPlugin</set-method>
      <!-- The full qualified name of the PortalContainerDefinitionPlugin -->
      <type>org.exoplatform.container.definition.PortalContainerDefinitionPlugin</type>
      <init-params>
        <object-param>
          <name>portal</name>
          <object type="org.exoplatform.container.definition.PortalContainerDefinition">
            <!-- The name of the portal container -->
            <field name="name"><string>portal</string></field>
            <!-- The name of the context name of the rest web application -->
            <field name="restContextName"><string>rest</string></field>
            <!-- The name of the realm -->
            <field name="realmName"><string>exo-domain</string></field>
            <!-- All the dependencies of the portal container ordered by loading priority -->
            <field name="dependencies">
              <collection type="java.util.ArrayList">
                <value>
                  <string>eXoResources</string>
                </value>
                <value>
                  <string>portal</string>
                </value>
                <value>
                  <string>dashboard</string>
                </value>
                <value>
                  <string>exoadmin</string>
                </value>
                <value>
                  <string>eXoGadgets</string>
                </value>
                <value>
                  <string>eXoGadgetServer</string>
                </value>
                <value>
                  <string>rest</string>
                </value>
                <value>
                  <string>web</string>
                </value>
                <value>
                  <string>wsrp-producer</string>
                </value>
                <!-- The sample-ext has been added at the end of the dependency list in order to have the highest priority towards
                the other web applications and particularly towards "portal" -->
                <value>
                  <string>sample-ext</string>
                </value>
              </collection>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

To duplicate the entire "portal.war" file to create a new portal, you just need to duplicate the following files from the original "portal.war":

You also need to duplicate the "rest.war" file to create a dedicated rest web application for your portal, as we must have one rest web application per portal. In fact, you just need to duplicate the following files from the original "rest.war":

Finally, you need to register and define the corresponding PortalContainerDefinition. The PortalContainerDefinition of your portal will be composed of:

See an example below:

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the PortalContainerConfig -->
    <target-component>org.exoplatform.container.definition.PortalContainerConfig</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Add PortalContainer Definitions</name>
      <!-- The name of the method to call on the PortalContainerConfig in order to register the PortalContainerDefinitions -->
      <set-method>registerPlugin</set-method>
      <!-- The full qualified name of the PortalContainerDefinitionPlugin -->
      <type>org.exoplatform.container.definition.PortalContainerDefinitionPlugin</type>
      <init-params>
        <object-param>
          <name>sample-portal</name>
          <object type="org.exoplatform.container.definition.PortalContainerDefinition">
            <!-- The name of the portal container -->
            <field name="name"><string>sample-portal</string></field>
            <!-- The name of the context name of the rest web application -->
            <field name="restContextName"><string>rest-sample-portal</string></field>
            <!-- The name of the realm -->
            <field name="realmName"><string>exo-domain-sample-portal</string></field>
            <!-- All the dependencies of the portal container ordered by loading priority -->
            <field name="dependencies">
              <collection type="java.util.ArrayList">
                <value>
                  <string>eXoResources</string>
                </value>
                <value>
                  <string>portal</string>
                </value>
                <value>
                  <string>dashboard</string>
                </value>
                <value>
                  <string>exoadmin</string>
                </value>
                <value>
                  <string>eXoGadgets</string>
                </value>
                <value>
                  <string>eXoGadgetServer</string>
                </value>
                <value>
                  <string>rest-sample-portal</string>
                </value>
                <value>
                  <string>web</string>
                </value>
                <value>
                  <string>wsrp-producer</string>
                </value>
                <value>
                  <string>sample-portal</string>
                </value>
              </collection>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

Now, the ConfigurationManager by default uses the unified servlet context of the portal to get any resources, in particular the configuration files. The unified servlet context is aware of the priorities that have been set in the PortalContainerDefinition of the portal. In other words, if you want, for instance, to import the file war:/conf/database/database-configuration.xml and this file exists in 2 different web applications, the file from the last web application (according to the dependency order) will be loaded.

So, in order to avoid issues when we would like to package several products at the same time (i.e. WCM, DMS, CS, KS), we need to:

The example below is the file WEB-INF/conf/configuration.xml of the product "sample-ext".

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <import>war:/conf/sample-ext/common/common-configuration.xml</import>
  <import>war:/conf/sample-ext/jcr/jcr-configuration.xml</import>
  <import>war:/conf/sample-ext/portal/portal-configuration.xml</import>
  <import>war:/conf/sample-ext/web/web-inf-extension-configuration.xml</import>
</configuration>

In your configuration file, you can use a special variable called container.name.suffix in order to add a suffix to values that could change between portal containers. The value of this variable will be an empty string if no PortalContainerDefinition has been defined; otherwise, the value will be "-" followed by the name of the portal container (e.g. "-portal" for the portal container "portal"). See an example below:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
   xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <component>
    <key>org.exoplatform.services.database.HibernateService</key>
    <jmx-name>database:type=HibernateService</jmx-name>
    <type>org.exoplatform.services.database.impl.HibernateServiceImpl</type>
    <init-params>
      <properties-param>
        <name>hibernate.properties</name>
        <description>Default Hibernate Service</description>
        <property name="hibernate.show_sql" value="false"/>
        <property name="hibernate.cglib.use_reflection_optimizer" value="true"/>
        <property name="hibernate.connection.url" value="jdbc:hsqldb:file:../temp/data/exodb${container.name.suffix}"/>
        <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/>
        <property name="hibernate.connection.autocommit" value="true"/>
        <property name="hibernate.connection.username" value="sa"/>
        <property name="hibernate.connection.password" value=""/>
        <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/>
        <property name="hibernate.c3p0.min_size" value="5"/>
        <property name="hibernate.c3p0.max_size" value="20"/>
        <property name="hibernate.c3p0.timeout" value="1800"/>
        <property name="hibernate.c3p0.max_statements" value="50"/>
      </properties-param>
    </init-params>
  </component>
</configuration>
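The suffix resolution described above can be sketched as follows. This is a simplified illustration, not eXo's actual implementation; the class and method names are hypothetical:

```java
// Simplified illustration of how container.name.suffix is resolved:
// empty when no PortalContainerDefinition exists, otherwise "-" plus
// the portal container name. Names here are hypothetical.
public class SuffixExample {
    static String containerNameSuffix(String portalContainerName, boolean hasDefinition) {
        return hasDefinition ? "-" + portalContainerName : "";
    }

    public static void main(String[] args) {
        // With a definition for "sample-portal", the HSQLDB URL above becomes:
        String url = "jdbc:hsqldb:file:../temp/data/exodb"
            + containerNameSuffix("sample-portal", true);
        System.out.println(url); // jdbc:hsqldb:file:../temp/data/exodb-sample-portal
        // Without any PortalContainerDefinition, the suffix is empty:
        System.out.println(containerNameSuffix("sample-portal", false).isEmpty()); // true
    }
}
```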

Now you can add new JCR repositories or workspaces thanks to an external plugin; the configurations of your JCR repositories will be merged according to the merge algorithm.

See an example of jcr-configuration.xml below:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the RepositoryServiceConfiguration -->
    <target-component>org.exoplatform.services.jcr.config.RepositoryServiceConfiguration</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Sample RepositoryServiceConfiguration Plugin</name>
      <!-- The name of the method to call on the RepositoryServiceConfiguration in order to add the RepositoryServiceConfigurations -->
      <set-method>addConfig</set-method>
      <!-- The full qualified name of the RepositoryServiceConfigurationPlugin -->
      <type>org.exoplatform.services.jcr.impl.config.RepositoryServiceConfigurationPlugin</type>
      <init-params>
        <value-param>
          <name>conf-path</name>
          <description>JCR configuration file</description>
          <value>war:/conf/sample-ext/jcr/repository-configuration.xml</value>
        </value-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

See an example of repository-configuration.xml below:

<repository-service default-repository="repository">
  <repositories>
    <repository name="repository" system-workspace="system" default-workspace="portal-system">
      <security-domain>exo-domain</security-domain>
      <access-control>optional</access-control>
      <authentication-policy>org.exoplatform.services.jcr.impl.core.access.JAASAuthenticator</authentication-policy>
      <workspaces>
        <workspace name="sample-ws">
          <container class="org.exoplatform.services.jcr.impl.storage.jdbc.optimisation.CQJDBCWorkspaceDataContainer">
            <properties>
              <property name="source-name" value="jdbcexo${container.name.suffix}" />
              <property name="dialect" value="hsqldb" />
              <property name="multi-db" value="false" />
              <property name="max-buffer-size" value="204800" />
              <property name="swap-directory" value="../temp/swap/sample-ws${container.name.suffix}" />
            </properties>
            <value-storages>
              <value-storage id="sample-ws" class="org.exoplatform.services.jcr.impl.storage.value.fs.TreeFileValueStorage">
                <properties>
                  <property name="path" value="../temp/values/sample-ws${container.name.suffix}" />
                </properties>
                <filters>
                  <filter property-type="Binary" />
                </filters>
              </value-storage>
            </value-storages>
          </container>
          <initializer class="org.exoplatform.services.jcr.impl.core.ScratchWorkspaceInitializer">
            <properties>
              <property name="root-nodetype" value="nt:unstructured" />
              <property name="root-permissions"
                value="any read;*:/platform/administrators read;*:/platform/administrators add_node;*:/platform/administrators set_property;*:/platform/administrators remove" />
            </properties>
          </initializer>
          <cache enabled="true">
            <properties>
              <property name="max-size" value="20000" />
              <property name="live-time" value="30000" />
            </properties>
          </cache>
          <query-handler class="org.exoplatform.services.jcr.impl.core.query.lucene.SearchIndex">
            <properties>
              <property name="index-dir" value="../temp/jcrlucenedb/sample-ws${container.name.suffix}" />
            </properties>
          </query-handler>
          <lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
              <properties>
                  <property name="time-out" value="15m" />
                  <property name="infinispan-configuration" value="conf/standalone/cluster/test-infinispan-lock.xml" />
                  <property name="jgroups-configuration" value="udp-mux.xml" />
                  <property name="infinispan-cluster-name" value="JCR-cluster" />
                  <property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
                  <property name="infinispan-cl-cache.jdbc.table.create" value="true" />
                  <property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
                  <property name="infinispan-cl-cache.jdbc.id.column" value="id" />
                  <property name="infinispan-cl-cache.jdbc.data.column" value="data" />
                  <property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
                  <property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
                  <property name="infinispan-cl-cache.jdbc.dialect" value="${dialect}" />
                  <property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
              </properties>
          </lock-manager>
        </workspace>
      </workspaces>
    </repository>
  </repositories>
</repository-service>

Now you can add new Resource Bundles, thanks to an external plugin.

See an example below:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the ResourceBundleService -->
    <target-component>org.exoplatform.services.resources.ResourceBundleService</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Sample ResourceBundle Plugin</name>
      <!-- The name of the method to call on the ResourceBundleService in order to register the ResourceBundles -->
      <set-method>addResourceBundle</set-method>
      <!-- The full qualified name of the BaseResourceBundlePlugin -->
      <type>org.exoplatform.services.resources.impl.BaseResourceBundlePlugin</type>
      <init-params>
        <!--values-param>
          <name>classpath.resources</name>
          <description>The resources that start with the following package name should be loaded from the file system</description>
          <value>locale.portlet</value>
        </values-param-->
        <values-param>
          <name>init.resources</name>
          <description>Store the following resources into the db for the first launch </description>
          <value>locale.portal.sample</value>
        </values-param>
        <values-param>
          <name>portal.resource.names</name>
          <description>The properties files of the portal; those files will be merged
            into one ResourceBundle properties file</description>
          <value>locale.portal.sample</value>
        </values-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

Now each portal container has its own ClassLoader, which is automatically set for you at runtime (it can be retrieved via portalContainer.getPortalClassLoader()). This ClassLoader is a unified ClassLoader that is also aware of the dependency order defined in the PortalContainerDefinition, so to add new keys or update key values, you just need to:

In the example below, we want to change the values of the keys UIHomePagePortlet.Label.Username and UIHomePagePortlet.Label.Password, and add the new key UIHomePagePortlet.Label.SampleKey into the Resource Bundle locale.portal.webui.
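A sketch of what the overriding properties file could contain is shown below. The key names are taken from the paragraph above; the sample values and the exact file location within your extension are hypothetical:

```properties
# Override the values of two existing keys
UIHomePagePortlet.Label.Username=Enter your user name
UIHomePagePortlet.Label.Password=Enter your password
# Add a new key
UIHomePagePortlet.Label.SampleKey=My sample value
```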

Now you can add new Portal Configurations, Navigations, Pages or Portlet Preferences thanks to an external plugin.

See an example below:

<?xml version="1.0" encoding="ISO-8859-1"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the UserPortalConfigService -->
    <target-component>org.exoplatform.portal.config.UserPortalConfigService</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>new.portal.config.user.listener</name>
      <!-- The name of the method to call on the UserPortalConfigService in order to register the NewPortalConfigs -->
      <set-method>initListener</set-method>
      <!-- The full qualified name of the NewPortalConfigListener -->
      <type>org.exoplatform.portal.config.NewPortalConfigListener</type>
      <description>this listener init the portal configuration</description>
      <init-params>
        <object-param>
          <name>portal.configuration</name>
          <description>description</description>
          <object type="org.exoplatform.portal.config.NewPortalConfig">
            <field name="predefinedOwner">
              <collection type="java.util.HashSet">
                <value>
                  <string>classic</string>
                </value>
              </collection>
            </field>
            <field name="ownerType">
              <string>portal</string>
            </field>
            <field name="templateLocation">
              <string>war:/conf/sample-ext/portal</string>
            </field>
          </object>
        </object-param>
        <object-param>
          <name>group.configuration</name>
          <description>description</description>
          <object type="org.exoplatform.portal.config.NewPortalConfig">
            <field name="predefinedOwner">
              <collection type="java.util.HashSet">
                <value>
                  <string>platform/users</string>
                </value>
              </collection>
            </field>
            <field name="ownerType">
              <string>group</string>
            </field>
            <field name="templateLocation">
              <string>war:/conf/sample-ext/portal</string>
            </field>
          </object>
        </object-param>
        <object-param>
          <name>user.configuration</name>
          <description>description</description>
          <object type="org.exoplatform.portal.config.NewPortalConfig">
            <field name="predefinedOwner">
              <collection type="java.util.HashSet">
                <value>
                  <string>root</string>
                </value>
              </collection>
            </field>
            <field name="ownerType">
              <string>user</string>
            </field>
            <field name="templateLocation">
              <string>war:/conf/sample-ext/portal</string>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

We added a GenericFilter that allows you to define new HTTP filters thanks to an external plugin. Your filter will need to implement the interface org.exoplatform.web.filter.Filter.

See an example of configuration below:

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the ExtensibleFilter -->
    <target-component>org.exoplatform.web.filter.ExtensibleFilter</target-component>
    <component-plugin>
      <!-- The name of the plugin -->
      <name>Sample Filter Definition Plugin</name>
      <!-- The name of the method to call on the ExtensibleFilter in order to register the FilterDefinitions -->
      <set-method>addFilterDefinitions</set-method>
      <!-- The full qualified name of the FilterDefinitionPlugin -->
      <type>org.exoplatform.web.filter.FilterDefinitionPlugin</type>
      <init-params>
        <object-param>
          <name>Sample Filter Definition</name>
          <object type="org.exoplatform.web.filter.FilterDefinition">
            <!-- The filter instance -->
            <field name="filter"><object type="org.exoplatform.sample.ext.web.SampleFilter"/></field>
            <!-- The mapping to use -->
            <!-- WARNING: the mapping is expressed with regular expressions -->
            <field name="patterns">
              <collection type="java.util.ArrayList" item-type="java.lang.String">
                <value>
                  <string>/.*</string>
                </value>
              </collection>
            </field>
          </object>
        </object-param>
      </init-params>
    </component-plugin>
  </external-component-plugins>
</configuration>

See an example of Filter below:
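A minimal sketch of such a filter is shown below. It assumes that org.exoplatform.web.filter.Filter declares a single doFilter method taking the servlet request, response, and filter chain; check the interface in your version before relying on this exact signature:

```java
package org.exoplatform.sample.ext.web;

import javax.servlet.FilterChain;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.exoplatform.web.filter.Filter;

// Sketch of a filter registered through the ExtensibleFilter plugin above.
public class SampleFilter implements Filter
{
   public void doFilter(ServletRequest request, ServletResponse response,
                        FilterChain chain) throws Exception
   {
      // Custom pre-processing would go here.
      System.out.println("SampleFilter called");
      // Always pass the request down the chain.
      chain.doFilter(request, response);
   }
}
```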

We added a GenericHttpListener that allows you to define new HttpSessionListeners and/or ServletContextListeners thanks to an external plugin. The GenericHttpListener broadcasts events through the ListenerService, and you can easily capture them. The events that it broadcasts are:

If you want to listen to org.exoplatform.web.GenericHttpListener.sessionCreated, you will need to create a Listener that extends Listener<PortalContainer, HttpSessionEvent>.
If you want to listen to org.exoplatform.web.GenericHttpListener.sessionDestroyed, you will need to create a Listener that extends Listener<PortalContainer, HttpSessionEvent>.
If you want to listen to org.exoplatform.web.GenericHttpListener.contextInitialized, you will need to create a Listener that extends Listener<PortalContainer, ServletContextEvent>.
If you want to listen to org.exoplatform.web.GenericHttpListener.contextDestroyed, you will need to create a Listener that extends Listener<PortalContainer, ServletContextEvent>.

See an example of configuration below:

<?xml version="1.0" encoding="UTF-8"?>
<configuration
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd http://www.exoplatform.org/xml/ns/kernel_1_3.xsd"
  xmlns="http://www.exoplatform.org/xml/ns/kernel_1_3.xsd">
  <external-component-plugins>
    <!-- The full qualified name of the ListenerService -->
    <target-component>org.exoplatform.services.listener.ListenerService</target-component>
    <component-plugin>
      <!-- The name of the listener that is also the name of the target event -->
      <name>org.exoplatform.web.GenericHttpListener.sessionCreated</name>
      <!-- The name of the method to call on the ListenerService in order to register the Listener -->
      <set-method>addListener</set-method>
      <!-- The full qualified name of the Listener -->
      <type>org.exoplatform.sample.ext.web.SampleHttpSessionCreatedListener</type>
    </component-plugin>
    <component-plugin>
      <!-- The name of the listener that is also the name of the target event -->
      <name>org.exoplatform.web.GenericHttpListener.sessionDestroyed</name>
      <!-- The name of the method to call on the ListenerService in order to register the Listener -->
      <set-method>addListener</set-method>
      <!-- The full qualified name of the Listener -->
      <type>org.exoplatform.sample.ext.web.SampleHttpSessionDestroyedListener</type>
    </component-plugin>
    <component-plugin>
      <!-- The name of the listener that is also the name of the target event -->
      <name>org.exoplatform.web.GenericHttpListener.contextInitialized</name>
      <!-- The name of the method to call on the ListenerService in order to register the Listener -->
      <set-method>addListener</set-method>
      <!-- The full qualified name of the Listener -->
      <type>org.exoplatform.sample.ext.web.SampleContextInitializedListener</type>
    </component-plugin>
    <component-plugin>
      <!-- The name of the listener that is also the name of the target event -->
      <name>org.exoplatform.web.GenericHttpListener.contextDestroyed</name>
      <!-- The name of the method to call on the ListenerService in order to register the Listener -->
      <set-method>addListener</set-method>
      <!-- The full qualified name of the Listener -->
      <type>org.exoplatform.sample.ext.web.SampleContextDestroyedListener</type>
    </component-plugin>
  </external-component-plugins>
</configuration>

See an example of Session Listener below:
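As an illustration, a listener for the sessionCreated event could look like the sketch below. It assumes that eXo's Listener base class exposes an onEvent(Event<S, D>) callback and that Event provides getSource() and getData(); verify these against the kernel version you use:

```java
package org.exoplatform.sample.ext.web;

import javax.servlet.http.HttpSessionEvent;

import org.exoplatform.container.PortalContainer;
import org.exoplatform.services.listener.Event;
import org.exoplatform.services.listener.Listener;

// Sketch of a listener for org.exoplatform.web.GenericHttpListener.sessionCreated.
public class SampleHttpSessionCreatedListener extends Listener<PortalContainer, HttpSessionEvent>
{
   @Override
   public void onEvent(Event<PortalContainer, HttpSessionEvent> event) throws Exception
   {
      PortalContainer container = event.getSource();
      HttpSessionEvent sessionEvent = event.getData();
      System.out.println("Session created in portal container " + container.getName()
         + ": " + sessionEvent.getSession().getId());
   }
}
```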

See an example of Context Listener below:

We assume that you have a clean JBoss version of GateIn; in other words, the file exoplatform.ear is already in the deploy directory of JBoss, and the related application policy is in your conf/login-config.xml.

You need to:


  <application-policy name="exo-domain-sample-portal">
    <authentication>
      <login-module code="org.exoplatform.web.security.PortalLoginModule" flag="required">
        <module-option name="portalContainerName">sample-portal</module-option>
        <module-option name="realmName">exo-domain-sample-portal</module-option>
      </login-module>
      <login-module code="org.exoplatform.services.security.jaas.SharedStateLoginModule" flag="required">
        <module-option name="portalContainerName">sample-portal</module-option>
        <module-option name="realmName">exo-domain-sample-portal</module-option>
      </login-module>
      <login-module code="org.exoplatform.services.security.j2ee.JbossLoginModule" flag="required">
        <module-option name="portalContainerName">sample-portal</module-option>
        <module-option name="realmName">exo-domain-sample-portal</module-option>
      </login-module>
    </authentication>
  </application-policy>

We assume that you have a clean Tomcat version of GateIn; in other words, all the jar files of GateIn and their dependencies are already in tomcat/lib, all the war files of GateIn are in tomcat/webapps, and the realm name "exo-domain" is defined in the file tomcat/conf/jaas.conf.

This section will show you how to use AS-managed DataSources under JBoss AS.

Under JBoss, just put a file XXX-ds.xml in the deploy directory (for example, server/default/deploy). In this file, we will configure all the datasources that eXo will need. There should be four, named jdbcjcr_portal, jdbcjcr_sample-portal, jdbcidm_portal, and jdbcidm_sample-portal.

Example:

<?xml version="1.0" encoding="UTF-8"?>
<datasources>
   <no-tx-datasource>
      <jndi-name>jdbcjcr_portal</jndi-name>
      <connection-url>jdbc:hsqldb:${jboss.server.data.dir}/data/jdbcjcr_portal</connection-url>
      <driver-class>org.hsqldb.jdbcDriver</driver-class>
      <user-name>sa</user-name>
      <password></password>
   </no-tx-datasource>

   <no-tx-datasource>
      <jndi-name>jdbcjcr_sample-portal</jndi-name>
      <connection-url>jdbc:hsqldb:${jboss.server.data.dir}/data/jdbcjcr_sample-portal</connection-url>
      <driver-class>org.hsqldb.jdbcDriver</driver-class>
      <user-name>sa</user-name>
      <password></password>
   </no-tx-datasource>

   <no-tx-datasource>
      <jndi-name>jdbcidm_portal</jndi-name>
      <connection-url>jdbc:hsqldb:${jboss.server.data.dir}/data/jdbcidm_portal</connection-url>
      <driver-class>org.hsqldb.jdbcDriver</driver-class>
      <user-name>sa</user-name>
      <password></password>
   </no-tx-datasource>

   <no-tx-datasource>
      <jndi-name>jdbcidm_sample-portal</jndi-name>
      <connection-url>jdbc:hsqldb:${jboss.server.data.dir}/data/jdbcidm_sample-portal</connection-url>
      <driver-class>org.hsqldb.jdbcDriver</driver-class>
      <user-name>sa</user-name>
      <password></password>
   </no-tx-datasource>
</datasources>

The properties that can be set for a datasource are described in Configuring JDBC DataSources - The non-transactional DataSource configuration schema.