
Getting Started

Introduction and getting started with jBPM

1. Overview

1.1. What is jBPM?

jBPM is a flexible Business Process Management (BPM) Suite. It is light-weight, fully open-source (distributed under Apache License 2.0) and written in Java. It allows you to model, execute, and monitor business processes and cases throughout their life cycle.

Process

A business process allows you to model your business goals by describing the steps that need to be executed to achieve those goals; the order of those steps is depicted using a flow chart. This greatly improves the visibility and agility of your business logic. jBPM focuses on executable business processes, which are business processes that contain enough detail so they can actually be executed on a BPM engine. Executable business processes bridge the gap between business users and developers as they are higher-level and use domain-specific concepts that are understood by business users but can also be executed directly.

Business processes need to be supported throughout their entire life cycle: authoring, deployment, process management and task lists, and dashboards and reporting.

The core of jBPM is a light-weight, extensible workflow engine written in pure Java that allows you to execute business processes using the latest BPMN 2.0 specification. It can run in any Java environment, embedded in your application or as a service.

On top of the jBPM engine, a lot of features and tools are offered to support business processes throughout their entire life cycle:

  • Pluggable human task service based on WS-HumanTask for including tasks that need to be performed by human actors.

  • Pluggable persistence and transactions (based on JPA / JTA).

  • Case management capabilities added to the jBPM engine to support more adaptive and flexible use cases

  • Web-based process designer to support the graphical creation and simulation of your business processes (drag and drop).

  • Web-based data modeler and form modeler to support the creation of data models and task forms

  • Web-based, customizable dashboards and reporting

  • All combined in one web-based Business Central application, supporting the complete BPM life cycle:

    • Modeling and deployment - author your processes, rules, data models, forms and other assets

    • Execution - execute processes, tasks, rules and events on the core runtime engine

    • Runtime Management - work on assigned tasks, manage process instances, etc.

    • Reporting - keep track of the execution using Business Activity Monitoring capabilities

kie wb after login
  • Eclipse-based developer tools to support the modeling, testing and debugging of processes

  • Remote API to jBPM engine as a service (REST, JMS, Remote Java API)

  • Integration with Maven, Spring, OSGi, etc.

BPM creates the bridge between business analysts, developers and end users by offering process management features and tools in a way that both business users and developers like. Domain-specific nodes can be plugged into the palette, making the processes more easily understood by business users.

jBPM supports case management by offering more advanced features to support adaptive and dynamic processes that require flexibility to model complex, real-life situations that cannot easily be described using a rigid process. We bring control back to the end users by allowing them to control which parts of the process should be executed; this allows dynamic deviation from the process.

jBPM is not just an isolated jBPM engine. Complex business logic can be modeled as a combination of business processes with business rules and complex event processing. jBPM can be combined with the Drools project to support one unified environment that integrates these paradigms where you model your business logic as a combination of processes, rules and events.

1.2. Overview

Overview

This figure gives an overview of the different components of the jBPM project.

  • The core engine is the heart of the project and allows you to execute business processes in a flexible manner. It is a pure Java component that you can choose to embed as part of your application or deploy it as a service and connect to it through the web-based UI or remote APIs.

    • An optional core service is the human task service that will take care of the human task life cycle if human actors participate in the process.

    • Another optional core service is runtime persistence; this will persist the state of all your process instances and log audit information about everything that is happening at runtime.

    • Applications can connect to the core engine through its Java API or as a set of CDI services, but also remotely through a REST and JMS API.

  • Web-based tools allow you to model, simulate and deploy your processes and other related artifacts (like data models, forms, rules, etc.):

    • The process designer allows business users to design and simulate business processes in a web-based environment.

    • The data modeler allows non-technical users to view, modify and create data models for use in your processes.

    • A web-based form modeler also allows you to create, generate or edit forms related to your processes (to start the process or to complete one of the user tasks).

    • Rule authoring allows you to specify different types of business rules (decision tables, guided rules, etc.) for combination with your processes.

    • All assets are stored and managed by the Guvnor repository (exposed through Git) and can be managed (versioning), built and deployed.

  • The web-based management console allows business users to manage their runtime (start new processes, inspect running instances, etc.), to manage their task list and to perform Business Activity Monitoring (BAM) and see reports.

  • The Eclipse-based developer tools are an extension to the Eclipse IDE, targeted towards developers, and allow you to create business processes using drag and drop, test and debug your processes, etc.

Each of these components is described in more detail below.

1.3. Core Engine

The core engine is the heart of the project. It’s a light-weight workflow engine that executes your business processes. It can be embedded as part of your application or deployed as a service (possibly in the cloud). Its most important features are the following:

  • Solid, stable core engine for executing your process instances.

  • Native support for the latest BPMN 2.0 specification for modeling and executing business processes.

  • Strong focus on performance and scalability.

  • Light-weight (can be deployed on almost any device that supports a simple Java Runtime Environment; does not require any web container at all).

  • (Optional) pluggable persistence with a default JPA implementation.

  • Pluggable transaction support with a default JTA implementation.

  • Implemented as a generic jBPM engine, so it can be extended to support new node types or other process languages.

  • Listeners to get notified about various events.

  • Ability to migrate running process instances to a new version of their process definition
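Because the jBPM engine is pure Java, embedding it can be as simple as loading a kjar from the classpath and starting a process by its BPMN2 id. The following is a minimal, hypothetical sketch (the process id com.sample.hello is an assumption and must match a process packaged in your kjar):

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class EmbeddedEngineExample {

    public static void main(String[] args) {
        // Load the KIE module (kjar) available on the classpath
        KieServices ks = KieServices.Factory.get();
        KieContainer kContainer = ks.getKieClasspathContainer();

        // Create a session and start a process instance by its BPMN2 process id
        KieSession kSession = kContainer.newKieSession();
        ProcessInstance instance = kSession.startProcess("com.sample.hello");
        System.out.println("Started process instance " + instance.getId());

        kSession.dispose();
    }
}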

The jBPM engine can also be integrated with a few other (independent) core services:

  • The human task service can be used to manage human tasks when human actors need to participate in the process. It is fully pluggable and the default implementation is based on the WS-HumanTask specification and manages the life cycle of the tasks, task lists, task forms, and some more advanced features like escalation, delegation, rule-based assignments, etc.

  • The history log can store all information about the execution of all the processes in the jBPM engine. This is necessary if you need access to historic information as runtime persistence only stores the current state of all active process instances. The history log can be used to store all current and historic states of active and completed process instances. It can be used to query for any information related to the execution of process instances, for monitoring, analysis, etc.

1.4. Business Central

The Business Central web-based application covers the complete life cycle of BPM projects, starting at the authoring phase and going through implementation, execution and monitoring. It combines a series of web-based tools into one configurable solution to manage all assets and runtime data needed for the business solution.

It supports the following:

  • A repository service to store your business processes and related artifacts, using a Git repository, which supports versioning, remote Git access (as a file system) and access via REST.

  • A web-based user interface to manage your business processes, targeted towards business users; it also supports the visualization (and editing) of your artifacts (the web-based editors like designer, data and form modeler are integrated here), as well as categorisation, build and deployment, etc.

  • Collaboration features which enable multiple actors (for example business users and developers) to work together on the same project.

kie wb after login
Figure 1. Business Central application

1.4.1. Process Designer

The web-based jBPM Designer allows you to model your business processes in a web-based environment. It is targeted towards business users and offers a graphical editor for viewing and editing your business processes (using drag and drop), similar to the Eclipse plugin. It supports round-tripping between the Eclipse editor and the web-based designer. It also supports simulation of processes.

Designer
Figure 2. Web-based designer for creating BPMN2 processes

1.4.2. Data Modeler

Processes almost always have some kind of data to work with. The data modeler allows non-technical users to view, edit or create these data models.

Typically, a business process analyst or data analyst will capture the requirements for a process or application and turn these into a formal set of interrelated data structures. The new Data Modeler tool provides an easy, straightforward and visual aid for building both logical and physical data models, without the need for advanced development skills or explicit coding. The data modeler is transparently integrated into Business Central. Its main goals are to make data models first class citizens in the process improvement cycle and allow for full process automation through the integrated use of data structures (and the forms that will be used to interact with them).

1.4.3. Process Management

Business processes and all their related runtime information can be managed through Business Central. It is targeted towards process administrator users and its main features include:

  • Process definitions management: view the entire list of process definitions currently deployed into a Kie Server and their details.

  • Process instances management: the ability to start new process instances, get a filtered list of process instances and visually inspect the state of a specific process instance.

  • Human tasks management: get a list of all tasks, view details such as current assignees, comments and activity logs, send reminders, forward tasks to different users and more.

  • Execution Errors management: allows administrators to view any execution error reported in the Kie Server instance, inspect its details including the stacktrace and acknowledge the error.

  • Jobs management: view currently scheduled jobs and schedule new jobs to run in the Kie Server instance.

ProcessInstanceDiagram
Figure 3. Managing your process instances

For more details around the entire management section please read the process management chapter.

1.4.4. Task Inbox

As part of process execution, human involvement is often needed to review, approve or provide extra information. Business Central provides a Task Inbox section where any user potentially involved with these tasks can manage their workload. There, users are able to get a list of all tasks, complete tasks using customizable task forms, collaborate using comments and more.

TaskInbox
Figure 4. Task Inbox

1.4.5. Business Activity Monitoring

As of version 6.0, jBPM comes with full-featured BAM tooling which allows non-technical users to visually compose business dashboards. With this module, developing business activity monitoring and reporting solutions on top of jBPM has never been easier!

BAM
Figure 5. Business Activity Monitoring

Key features:

  • Visual configuration of dashboards (Drag’n’drop).

  • Graphical representation of KPIs (Key Performance Indicators).

  • Configuration of interactive report tables.

  • Data export to Excel and CSV format.

  • Filtering and search, both in-memory or SQL based.

  • Data extraction from external systems, through different protocols.

  • Granular access control for different user profiles.

  • Look’n’feel customization tools.

  • Pluggable chart library architecture.

Target users:

  • Managers / Business owners. Consumer of dashboards and reports.

  • IT / System architects. Connectivity and data extraction.

  • Analysts / Developers. Dashboard composition & configuration.

To get further information about the new and noteworthy BAM capabilities of jBPM please read the chapter Business Activity Monitoring.

1.5. Eclipse Developer Tools

The Eclipse-based tools are a set of plugins to the Eclipse IDE and allow you to integrate your business processes in your development environment. It is targeted towards developers and has some wizards to get started, a graphical editor for creating your business processes (using drag and drop) and a lot of advanced testing and debugging capabilities.

EclipseFlow
Figure 6. Eclipse editor for creating BPMN2 processes

It includes the following features:

  • Wizard for creating a new jBPM project

  • A graphical editor for BPMN 2.0 processes

  • The ability to plug in your own domain-specific nodes

  • Validation

  • Runtime support (so you can select which version of jBPM you would like to use)

  • Graphical debugging to see all running process instances of a selected session, to visualize the current state of one specific process instance, etc.

2. Getting Started

We recommend taking a look at our Getting Started page as a starting point for getting a full environment up and running with all the components you need in order to design, deploy, run and monitor a process. Alternatively, you can also take a quick tutorial that will guide you through most of the components using a simple example, available in the Installer Chapter. This will teach you how to download and use the installer to create a demo setup, including most of the components. It uses a simple example to guide you through the most important features. Screencasts are available to help you out as well.

If you would like to read more first, the following chapters focus on the core jBPM engine (API, BPMN 2.0, etc.). Later chapters describe the other components and more complex topics like domain-specific processes, flexible processes, etc. After reading the core chapters, you should be able to jump to other chapters that you might find interesting.

You can also start playing around with some examples that are offered in a separate download. Check out the Examples chapter to see how to start playing with these.

After reading through these chapters, you should be ready to start creating your own processes and integrate the jBPM engine with your application. You can start from the demo setup created by the installer or start from scratch.

2.1. Downloads

Latest releases can be downloaded from jBPM.org. Just pick the artifact you want:

  • server: single zip distribution with jBPM server (including WildFly, Business Central, jBPM case management showcase and service repository)

  • bin: all the jBPM binaries (JARs) and their transitive dependencies

  • src: the sources of the core components

  • docs: the documentation

  • examples: some jBPM examples, can be imported into Eclipse

  • installer: the jBPM Installer, downloads and installs a demo setup of jBPM

  • installer-full: full jBPM Installer, downloads and installs a demo setup of jBPM, already contains a number of dependencies prepackaged (so they don’t need to be downloaded separately)

Older releases are archived at http://downloads.jboss.org/jbpm/release/.

Alternatively, you can also use one of the many Docker images available for use at the Download section.

2.2. Community

Here are some useful links related to the jBPM community:

Please feel free to join us in our IRC channel at chat.freenode.net#jbpm. This is where most of the real-time discussion about the project takes place and where you can find most of the developers most of their time as well. Don’t have an IRC client installed? Simply go to http://webchat.freenode.net/, input your desired nickname, and specify #jbpm. Then click login to join the fun.

2.3. Sources

2.3.1. License

The jBPM code itself uses the Apache License v2.0.

Some other components we integrate with have their own license:

  • The new Eclipse BPMN2 plugin is Eclipse Public License (EPL) v1.0.

  • The legacy web-based designer is based on Oryx/Wapama and is distributed under the MIT License.

  • The Drools project is Apache License v2.0.

2.3.2. Source code

jBPM now uses git for its source code version control system. The sources of the jBPM project can be found here (including all releases starting from jBPM 5.0-CR1):

The source of some of the other components can be found here:

2.3.3. Building from source

If you’re interested in building the source code, contributing, releasing, etc. make sure to read this README.

2.4. Getting Involved

We are often asked "How do I get involved". Luckily the answer is simple, just write some code and submit it :) There are no hoops you have to jump through or secret handshakes. We have a very minimal "overhead" that we do request to allow for scalable project development. Below we provide a general overview of the tools and "workflow" we request, along with some general advice.

If you contribute some good work, don’t forget to blog about it :)

2.4.1. Sign up to jboss.org

Signing up to jboss.org will give you access to the JBoss wiki, forums and JIRA. Go to https://www.jboss.org/ and click "Register".

sign jbossorg

2.4.2. Sign the Contributor Agreement

The only form you need to sign is the contributor agreement, which is fully automated via the web. As the image below says "This establishes the terms and conditions for your contributions and ensures that source code can be licensed appropriately"

sign contributor

2.4.3. Submitting issues via JIRA

To be able to interact with the core development team you will need to use JIRA, the issue tracker. This ensures that all requests are logged and allocated to a release schedule and that all discussions are captured in one place. Bug reports, bug fixes, feature requests and feature submissions should all go here. General questions should be directed to the mailing lists.

Minor code submissions, like format or documentation fixes do not need an associated JIRA issue created.

submit jira

2.4.4. Fork GitHub

With the contributor agreement signed and your requests submitted to JIRA you should now be ready to code :) Create a GitHub account and fork any of the Drools, jBPM or Guvnor repositories. The fork will create a copy in your own GitHub space which you can work on at your own pace. If you make a mistake, don’t worry; blow it away and fork again. Note that each GitHub repository provides you with the clone (checkout) URL; GitHub will provide you URLs specific to your fork.

fork github

2.4.5. Writing Tests

When writing tests, try to keep them minimal and self contained. We prefer to keep the DRL fragments within the test, as it makes for quicker reviewing. If there are a large number of rules, using a String is not practical, so by all means place them in separate DRL files instead, to be loaded from the classpath. If your tests need to use a model, please try to use those that already exist for other unit tests, such as Person, Cheese or Order. If no classes exist that have the fields you need, try to update fields of existing classes before adding a new class.

There are a vast number of tests to look over to get an idea, MiscTest is a good place to start.
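For illustration, a minimal, self-contained test in the spirit described above might look as follows (a hypothetical rule kept as a DRL fragment inside the test; it assumes kie-internal’s KieHelper is available on the test classpath):

import org.junit.Test;
import org.kie.api.KieBase;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

import static org.junit.Assert.assertEquals;

public class MinimalRuleTest {

    // Keep the DRL fragment inside the test for quicker reviewing
    private static final String DRL =
            "package com.sample\n" +
            "rule \"hello\" when\n" +
            "    String( this == \"ping\" )\n" +
            "then\n" +
            "    insert( \"pong\" );\n" +
            "end";

    @Test
    public void firesRuleForMatchingFact() {
        KieBase kieBase = new KieHelper().addContent(DRL, ResourceType.DRL).build();
        KieSession session = kieBase.newKieSession();
        try {
            session.insert("ping");
            // Exactly one rule should fire for the inserted fact
            assertEquals(1, session.fireAllRules());
        } finally {
            session.dispose();
        }
    }
}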

unit test

2.4.6. Commit with Correct Conventions

When you commit, make sure you use the correct conventions. The commit must start with the JIRA issue id, such as DROOLS-1946. This ensures the commits are cross referenced via JIRA, so we can see all commits for a given issue in the same place. After the id the title of the issue should come next. Then use a newline, indented with a dash, to provide additional information related to this commit. Use an additional new line and dash for each separate point you wish to make. You may add additional JIRA cross references to the same commit, if it’s appropriate. In general try to avoid combining unrelated issues in the same commit.
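For illustration, a hypothetical commit message following these conventions (the issue title and bullet points below are invented) could look like:

DROOLS-1946 NullPointerException when firing rules against an empty KieBase
 - add null check before iterating rule packages
 - add unit test reproducing the issue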

Don’t forget to rebase your local fork from the original master and then push your commits back to your fork.

jira crossreferenced

2.4.7. Submit Pull Requests

With your code rebased from the original master and pushed to your personal GitHub area, you can now submit your work as a pull request. If you look at the top of the page in GitHub for your work area there will be a "Pull Request" button. Selecting this will provide a GUI to automate the submission of your pull request.

The pull request then goes into a queue for everyone to see and comment on. Below you can see a typical pull request. Pull requests allow for discussion and show all associated commits and the diffs for each commit. The discussions typically involve code reviews which provide helpful suggestions for improvements, and allow us to leave inline comments on specific parts of the code. Don’t be disheartened if we don’t merge straight away; it can often take several revisions before we accept a pull request. Luckily GitHub makes it very trivial to go back to your code, do some more commits and then update your pull request to your latest and greatest.

It can take time for us to get round to responding to pull requests, so please be patient. Submitted tests that come with a fix will generally be applied quite quickly, whereas tests submitted without a fix will often wait until we have time to also write the fix. Don’t forget to rebase and resubmit your request from time to time, otherwise over time it will accumulate merge conflicts and core developers will generally ignore those.

submit pull request

2.5. What to do if I encounter problems or have questions?

You can always contact the jBPM community for assistance.

IRC: #jbpm at chat.freenode.net

jBPM Setup Google Group - Installation, configuration, setup and administration discussions for Business Central, Eclipse, runtime environments and general enterprise architectures.

jBPM Usage Google Group - Authoring, executing and managing processes with jBPM. Any questions regarding the use of jBPM. General API help and best practices in building BPM systems.

Visit our website for more options on how to get help.

Legacy jBPM User Forum - serves as an archive; post new questions to one of the Google Groups above

3. Business applications

3.1. Overview

A business application can be defined as an automated solution, built with selected frameworks and capabilities, that implements business functions and/or solves business problems. Capabilities can be (among others):

  • persistence

  • messaging

  • transactions

  • business processes, business rules

  • planning solutions

A business application is more a logical grouping of individual services that represent certain business capabilities. Usually these services are deployed separately and can also be versioned individually. The overall goal is that the complete business application allows a particular domain to achieve its business goals, e.g. order management, accommodation management, etc.

A business application is:
  • Built on any runtime (most popular options)

    • SpringBoot

    • WildFly

    • Thorntail

  • Deployable to the cloud with just a single command

    • OpenShift

    • Kubernetes

    • Docker

  • UI agnostic

    • Doesn’t enforce any UI framework and lets users make their own choice

  • Configurable database profiles

    • to allow a smooth transition from one database to another with just a single parameter/switch

  • Generated

    • makes it really easy for developers to get started, without the initial failures usually related to configuration

A business application consists of:
  • Multiple projects

    • data model project - shared data model between business assets and service

    • business assets (kjar) project - easily importable into Business Central

    • service project - actual service with various capabilities

  • Configuration for

    • maven repository - settings.xml

    • database profiles

    • deployment setup

      • local

      • docker

      • OpenShift

The service project is the one that is deployable, but it will in most cases include the business assets and data model projects. The data model project represents the common data structures shared between the service implementation and the business assets. That enables proper encapsulation, promotes reuse and at the same time reduces the temptation to make data model classes something more than they are, i.e. to include too much implementation in the data models.

Business applications you build are not restricted to having only one of each project type. In order to build the solution you need, your business application can:

  • Have multiple data model projects - each service project can expose its own public data model

  • Have multiple business assets (kjar) projects - in case there is a business need for it

  • Have multiple service projects - to split services into smaller components for better manageability

  • Have UI modules - either per service (embedded in the service project) or a federated one (separate project for UI only)

  • Service projects can communicate with each other either directly or via business processes

The following diagram represents a sample business application.

Business application diagram

3.2. Create your business application

A business application can be created in multiple ways, depending on the project types you need.

3.2.1. Generate business application

The fastest and recommended way to quickly generate your business application is by using the jBPM online service: start.jbpm.org

Generate application at start.jbpm.org

With the online service you can:

  • generate your business app using a default (most commonly used) configuration

  • configure your business application to include specific features that you need

The generated application will be delivered as a zip archive with the following structure:

generated application

To provide more information about the individual steps, let’s review the different options that users can choose from.

3.2.1.1. Capabilities

Capabilities essentially define the features that your business application will be equipped with. Available options are:

  • Business automation covers features for process management, case management, decision management and optimisation. These will be configured by default in the service project of your business application, although you can turn them off via configuration.

  • Decision management covers mainly decision and rules related features (backed by Drools project)

  • Business optimisation covers planning problems and solutions related features (backed by OptaPlanner project)

3.2.1.2. Application information

General information about the application:

  • name - the name that will be used for the projects generated

  • package - a valid Java package name that will be created in the projects and used as the group id of the Maven projects

  • version - selected version of jBPM/KIE that should be used for service project

3.2.1.3. Project types

Selection of project types to be included in the business application

  • data model - basic maven/jar project to keep the data structures

  • business assets - kjar project that can be easily imported into Business Central for development

  • service - service project that will include chosen capabilities with all bits configured

3.2.2. Manually create business application

In case you can’t use the jBPM online service to generate the application, you can manually create the individual projects. jBPM provides Maven archetypes that can be easily used to generate the application. In fact, the jBPM online service uses these archetypes behind the scenes to generate business applications.

Business assets project archetype

org.kie:kie-kjar-archetype:7.26.0.Final

Service project archetype

org.kie:kie-service-spring-boot-archetype:7.26.0.Final

Data model archetype

org.apache.maven.archetypes:maven-archetype-quickstart:1.3

Example commands that generate all three types of projects:

mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-model-archetype -DarchetypeVersion=7.26.0.Final -DgroupId=com.company -DartifactId=test-model -Dversion=1.0-SNAPSHOT -Dpackage=com.company.model

mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion=7.26.0.Final -DgroupId=com.company -DartifactId=test-kjar -Dversion=1.0-SNAPSHOT -Dpackage=com.company

mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-service-spring-boot-archetype -DarchetypeVersion=7.26.0.Final -DgroupId=com.company -DartifactId=test-service -Dversion=1.0-SNAPSHOT -Dpackage=com.company.service -DappType=bpm

When generating the projects from the archetypes in the same directory you should end up with exactly the same structure as generated by the jBPM online service.

3.3. Run your business application

Once your business application is created, the next step is to actually run it.

3.3.1. Launch application

By default the business application has a single runnable project - the service project. The service project is equipped with two scripts (for both Linux and Windows):

  • launch.sh/launch.bat

  • launch-dev.sh/launch-dev.bat

The main difference between these two scripts is the target execution mode:

  • launch.sh/bat starts the application in standalone mode, without additional requirements.

  • launch-dev.sh/bat starts the application in development mode (in other words, managed mode), so it requires Business Central to be available as the jBPM controller.

Development mode is meant to allow people to work on the business assets projects and dynamically deploy changes to the business application without the need to restart it. At the same time it provides a complete monitoring environment over business automation capabilities (process instances, tasks, jobs, etc).

To launch your application just go into service project ({your business application name}-service) and invoke

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

The clean install part of the command tells Maven how to build. It will then build the projects in the following order:

  • Data model

  • Business assets

  • Service

The first time this might take a while as it will download all dependencies of the project. At the end of the build it will start the application, and after a few seconds you should see output similar to the following:

INFO  o.k.s.s.a.KieServerAutoConfiguration     : KieServer (id business-application-service (name business-application-service)) started initialization process
INFO  o.k.server.services.impl.KieServerImpl   : Server Default Extension has been successfully registered as server extension
INFO  o.k.server.services.impl.KieServerImpl   : Drools KIE Server extension has been successfully registered as server extension
INFO  o.k.server.services.impl.KieServerImpl   : DMN KIE Server extension has been successfully registered as server extension
INFO  o.k.s.api.marshalling.MarshallerFactory  : Marshaller extensions init
INFO  o.k.server.services.impl.KieServerImpl   : jBPM KIE Server extension has been successfully registered as server extension
INFO  o.k.server.services.impl.KieServerImpl   : Case-Mgmt KIE Server extension has been successfully registered as server extension
INFO  o.k.server.services.impl.KieServerImpl   : jBPM-UI KIE Server extension has been successfully registered as server extension
INFO  o.k.s.s.impl.policy.PolicyManager        : Registered KeepLatestContainerOnlyPolicy{interval=0 ms} policy under name KeepLatestOnly
INFO  o.k.s.s.impl.policy.PolicyManager        : Policy manager started successfully, activated policies are []
INFO  o.k.server.services.impl.KieServerImpl   : Selected startup strategy ControllerBasedStartupStrategy - deploys kie containers given by controller ignoring locally defined
INFO  o.k.s.services.impl.ContainerManager     : About to install containers '[]' on kie server 'KieServer{id='business-application-service'name='business-application-service'version='7.9.0.Final'location='http://localhost:8090/rest/server'}'
INFO  o.k.server.services.impl.KieServerImpl   : KieServer business-application-service is ready to receive requests
INFO  o.k.s.s.a.KieServerAutoConfiguration     : KieServer (id business-application-service) started successfully
INFO  o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
INFO  s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8090 (http)
INFO  c.c.b.service.Application                : Started Application in 14.534 seconds (JVM running for 15.193)

and you should be able to access your business application at http://localhost:8090/

Business application landing page
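Besides the landing page, the application exposes the Kie Server REST API under /rest/server (secured with the default user/user credentials described in the configuration section below). As a minimal sketch, a remote Java client based on the kie-server-client library could start a process in a deployed container; the container id and process id used here are assumptions and must match your deployed kjar:

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class RemoteClientExample {

    public static void main(String[] args) {
        // Connect to the business application's Kie Server REST endpoint
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8090/rest/server", "user", "user");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // Start a process instance in a deployed container (ids are hypothetical)
        ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);
        Long processInstanceId = processClient.startProcess(
                "business-application-kjar-1_0-SNAPSHOT", "com.company.sample-process");
        System.out.println("Started process instance " + processInstanceId);
    }
}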

3.3.2. Launch application in development mode

Development mode requires Business Central to be available, by default at http://localhost:8080/jbpm-console. The easiest way to get that up and running is to use the jBPM single zip distribution that can be downloaded at jbpm.org. Look at the Getting Started guide to get yourself familiar with Business Central.

Make sure you have Business Central up and running before launching your business application in development mode.

3.3.3. Import your business assets project into Business Central

The business assets project that was just created can be easily imported into Business Central as long as it is a valid Git repository. To make it one:

  • Go into business assets project - {your business application name}-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "Initial project structure"

  • Log in to Business Central and go to projects

  • Select import project and enter the following URL: file:///{path to your business application}/{your business application name}-kjar

  • Click import and confirm project to be imported

3.3.3.1. Work on your business assets

Once the business assets project is imported into Business Central you can start working on it. Just go to the project and add assets such as business processes, rules, decision tables, etc.

3.3.3.2. Launch business application in development mode

To launch your application just go into service project ({your business application name}-service) and invoke

./launch-dev.sh clean install for Linux/Unix

./launch-dev.bat clean install for Windows

This should print the following entry right after the build:

Launching the application in development mode - requires connection to controller (Business Central)

and, similar to launching in standalone mode, after a couple of seconds you should be able to access your business application at http://localhost:8090/

Once the application has started, it should successfully connect to the jBPM controller and thereby become visible in the servers perspective of Business Central.

Connected business application
3.3.3.3. Deploy business assets project into running business application

After adding assets to your project in Business Central you can just deploy it to a running server instance. Click the Deploy button on your project and in a few seconds you should see the project deployed on your business application.

Connected business application with deployed project

You can use the Process Definitions and Process Instances perspectives of Business Central to interact with your newly deployed business assets such as processes or user tasks.

3.4. Configure business application

There are several components that can be configured in the business application. Depending on the capabilities selected during application generation, the set of components can differ.

The entire configuration of the business application (service project) is done via the application.properties file, which is the standard way to configure Spring Boot applications. It is located under the src/main/resources directory of the {your business application}-service folder.

3.4.1. Configuring core components

3.4.1.1. Configuring server

One of the most important configurations is the server itself: the host, port and path for the REST endpoints.

# used for server binding
server.address=localhost
server.port=8090

# used to define path for REST apis
cxf.path=/rest
3.4.1.2. Configure authentication and authorization

The business application is secured by default by protecting all REST endpoints (URL pattern /rest/*).

Authentication is enabled for a single test user named user with password user. Additionally there is a default kieserver user that allows easy connection to Business Central in development mode.

Both authentication and authorization are based on Spring Security and can be configured in DefaultWebSecurityConfig.java, which is included in the generated service project (src/main/java/com/company/service/DefaultWebSecurityConfig.java):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;


@Configuration("kieServerSecurity")
@EnableWebSecurity
public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
        .csrf().disable()
        .authorizeRequests()
        .antMatchers("/rest/*").authenticated()
        .and()
        .httpBasic();
    }

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication().withUser("user").password("user").roles("kie-server");
        auth.inMemoryAuthentication().withUser("kieserver").password("kieserver1!").roles("kie-server");
    }
}
This security configuration is just a starting point and should be altered for all business applications going into a production-like setup.
Use Keycloak as authentication provider

Configuring business applications to use Keycloak for authentication and authorisation requires a few steps:

  • Install Keycloak - follow official documentation at keycloak.org

  • Configure Keycloak once started

    • Use the default master realm or create a new one

    • Create a client named springboot-app and set its AccessType to public

    • Set the Valid Redirect URIs and Web Origins according to your local setup - with the default setup they should be set to

    • Valid Redirect URIs: http://localhost:8090/*

    • Web Origins: http://localhost:8090

    • Create realm roles that are used in the application

    • Create users used in the application and assign roles to them

  • Configure dependencies in service project pom.xml

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.keycloak.bom</groupId>
      <artifactId>keycloak-adapter-bom</artifactId>
      <version>${version.org.keycloak}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

  ....

<dependency>
  <groupId>org.keycloak</groupId>
  <artifactId>keycloak-spring-boot-starter</artifactId>
</dependency>

  • Configure application.properties

# keycloak security setup
keycloak.auth-server-url=http://localhost:8100/auth
keycloak.realm=master
keycloak.resource=springboot-app
keycloak.public-client=true
keycloak.principal-attribute=preferred_username
keycloak.enable-basic-auth=true
  • Modify DefaultWebSecurityConfig.java to ensure that Spring Security will work correctly with Keycloak

import org.keycloak.adapters.KeycloakConfigResolver;
import org.keycloak.adapters.springboot.KeycloakSpringBootConfigResolver;
import org.keycloak.adapters.springsecurity.authentication.KeycloakAuthenticationProvider;
import org.keycloak.adapters.springsecurity.config.KeycloakWebSecurityConfigurerAdapter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.core.authority.mapping.SimpleAuthorityMapper;
import org.springframework.security.core.session.SessionRegistryImpl;
import org.springframework.security.web.authentication.session.RegisterSessionAuthenticationStrategy;
import org.springframework.security.web.authentication.session.SessionAuthenticationStrategy;

@Configuration("kieServerSecurity")
@EnableWebSecurity
public class DefaultWebSecurityConfig extends KeycloakWebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        super.configure(http);
        http
        .csrf().disable()
        .authorizeRequests()
            .anyRequest().authenticated()
            .and()
        .httpBasic();
    }

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        SimpleAuthorityMapper mapper = new SimpleAuthorityMapper();
        mapper.setPrefix("");
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(mapper);
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    public KeycloakConfigResolver KeycloakConfigResolver() {
       return new KeycloakSpringBootConfigResolver();
    }

    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
    }
}

These are the steps to configure your business application to use Keycloak as the authentication and authorisation service.

3.4.1.3. Configuring execution server

The business application includes a jBPM (KIE) execution server that can be configured to be better identified:

kieserver.serverId=business-application-service
kieserver.serverName=business-application-service
kieserver.location=http://localhost:8090/rest/server
kieserver.controllers=http://localhost:8080/business-central/rest/controller

server id and server name refer to how the business application will be identified when connecting to the jBPM controller (Business Central) and thus should provide as meaningful information as possible.

location is used to inform other components that might interact with the REST API where the execution server is accessible. It should not be exactly the same location as defined by server.address and server.port, especially when running in containers (Docker/OpenShift).

controllers allows you to specify a (comma-separated) list of jBPM controller URLs.

3.4.1.4. Configuring capabilities

If your business application selected 'Business automation' as a capability, you can control which of its features should actually be turned on at runtime.

# used for decision management
kieserver.drools.enabled=true
kieserver.dmn.enabled=true

# used for business processes and cases
kieserver.jbpm.enabled=true
kieserver.jbpmui.enabled=true
kieserver.casemgmt.enabled=true

# used for planning
kieserver.optaplanner.enabled=true
3.4.1.5. Configuring data source
Data source configuration is only required for business automation (meaning when jBPM is used)
spring.datasource.username=sa
spring.datasource.password=sa
spring.datasource.url=jdbc:h2:./target/spring-boot-jbpm;MVCC=true
spring.datasource.driver-class-name=org.h2.Driver

The configuration above shows the basic data source settings; the next section deals with connection pooling for efficient data access.

Depending on the driver class selected, make sure your application adds the correct dependency that includes the JDBC driver class or data source class.
narayana.dbcp.enabled=true
narayana.dbcp.maxTotal=20

This configuration enables the data source connection pool (based on the commons-dbcp2 project); a complete list of parameters can be found on its configuration page. All parameters from the configuration page must be prefixed with narayana.dbcp.

3.4.1.6. Configuring JPA

jBPM uses Hibernate as the database access layer and thus it needs to be properly configured:

spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.H2Dialect
spring.jpa.properties.hibernate.show_sql=false
spring.jpa.properties.hibernate.hbm2ddl.auto=update
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
JPA configuration is completely based on Spring Boot, so all options for both Hibernate and JPA can be found on the Spring Boot configuration page.

An application with the business automation capability creates an entity manager factory based on the persistence.xml that comes with jBPM. In case there are more entities that should be added to this entity manager factory (e.g. custom entities for the business application), they can easily be added by specifying a comma-separated list of packages to scan:

spring.jpa.properties.entity-scan-packages=org.jbpm.springboot.samples.entities

All entities found in that package will be automatically added to the entity manager factory and thus used in the same manner as any other JPA entity in the application.
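For example, a hypothetical custom entity placed in the scanned package shown above would be picked up automatically:

package org.jbpm.springboot.samples.entities;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// Hypothetical entity registered through the entity-scan-packages property
@Entity
public class CustomerRecord {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}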

3.4.1.7. Configuring jBPM executor

jBPM executor is the backbone for asynchronous execution in jBPM. By default it is disabled, but can easily be turned on by configuration parameters.

jbpm.executor.enabled=true
jbpm.executor.retries=5
jbpm.executor.interval=0
jbpm.executor.threadPoolSize=1
jbpm.executor.timeUnit=SECONDS
  • jbpm.executor.enabled = true|false - allows you to completely disable the executor component

  • jbpm.executor.threadPoolSize = Integer - allows you to specify the thread pool size, where the default is 1

  • jbpm.executor.retries = Integer - allows you to specify the number of retries in case of errors while running a job

  • jbpm.executor.interval = Integer - allows you to specify the interval (by default in seconds) that the executor will use to synchronize with the database - the default is 0 seconds, which means it is disabled

  • jbpm.executor.timeUnit = String - allows you to specify the time unit used for calculating the interval; the value must be a valid constant of java.util.concurrent.TimeUnit, by default it’s SECONDS.

3.4.1.8. Configuring distributed timers - Quartz

In case you plan to run your application in a cluster (multiple instances of it at the same time), you need to take the timer service setup into account. Since the business application runs on top of the Tomcat web container, the only timer service option for a distributed setup is the Quartz-based one.

jbpm.quartz.enabled=true
jbpm.quartz.configuration=quartz.properties

Above are the two mandatory parameters; the configuration file needs to be either on the classpath or on the file system (if a path is given).

For distributed timers, database storage should be used and properly configured via the quartz.properties file.

#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = SpringBootScheduler
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.skipUpdateCheck=true
org.quartz.scheduler.idleWaitTime=1000
#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.jbpm.process.core.timer.impl.quartz.DeploymentsAwareStdJDBCDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=myDS
org.quartz.jobStore.nonManagedTXDataSource=notManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 5000
#============================================================================
# Configure Datasources
#============================================================================
org.quartz.dataSource.myDS.connectionProvider.class=org.jbpm.springboot.quartz.SpringConnectionProvider
org.quartz.dataSource.myDS.dataSourceName=quartzDataSource
org.quartz.dataSource.notManagedDS.connectionProvider.class=org.jbpm.springboot.quartz.SpringConnectionProvider
org.quartz.dataSource.notManagedDS.dataSourceName=quartzNotManagedDataSource
Data source names in the Quartz configuration file refer to Spring beans. Additionally, the connection provider needs to be set to org.jbpm.springboot.quartz.SpringConnectionProvider to allow integration with Spring based data sources.

By default Quartz requires two data sources:

  • a managed data source so it can participate in transactions of the jBPM engine

  • a non-managed data source so it can look up timers to trigger without any transaction handling

A jBPM based business application assumes that the Quartz database (schema) will be collocated with the jBPM tables and by that provides the data source used for transactional operations for Quartz.

The other (non-transactional) data source needs to be configured, but it should point to the same database as the main data source.

# enable to use data base as storage
jbpm.quartz.db=true

quartz.datasource.name=quartz
quartz.datasource.username=sa
quartz.datasource.password=sa
quartz.datasource.url=jdbc:h2:./target/spring-boot-jbpm;MVCC=true
quartz.datasource.driver-class-name=org.h2.Driver

# used to configure connection pool
quartz.datasource.dbcp2.maxTotal=15

# used to initialize quartz schema
quartz.datasource.initialization=true
spring.datasource.schema=classpath*:quartz_tables_h2.sql
spring.datasource.initialization-mode=always

The last three lines of the above configuration are responsible for initialising the database schema automatically. When configured, they should point to a proper DDL script.

3.4.1.9. Configuring different data bases

The business application is generated with a default H2 database - just to get started quickly and without any extra requirements. Since this default setup may not be valid for production use, the generated business applications come with configuration dedicated to:

  • MySQL

  • PostgreSQL

There are dedicated profiles - both Maven and Spring - to get you started really fast without much work. The only thing you need to do is align the configuration with your databases.

MySQL configuration

spring.datasource.username=jbpm
spring.datasource.password=jbpm
spring.datasource.url=jdbc:mysql://localhost:3306/jbpm
spring.datasource.driver-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource

#hibernate configuration
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect

PostgreSQL configuration

spring.datasource.username=jbpm
spring.datasource.password=jbpm
spring.datasource.url=jdbc:postgresql://localhost:5432/jbpm
spring.datasource.driver-class-name=org.postgresql.xa.PGXADataSource

#hibernate configuration
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect

Once the updates to the configuration are done you can launch your application via

./launch.sh clean install -Pmysql for MySQL on Linux/Unix

./launch.bat clean install -Pmysql for MySQL on Windows

./launch.sh clean install -Ppostgres for PostgreSQL on Linux/Unix

./launch.bat clean install -Ppostgres for PostgreSQL on Windows

3.4.1.10. Configuring user group providers

The business automation capability supports managing human-centric activities. To provide integration with user and group repositories there is a built-in mechanism in jBPM. There are two entry points:

  • UserGroupCallback - responsible for verifying whether a user/group exists and for collecting groups for a given user

  • UserInfo - responsible for collecting additional information about a user/group such as email address, preferred language, etc.

Both of these can be configured by providing an alternative implementation - either one of those provided out of the box or a custom-developed one.

When it comes to UserGroupCallback it is recommended to stick to the default one, as it is based on the security context of the application. That means whatever backend store is used for authentication and authorisation (e.g. Keycloak) will also be used as the source for collecting user/group information.

UserInfo requires more advanced information to be collected and thus is a separate component. Not all user/group repositories will provide the expected data, especially those that are purely used for authentication and authorisation.

The following code is needed to provide an alternative implementation of UserGroupCallback:

@Bean(name = "userGroupCallback")
public UserGroupCallback userGroupCallback(IdentityProvider identityProvider) throws IOException {
    return new MyCustomUserGroupCallback(identityProvider);
}
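A minimal sketch of such a custom callback might look as follows (it assumes the single-argument getGroupsForUser signature from kie-api 7.x and simply reuses the roles of the currently authenticated user; a real implementation would query an actual user/group repository):

import java.util.Collections;
import java.util.List;

import org.kie.api.task.UserGroupCallback;
import org.kie.internal.identity.IdentityProvider;

public class MyCustomUserGroupCallback implements UserGroupCallback {

    private final IdentityProvider identityProvider;

    public MyCustomUserGroupCallback(IdentityProvider identityProvider) {
        this.identityProvider = identityProvider;
    }

    @Override
    public boolean existsUser(String userId) {
        return userId != null && !userId.isEmpty();
    }

    @Override
    public boolean existsGroup(String groupId) {
        return groupId != null && !groupId.isEmpty();
    }

    @Override
    public List<String> getGroupsForUser(String userId) {
        // Reuse roles from the security context for the currently authenticated user
        if (userId != null && userId.equals(identityProvider.getName())) {
            return identityProvider.getRoles();
        }
        return Collections.emptyList();
    }
}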

The following code is needed to provide an alternative implementation of UserInfo:

@Bean(name = "userInfo")
public UserInfo userInfo() throws IOException {
    return new MyCustomUserInfo();
}
3.4.1.11. Enable Swagger documentation

A business application can easily enable Swagger based documentation for all endpoints available in the service project.

Add the required dependencies to the service project pom.xml:
<dependency>
  <groupId>org.apache.cxf</groupId>
  <artifactId>cxf-rt-rs-service-description-swagger</artifactId>
  <version>3.1.11</version>
</dependency>
<dependency>
  <groupId>io.swagger</groupId>
  <artifactId>swagger-jaxrs</artifactId>
  <version>1.5.15</version>
  <exclusions>
    <exclusion>
      <groupId>javax.ws.rs</groupId>
      <artifactId>jsr311-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
Enable Swagger support in application.properties
kieserver.swagger.enabled=true

Swagger document can be found at http://localhost:8090/rest/swagger.json

Enable Swagger UI

To enable the Swagger UI, add the following dependency to the pom.xml of the service project.

<dependency>
  <groupId>org.webjars</groupId>
  <artifactId>swagger-ui</artifactId>
  <version>2.2.10</version>
</dependency>

Once the Swagger UI is enabled and the server is started, the complete set of endpoints can be found at http://localhost:8090/rest/api-docs/?url=http://localhost:8090/rest/swagger.json

3.5. Develop your business application

Developing custom logic in a business application strictly depends on your specific requirements. In this guide we provide some common steps that developers might need to get started.

3.5.1. Data model

The data model project in your generated business application promotes the idea (and, in fact, best practice) of designing data models with reuse in mind. At the same time it avoids putting too much in the model (which usually happens when the model is colocated with the service itself).

The data model project should be seen as the API of the business application or of one of its services. In case of an application composed of several services, it’s recommended that each service exposes its own data model (API).

That API can then be used by both the service project and the business assets project.

The generated application model is not added as a dependency to either the service or the business assets project.

3.5.2. Business assets development

Business assets are usually developed in Business Central, where developers can create different asset types such as

  • Business processes

  • Case definitions

  • Rules

  • Decision tables

  • Data objects

  • Forms

  • Others

Before these assets can be created, the business assets project needs to be imported into Business Central as described in Import your business assets project into Business Central.

Whenever working with business assets you can easily try them out in your business application by running the application in development mode. That allows developers to build and deploy the assets project directly to a running application. Moreover, Business Central can also be used to quickly interact with processes, tasks and cases. To learn more see Launch application in development mode.

Once the work on business assets is finished, it should be fetched back into your business application source:

  • go into business assets project - {your business application name}-kjar

  • execute git fetch origin

  • execute git rebase origin/master

With this your business assets are now part of the business application source tree and can be launched in standalone mode - without Business Central as jBPM controller.

To launch your application just go into service project ({your business application name}-service) and invoke

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

If the version of your business assets project changes, you will have to update that information in the service project. Locate the configuration file used for standalone mode, {your business application name}-service.xml, then edit it and update the version for the specific container.

Business assets project has two special files

  • pom.xml

  • src/main/resources/META-INF/kie-deployment-descriptor.xml

The first one is the Apache Maven project file and is managed via Project Settings in Business Central. It allows you to define project information (group id, artifact id, version, name, description). In addition, it allows you to define the dependencies the project will have, e.g. the data model project.

Whenever dependencies are added from the following group ids, they should be marked with scope provided:

  • org.kie

  • org.drools

  • org.jbpm

  • org.optaplanner

The deployment descriptor allows you to configure various components of the business automation capability, such as

  • Persistence for jBPM

  • Runtime strategy

  • Event listeners

  • Work item handlers

  • Marshalling strategies

  • And more

For a complete description of the deployment descriptor see Deployment descriptor.

3.5.3. Work Item Handlers

Business processes can take advantage of so-called domain-specific services, which are modelled as work items, and their actual execution is carried out by work item handlers. Work items defined in the process or case definition are linked by name to a work item handler (the implementation).

Work item handlers can be registered in three ways

  • via the deployment descriptor - use this approach if you want to decouple the life cycle of the handler from your business application

  • via auto registration of Spring Components - use this when you have your handlers implemented as Spring beans (components) that are bound to the life cycle of the application

  • via manual registration of any work item handler implementation - use this when the handler is not implemented by you (so the Spring Component approach cannot be used) or when it has advanced initialisation logic that does not fit the deployment descriptor approach

3.5.3.1. Register Work Item Handler via deployment descriptor

Registration in the deployment descriptor can be done directly in Business Central via Project settings → Deployments.

Add the work item handler mapped to the name of the work item

deployment descriptor work item handler

This will result in the following source code of the deployment descriptor:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <persistence-unit>org.jbpm.domain</persistence-unit>
    <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
    <audit-mode>JPA</audit-mode>
    <persistence-mode>JPA</persistence-mode>
    <runtime-strategy>SINGLETON</runtime-strategy>
    <marshalling-strategies/>
    <event-listeners/>
    <task-event-listeners/>
    <globals/>
    <work-item-handlers>
        <work-item-handler>
            <resolver>mvel</resolver>
            <identifier>new org.jbpm.process.workitem.rest.RESTWorkItemHandler("user", "password", classLoader)</identifier>
            <parameters/>
            <name>Rest</name>
        </work-item-handler>
    </work-item-handlers>
    <environment-entries/>
    <configurations/>
    <required-roles/>
    <remoteable-classes/>
    <limit-serialization-classes>true</limit-serialization-classes>
</deployment-descriptor>
3.5.3.2. Register Work Item Handler via auto registration of Spring Components

The easiest way to register work item handlers is to rely on Spring discovery and configuration of beans. It’s enough to annotate your work item handler class with @Component("WorkItemName") and that bean will be automatically registered in jBPM.

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;
import org.springframework.stereotype.Component;

@Component("Custom")
public class CustomWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {

        manager.completeWorkItem(workItem.getId(), null);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {

    }

}

This will register CustomWorkItemHandler under the name Custom, so every work item named Custom will use that handler to execute its logic.

The name attribute of the @Component annotation is mandatory for registration to happen. If the name is missing, the work item handler won’t be registered and a warning will be logged.
3.5.3.3. Register Work Item Handler programmatically

The last resort option is to get hold of the DeploymentService and register handlers programmatically:

@Autowired
private SpringKModuleDeploymentService deploymentService;

...

@PostConstruct
public void configure() {

    deploymentService.registerWorkItemHandler("Custom", new CustomWorkItemHandler());
}

3.5.4. Event listeners

jBPM allows you to register various event listeners that will be invoked upon events triggered by the jBPM engine. Supported event listener types are

  • ProcessEventListener

  • AgendaEventListener

  • RuleRuntimeEventListener

  • TaskLifeCycleEventListener

  • CaseEventListener

Similar to work item handlers, event listeners can be registered in three ways

  • via the deployment descriptor - use this approach if you want to decouple the life cycle of the listener from your business application

  • via auto registration of Spring Components - use this when you have your listeners implemented as Spring beans (components) that are bound to the life cycle of the application

  • via manual registration of any event listener implementation - use this when the listener is not implemented by you (so the Spring Component approach cannot be used) or when it has advanced initialisation logic that does not fit the deployment descriptor approach

3.5.4.1. Register event listener via deployment descriptor

Registration in the deployment descriptor can be done directly in Business Central via Project settings → Deployments.

deployment descriptor event listener

This will result in the following source code of the deployment descriptor:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<deployment-descriptor xsi:schemaLocation="http://www.jboss.org/jbpm deployment-descriptor.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <persistence-unit>org.jbpm.domain</persistence-unit>
    <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
    <audit-mode>JPA</audit-mode>
    <persistence-mode>JPA</persistence-mode>
    <runtime-strategy>SINGLETON</runtime-strategy>
    <marshalling-strategies/>
    <event-listeners>
        <event-listener>
            <resolver>mvel</resolver>
            <identifier>new org.jbpm.listeners.CustomProcessEventListener</identifier>
            <parameters/>
        </event-listener>
    </event-listeners>
    <task-event-listeners/>
    <globals/>
    <work-item-handlers/>
    <environment-entries/>
    <configurations/>
    <required-roles/>
    <remoteable-classes/>
    <limit-serialization-classes>true</limit-serialization-classes>
</deployment-descriptor>
3.5.4.2. Register event listener via auto registration of Spring Components

The easiest way to register event listeners is to rely on Spring discovery and configuration of beans. It’s enough to annotate your event listener implementation class with @Component() and that bean will be automatically registered in jBPM.

import org.kie.api.event.process.ProcessCompletedEvent;
import org.kie.api.event.process.ProcessEventListener;
import org.kie.api.event.process.ProcessNodeLeftEvent;
import org.kie.api.event.process.ProcessNodeTriggeredEvent;
import org.kie.api.event.process.ProcessStartedEvent;
import org.kie.api.event.process.ProcessVariableChangedEvent;
import org.springframework.stereotype.Component;

@Component
public class CustomProcessEventListener implements ProcessEventListener {

    @Override
    public void beforeProcessStarted(ProcessStartedEvent event) {

    }

    ...

}
An event listener can extend the default implementation of the given listener type to avoid implementing all methods, e.g. org.kie.api.event.process.DefaultProcessEventListener.

The type of the event listener is determined by the interface (or super class) it implements.
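
As a minimal sketch of that approach (assuming org.kie.api.event.process.DefaultProcessEventListener from the KIE API is available), a Spring-registered listener that only reacts to process completion could look like this:

import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.ProcessCompletedEvent;
import org.springframework.stereotype.Component;

@Component
public class ProcessCompletionListener extends DefaultProcessEventListener {

    @Override
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        // only this callback is overridden; all other methods keep the empty default implementations
        System.out.println("Process completed " + event);
    }

}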

3.5.4.3. Register event listener programmatically

The last resort option is to get hold of the DeploymentService and register listeners programmatically:

@Autowired
private SpringKModuleDeploymentService deploymentService;

...

@PostConstruct
public void configure() {

    deploymentService.registerProcessEventListener(new CustomProcessEventListener());
}

3.5.5. Custom REST endpoints

In many (if not all) cases there will be a need to expose additional REST endpoints for your business application (in your service project). This can be easily achieved by creating a JAX-RS compatible class (with JAX-RS annotations). It will automatically be registered with the running service when the following scanning options are configured in your app's application.properties config file:

cxf.jaxrs.classes-scan=true
cxf.jaxrs.classes-scan-packages=org.kie.server.springboot.samples.rest

The endpoint will be bound to the global REST api path defined in the cxf.path property.

An example of a custom endpoint can be found below

package org.kie.server.springboot.samples.rest;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("extra")
public class AdditionalEndpoint {

    @GET
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response listContainers() {

        return Response.ok().build();
    }
}

3.6. Deploy business application

Business applications are designed to run in pretty much any environment, but for production the usual target is cloud-based runtimes that allow scalability and operational efficiency.

A business application's deployable components are composed of services. Every application can consist of one or more services that are deployed in isolation and in many cases follow different release cycles.

3.6.1. OpenShift deployment

Business applications can be easily deployed to OpenShift Container Platform. It’s as easy as starting the application locally, that is by using the launch.sh/bat scripts.

You need to have OpenShift installed (a good choice for local installation is minishift) or a remote installation that can be accessed over the network.

First of all, log in to the OpenShift cluster:

oc login -u system:admin

Once successfully logged in, the following output (or similar) should be displayed:

Logged into "https://192.168.64.2:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-system
  * myproject
    openshift
    openshift-infra
    openshift-node
    openshift-web-console

Using project "myproject".

To deploy your application to OpenShift Container Platform, go into the service project ({your business application name}-service) and invoke

./launch.sh clean install -Popenshift,h2 for Linux/Unix

./launch.bat clean install -Popenshift,h2 for Windows

The launch script will perform the build with the openshift profile (see the pom.xml in the business assets project and the service project for details). The significant difference for openshift is that the business assets project will generate an offline Maven repository containing the project itself and all its dependencies. This Maven repository is then included in the image itself, and Maven (used by the business automation capability) will work in offline mode - meaning no access to the internet will be attempted.

Launching the application on OpenShift...
--> Found image ef440f7 (15 seconds old) in image stream "myproject/business-application-service" under tag "1.0-SNAPSHOT" for "business-application-service:1.0-SNAPSHOT"

    * This image will be deployed in deployment config "business-application-service"
    * Ports 8090/tcp, 8778/tcp, 9779/tcp will be load balanced by service "business-application-service"
      * Other containers can access this service through the hostname "business-application-service"

--> Creating resources ...
    deploymentconfig "business-application-service" created
    service "business-application-service" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/business-application-service'
    Run 'oc status' to view your app.
route "business-application-service" exposed

You can then go to OpenShift Web Console and look at the Overview of your project (myproject by default)

Business application on OpenShift

By clicking on the route url (in this case http://business-application-service-myproject.192.168.64.2.nip.io) you can go to the application already deployed and running.

3.6.2. Docker deployment

Business applications are by default configured with the option to deploy the service as a docker container.

This is done in a very similar way to launching the service locally - via the launch.sh/bat script.

You must have Docker installed on your machine to make this work!

To deploy your application as a docker container, go into the service project ({your business application name}-service) and invoke

./launch.sh clean install -Pdocker,h2 for Linux/Unix

./launch.bat clean install -Pdocker,h2 for Windows

When building with docker, the proper database profile needs to be selected as well - this is done via -Pdocker,{db} so that the image and the application get the proper JDBC driver.

The launch script will perform the build with the docker profile (see the pom.xml in the business assets project and the service project for details). The significant difference for the docker container is that the business assets project will generate an offline Maven repository containing the project itself and all its dependencies. This Maven repository is then included in the docker image itself, and Maven (used by the business automation capability) will work in offline mode - meaning no access to the internet will be attempted.

Once the build is complete, the launch script will directly create the container and start it. The container has been started once the following lines are printed to the console:

Launching the application as docker container...
d40e4cdb662d3b1d9ddee27c5a843be31cb6e7dc4936b0fc1937ce8e48f440ae

The second line is the container id that can later be used to interact with the container, for instance to follow the logs:

docker logs -f d40e4cdb662d3b1d9ddee27c5a843be31cb6e7dc4936b0fc1937ce8e48f440ae

The business application will be accessible at the same port as configured by default, that is 8090; simply go to http://localhost:8090 to see your application running as a docker container.

3.6.3. Using external data base

Currently, a business application that requires an external database needs the database to be provided in advance - before the application is launched - and properly configured within the application configuration files.

Further releases will improve this by relying on docker compose/OpenShift templates.

3.7. Tutorials

3.7.1. My First Business Application

3.7.1.1. What will you do

You will build a simple but fully functional business application. Once you build it you will explore basic services exposed by the application.

3.7.1.2. What do you need
  • About 10 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • Access to the Internet

3.7.1.3. What should I do

To get started with business applications, the easiest way is to generate one. Go to start.jbpm.org and click the button Generate default business application.

This will generate and download a business-application.zip file that consists of three projects:

  • business-application-model

  • business-application-kjar

  • business-application-service

Unzip the business-application.zip file into the desired location and go into the business-application-service directory. There you will find launch scripts (for both linux/unix and windows).

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

Execute the one applicable to your operating system and wait for it to finish.

It might take quite some time (depending on your network) as it downloads a bunch of dependencies required to execute both the build and the application itself.
3.7.1.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090 to see your first business application up and running.

It presents a welcome screen that is mainly for verification purposes, to illustrate that the application started successfully.

You can point the browser to http://localhost:8090/rest/server to see the actual Business Automation capability services

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is user with password user.

The Business Automation service supports three data formats:

  • XML (JAXB based)

  • JSON

  • XML (XStream based)

To display the Business Automation capability service details in a different format, set HTTP headers (a Java-based alternative is sketched after this list):

  • Accept: application/json for JSON format

  • Accept: application/xml for XML (JAXB based) format

  • X-KIE-ContentType: XSTREAM for XML (XStream based) format
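
If you prefer to query these services from Java instead of the browser, the KIE Server client API can be used. The sketch below is an assumption-based example: it expects the kie-server-client artifact on the classpath and uses the default user/password from this tutorial.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.api.model.KieServerInfo;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class ServerInfoClient {

    public static void main(String[] args) {
        // connect to the business application with the default credentials from this tutorial
        KieServicesConfiguration config =
                KieServicesFactory.newRestConfiguration("http://localhost:8090/rest/server", "user", "user");
        config.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // same information as http://localhost:8090/rest/server, returned as a Java object
        ServiceResponse<KieServerInfo> serverInfo = client.getServerInfo();
        System.out.println(serverInfo.getResult().getVersion());
    }
}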

3.7.1.5. Summary

Congratulations! You have just built and started your first business application.

3.7.1.6. Source code of the tutorial

Here is the complete source code of the tutorial.

3.7.2. Business Application with Business Assets

3.7.2.1. What will you do

You will enhance your business application with some business assets

  • business process (BPMN2)

and execute these business assets

  • via REST api of your business application

  • via Business Central UI

3.7.2.2. What do you need
  • About 15 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • Access to the Internet

  • Business Central deployed - see single distribution for instructions

3.7.2.3. What should I do

If you haven’t done it already, complete tutorial My First Business Application.

Start Business Central (if not already started) and open your browser at http://localhost:8080/jbpm-console and logon as user wbadmin with password wbadmin

Import your business assets project into Business Central
  • Go into business assets project - business-application-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "my business assets project"

  • Log in to Business Central and go to projects

  • Select import project and enter following URL file:///{path to your business application}/business-application-kjar

  • Click import and confirm project to be imported

Create Business Process

In the browser where you logged into Business Central, go to Projects. You will see your newly imported project named business-application-kjar; go into that project.

  • go into business-application-kjar project

  • click Add asset button

  • select Business Process asset

  • provide name for this asset

  • create your business process

Sample business process could be a single user task that will be assigned to user wbadmin.

Business process - sample
Pull back your business assets to business application source code
  • Go to business-application-kjar

  • Execute git remote add origin ssh://wbadmin@localhost:8001/MySpace/business-application-kjar

  • Execute git pull origin master - when prompted enter wbadmin as password

Go to business-application-service directory and launch the application

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

3.7.2.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090.

Next, point the browser to http://localhost:8090/rest/server/containers to see that your business assets project has been properly deployed and is running.

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is wbadmin with password wbadmin.

Next, point the browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes to see business processes available for execution.

Execute business process

You can execute business process via REST api exposed by your business application (in fact by Business Automation capability).

Optionally HTTP headers can be set to change the format of data returned

  • Accept: application/json for JSON format

  • Accept: application/xml for XML (JAXB based) format

  • X-KIE-ContentType: XSTREAM for XML (XStream based) format

To start a process instance, send a POST request (with the optional headers above and basic authentication) to http://localhost:8090/rest/server/containers/business-application-kjar/processes/{processid}/instances, where {processid} needs to be replaced with the actual process id returned from the endpoint http://localhost:8090/rest/server/containers/business-application-kjar/processes

Remember that endpoints are protected so make sure you provide username and password when making the request.

In response to this request, a process instance id should be returned.

<long-type>
    <value>1</value>
</long-type>
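
As an alternative to raw HTTP requests, the same call can be made from Java with the KIE Server client API. The following is a minimal sketch (assuming the kie-server-client artifact is on the classpath; the {processid} placeholder must be replaced with your actual process id):

import java.util.HashMap;
import java.util.Map;

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class StartProcessClient {

    public static void main(String[] args) {
        KieServicesConfiguration config =
                KieServicesFactory.newRestConfiguration("http://localhost:8090/rest/server", "wbadmin", "wbadmin");
        config.setMarshallingFormat(MarshallingFormat.JSON);

        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
        ProcessServicesClient processClient = client.getServicesClient(ProcessServicesClient.class);

        // start the process deployed in the business-application-kjar container
        Map<String, Object> variables = new HashMap<>();
        Long processInstanceId =
                processClient.startProcess("business-application-kjar", "{processid}", variables);

        System.out.println("Started process instance " + processInstanceId);
    }
}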

You can examine details of that process instance by pointing your browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes/instances/1

<process-instance>
  <process-instance-id>1</process-instance-id>
  <process-id>business-application-kjar.process</process-id>
  <process-name>process</process-name>
  <process-version>1.0</process-version>
  <process-instance-state>1</process-instance-state>
  <container-id>business-application-kjar-1_0-SNAPSHOT</container-id>
  <initiator>wbadmin</initiator>
  <start-date>2018-09-14T11:39:39.622+02:00</start-date>
  <process-instance-desc>process</process-instance-desc>
  <correlation-key>1</correlation-key>
  <parent-instance-id>-1</parent-instance-id>
  <sla-compliance>0</sla-compliance>
  <active-user-tasks>
    <task-summary>
      <task-id>1</task-id>
      <task-name>Task</task-name>
      <task-description/>
      <task-status>Reserved</task-status>
      <task-priority>0</task-priority>
      <task-actual-owner>wbadmin</task-actual-owner>
      <task-created-by>wbadmin</task-created-by>
      <task-created-on>2018-09-14T11:39:39.661+02:00</task-created-on>
      <task-activation-time>2018-09-14T11:39:39.661+02:00</task-activation-time>
      <task-proc-inst-id>1</task-proc-inst-id>
      <task-proc-def-id>business-application-kjar.process</task-proc-def-id>
      <task-container-id>business-application-kjar-1_0-SNAPSHOT</task-container-id>
    </task-summary>
  </active-user-tasks>
</process-instance>
Execute business process from Business Central UI

Stop the application if it’s running.

Go to business-application-service directory and launch the application in development mode

./launch-dev.sh clean install for Linux/Unix

./launch-dev.bat clean install for Windows

This will connect your business application to Business Central so it can be administered from within its UI.

Go to Business Central in the browser and navigate to servers (from the home screen).

tutorial 2 empty server

As you can see, the business-application-service Dev is there and connected, although it does not have any kjars deployed. This is because it is now running in managed mode, meaning it is Business Central that decides which kjars it should run.

So let’s deploy the business-application-kjar to our running application.

  • Go to projects from home screen of Business Central

  • Go into business-application-kjar project

  • Click Deploy button

  • Make sure that Server configuration is set to business-application-service-dev and click ok

The project should be successfully deployed and you can examine that state by going back to servers from home screen.

Next, go to process definitions (in the Manage section of the Home screen) and select the server configuration (top right corner) - again it should be business-application-service-dev. The list of available process definitions will be loaded and you should see your single process definition from the project business-application-kjar.

tutorial 2 process def

Examine details of that process definition by clicking on the row in the table. Switch to Diagram tab to see the visual representation of your process definition.

Start a new instance of the business process by clicking the New instance button. This will bring up a form that (depending on your process definition) might or might not have any fields. Just click the Submit button to start the process instance.

Once started, the process instance details will be opened; you can examine the different sections to learn more about your active process instance

tutorial 2 process instance
  • Instance details - base information about process instance

  • Process variables - latest values for process variables

  • Documents - list of documents managed by the process

  • Logs - detailed logs about what has been done within the process instance

  • Diagram - annotated diagram with completed (greyed out) and active (red borders) nodes

To look at user tasks, go to the task inbox (in the Track section of the Home screen). The list of available tasks will be presented. This time there is no need to select the server configuration, because Business Central keeps track of the recently selected configuration across screens.

tutorial 2 task list
3.7.2.5. Summary

Congratulations! You have enhanced your business application to actually do something - execute business processes. At the same time you have created your first business process and successfully integrated your business application with Business Central.

3.7.2.6. Source code of the tutorial

Here is the complete source code of the tutorial.

3.7.3. Business Application with custom work item handlers and event listeners

3.7.3.1. What will you do

You will enhance your business application with business assets that execute custom business logic, and monitor execution via event listeners:

  • business process (BPMN2) with custom service task (aka work item)

  • develop work item handler for the custom service task

  • develop process event listener that will receive events from the jBPM engine

and execute these business assets

  • via REST api of your business application

  • via Business Central UI

3.7.3.2. What do you need
  • About 20 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • IDE of your choice

  • Access to the Internet

  • Business Central deployed - see single distribution for instructions

3.7.3.3. What should I do

If you haven’t done it already, complete tutorial Business Application with Business Assets.

If you would like to start directly with this tutorial, you can get the complete source of the Business Application with Business Assets tutorial from here.

Start Business Central (if not already started) and open your browser at http://localhost:8080/jbpm-console and logon as user wbadmin with password wbadmin

Import your business assets project into Business Central

If not already imported, proceed with the points below to import the business assets project:

  • Go into business assets project - business-application-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "my business assets project"

  • Log in to Business Central and go to projects

  • Select import project and enter following URL file:///{path to your business application}/business-application-kjar

  • Click import and confirm project to be imported

Create custom service task in Business Central
  • Go to Projects → business-application-kjar project

  • Click Add asset and select WorkItem Definition

  • Give it a name CustomTask

It should look like the following snippet

[
  [
    "name" : "MyTask",
    "parameters" : [
        "MyFirstParam" : new StringDataType(),
        "MySecondParam" : new StringDataType(),
        "MyThirdParam" : new ObjectDataType()
    ],
    "results" : [
        "Result" : new ObjectDataType("java.util.Map")
    ],
    "displayName" : "My Task",
    "icon" : ""
  ]
]
  • Save and close the editor

Create new process with service task (MyTask)
  • Click Add Asset button and select Business Process

  • Give it a name CustomTaskProcess

  • Open Service Tasks on the palette (cogs icon)

  • Drag and Drop the MyTask service task into the canvas

  • Connect it with start event and finish it with end event

It should look like this

tutorial 3 process with custom task
  • Save and close the editor

Implement custom work item handler
  • Import business-application-service project into IDE of your choice

  • Create new class MyTaskWorkItemHandler that implements org.kie.api.runtime.process.WorkItemHandler

  • Implement the executeWorkItem method by simply printing out the work item and then completing it

@Override
public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
    System.out.println("Work item being executed " + workItem);
    manager.completeWorkItem(workItem.getId(), null);
}
  • Annotate the class with the @Component annotation, using a name that matches the work item defined in Business Central

The complete handler class should look like this:

package com.company.service.handlers;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;
import org.springframework.stereotype.Component;

@Component("MyTask")
public class MyTaskWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        System.out.println("Work item being executed " + workItem);
        manager.completeWorkItem(workItem.getId(), null);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {

    }

}
Implement custom event listener

To be able to monitor the execution of our business assets, such as business processes, an event listener can be implemented. In this tutorial we focus on ProcessEventListener, but there are other types such as:

  • TaskLifeCycleEventListener

  • CaseEventListener

  • RuleRuntimeEventListener

  • AgendaEventListener

Go back to the IDE where the business-application-service project is imported:

  • Create class MyProcessEventListener that implements org.kie.api.event.process.ProcessEventListener

  • Implement methods with simple print outs

  • Annotate the class with @Component - in this case the name is not relevant

The complete event listener class should look like this:

package com.company.service.listeners;

import org.kie.api.event.process.ProcessCompletedEvent;
import org.kie.api.event.process.ProcessEventListener;
import org.kie.api.event.process.ProcessNodeLeftEvent;
import org.kie.api.event.process.ProcessNodeTriggeredEvent;
import org.kie.api.event.process.ProcessStartedEvent;
import org.kie.api.event.process.ProcessVariableChangedEvent;
import org.springframework.stereotype.Component;

@Component
public class MyProcessEventListener implements ProcessEventListener {

    @Override
    public void beforeProcessStarted(ProcessStartedEvent event) {
        System.out.println("beforeProcessStarted " + event);
    }

    @Override
    public void afterProcessStarted(ProcessStartedEvent event) {
        System.out.println("afterProcessStarted " + event);
    }

    @Override
    public void beforeProcessCompleted(ProcessCompletedEvent event) {
        System.out.println("beforeProcessCompleted " + event);
    }

    @Override
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        System.out.println("afterProcessCompleted " + event);
    }

    @Override
    public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
        System.out.println("beforeNodeTriggered " + event);
    }

    @Override
    public void afterNodeTriggered(ProcessNodeTriggeredEvent event) {
        System.out.println("afterNodeTriggered " + event);
    }

    @Override
    public void beforeNodeLeft(ProcessNodeLeftEvent event) {
        System.out.println("beforeNodeLeft " + event);
    }

    @Override
    public void afterNodeLeft(ProcessNodeLeftEvent event) {
        System.out.println("afterNodeLeft " + event);
    }

    @Override
    public void beforeVariableChanged(ProcessVariableChangedEvent event) {
        System.out.println("beforeVariableChanged " + event);
    }

    @Override
    public void afterVariableChanged(ProcessVariableChangedEvent event) {
        System.out.println("afterVariableChanged " + event);
    }

}
Run the application

At this point all development effort is done; the last remaining thing is to pull the business assets back into the business-application-kjar project:

  • Go to business-application-kjar

  • Execute git remote add origin ssh://wbadmin@localhost:8001/MySpace/business-application-kjar (if not already added)

  • Execute git pull origin master - when prompted enter wbadmin as password

Go to business-application-service directory and launch the application

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

3.7.3.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090.

Next, point the browser to http://localhost:8090/rest/server/containers to see that your business assets project has been properly deployed and is running.

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is wbadmin with password wbadmin.

Next, point the browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes to see business processes available for execution. You should see two of them.

Execute business process

You can execute business process via REST api exposed by your business application (in fact by Business Automation capability).

Optionally HTTP headers can be set to change the format of data returned

  • Accept: application/json for JSON format

  • Accept: application/xml for XML (JAXB based) format

  • X-KIE-ContentType: XSTREAM for XML (XStream based) format

To start a process instance, send a POST request (with the optional headers above and basic authentication) to http://localhost:8090/rest/server/containers/business-application-kjar/processes/{processid}/instances, where {processid} needs to be replaced with the actual process id returned from the endpoint http://localhost:8090/rest/server/containers/business-application-kjar/processes

Remember that endpoints are protected so make sure you provide username and password when making the request.

In response to this request, a process instance id should be returned.

<long-type>
    <value>1</value>
</long-type>

You can examine details of that process instance by pointing your browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes/instances/1

<process-instance>
  <process-instance-id>1</process-instance-id>
  <process-id>business-application-kjar.CustomTaskProcess</process-id>
  <process-name>CustomTaskProcess</process-name>
  <process-version>1.0</process-version>
  <process-instance-state>2</process-instance-state>
  <container-id>business-application-kjar-1_0-SNAPSHOT</container-id>
  <initiator>wbadmin</initiator>
  <start-date>2018-10-11T13:29:55.807+02:00</start-date>
  <process-instance-desc>CustomTaskProcess</process-instance-desc>
  <correlation-key>1</correlation-key>
  <parent-instance-id>-1</parent-instance-id>
  <sla-compliance>0</sla-compliance>
</process-instance>

Looking at the application logs (console), you should see that both the handler was executed and the event listener was notified about the various events:

beforeVariableChanged ==>[ProcessVariableChanged(id=initiator; instanceId=initiator; oldValue=null; newValue=wbadmin; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterVariableChanged ==>[ProcessVariableChanged(id=initiator; instanceId=initiator; oldValue=null; newValue=wbadmin; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
beforeProcessStarted ==>[ProcessStarted(name=CustomTaskProcess; id=business-application-kjar.CustomTaskProcess)]
beforeNodeTriggered ==>[ProcessNodeTriggered(nodeId=3; id=0; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
beforeNodeLeft ==>[ProcessNodeLeft(nodeId=3; id=0; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
beforeNodeTriggered ==>[ProcessNodeTriggered(nodeId=1; id=1; nodeName=My Task; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]

Work item being executed WorkItem 1 [name=MyTask, state=0, processInstanceId=1, parameters{}]

beforeNodeLeft ==>[ProcessNodeLeft(nodeId=1; id=1; nodeName=My Task; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
beforeNodeTriggered ==>[ProcessNodeTriggered(nodeId=2; id=2; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
beforeNodeLeft ==>[ProcessNodeLeft(nodeId=2; id=2; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
beforeProcessCompleted ==>[ProcessCompleted(name=CustomTaskProcess; id=business-application-kjar.CustomTaskProcess)]
afterProcessCompleted ==>[ProcessCompleted(name=CustomTaskProcess; id=business-application-kjar.CustomTaskProcess)]
afterNodeLeft ==>[ProcessNodeLeft(nodeId=2; id=2; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterNodeTriggered ==>[ProcessNodeTriggered(nodeId=2; id=2; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterNodeLeft ==>[ProcessNodeLeft(nodeId=1; id=1; nodeName=My Task; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterNodeTriggered ==>[ProcessNodeTriggered(nodeId=1; id=1; nodeName=My Task; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterNodeLeft ==>[ProcessNodeLeft(nodeId=3; id=0; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterNodeTriggered ==>[ProcessNodeTriggered(nodeId=3; id=0; nodeName=null; processName=CustomTaskProcess; processId=business-application-kjar.CustomTaskProcess)]
afterProcessStarted ==>[ProcessStarted(name=CustomTaskProcess; id=business-application-kjar.CustomTaskProcess)]
3.7.3.5. Summary

Congratulations! You have enhanced your business application to take advantage of custom service tasks, and you have learned how to keep an eye on what is actually being executed by your business application. With this knowledge you can start building more advanced service tasks that integrate your application with the outside world.

3.7.3.6. Source code of the tutorial

Here is the complete source code of the tutorial.

3.7.4. Business Application with JPA entity

3.7.4.1. What will you do

You will enhance your business application with a JPA entity that will be used both by your business application service and by your business assets:

  • develop JPA entity as part of your business-application-model project

  • business process (BPMN2) with user task that will display JPA entity

and execute these business assets

  • via REST api of your business application

  • via Business Central UI

3.7.4.2. What do you need
  • About 20 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • IDE of your choice

  • Access to the Internet

  • Business Central deployed - see single distribution for instructions

3.7.4.3. What should I do

If you haven’t done it already, complete tutorial Business Application with Business Assets.

If you would like to start directly with this tutorial, you can get the complete source of the Business Application with Business Assets tutorial from here.

Start Business Central (if not already started) and open your browser at http://localhost:8080/jbpm-console and logon as user wbadmin with password wbadmin

Import your business assets project into Business Central

If not already imported, proceed with the points below to import the business assets project:

  • Go into business assets project - business-application-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "my business assets project"

  • Log in to Business Central and go to projects

  • Select import project and enter following URL file:///{path to your business application}/business-application-kjar

  • Click import and confirm project to be imported

Implement JPA entity
  • Import business-application-model project into IDE of your choice

  • Add to the pom.xml of the model project a dependency on the JPA api (in provided scope)

<dependencies>
  <dependency>
    <groupId>org.hibernate.javax.persistence</groupId>
    <artifactId>hibernate-jpa-2.1-api</artifactId>
    <version>1.0.0.Final</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
  • Implement class as JPA Entity Person

  • Create three fields in the class

    • id (of type Long)

    • firstName (of type String)

    • lastName (of type String)

  • Annotate the class with @Entity

  • Annotate the id field with @Id and @GeneratedValue(strategy = GenerationType.AUTO)

The complete entity class should look like this:

package com.company.model;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String firstName;

    private String lastName;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    @Override
    public String toString() {
        return "Person [id=" + id + ", firstName=" + firstName + ", lastName=" + lastName + "]";
    }

}
Configure service project to use the JPA entity
  • Import business-application-service project into IDE of your choice

  • Add dependency to the business-application-model in your service pom.xml

<dependency>
  <groupId>com.company</groupId>
  <artifactId>business-application-model</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
  • Edit application.properties file (that is located in src/main/resources)

  • Add spring.jpa.properties.entity-scan-packages=com.company.model into the file

Adjust the package if you did not use the default com.company.model package
  • Add the same entry into application-dev.properties file

Create new process that use JPA entity
  • Log in to Business Central

  • Go to Projects → business-application-kjar project

  • Go to Settings tab

  • Go to Dependencies

  • Add dependency to business-application-model - make sure it is in provided scope

  • Go to Deployment → Marshalling strategy

  • Add new marshalling strategy with following value new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy(entityManagerFactory)

  • Go back to assets

  • Click Add Asset button and select Business Process

  • Give it a name JPAProcess

  • Open Tasks on the palette

  • Drag and Drop the User Task into the canvas

  • Connect it with start event and finish it with end event

  • Create variable named person with type (custom) com.company.model.Person

It should look like this

tutorial 4 process with jpa user task
  • Map the variable as input and output of user task - use same name for input and output variable

tutorial 4 process with jpa user task vars
  • Save and close the editor

Run the application

At this point all development effort is done; the last remaining thing is to pull the business assets back into the business-application-kjar project:

  • Go to business-application-kjar

  • Execute git remote add origin ssh://wbadmin@localhost:8001/MySpace/business-application-kjar (if not already added)

  • Execute git pull origin master - when prompted enter wbadmin as password

Go to business-application-service directory and launch the application

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

3.7.4.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090.

Next, point the browser to http://localhost:8090/rest/server/containers to see that your business assets project has been properly deployed and is running.

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is wbadmin with password wbadmin.

Next, point the browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes to see business processes available for execution. You should see two of them.

Execute business process

You can execute business process via REST api exposed by your business application (in fact by Business Automation capability).

HTTP method: POST

HTTP headers:

  • Accept: application/json

  • Content-Type: application/json

Body:

{
  "person" : {
    "Person" : {
      "firstName":"WB",
      "lastName":"Admin"
    }
  }
}

Send this request to http://localhost:8090/rest/server/containers/business-application-kjar/processes/{processid}/instances, where {processid} needs to be replaced with the actual process id returned from the endpoint http://localhost:8090/rest/server/containers/business-application-kjar/processes

Remember that endpoints are protected so make sure you provide username and password when making the request.

In response to this request, a process instance id should be returned.

1

You can examine details of that process instance by pointing your browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes/instances/1?withVars=true

<process-instance>
  <process-instance-id>1</process-instance-id>
  <process-id>business-application-kjar.JPAProcess</process-id>
  <process-name>JPAProcess</process-name>
  <process-version>1.0</process-version>
  <process-instance-state>1</process-instance-state>
  <container-id>business-application-kjar-1_0-SNAPSHOT</container-id>
  <initiator>wbadmin</initiator>
  <start-date>2018-10-11T14:42:23.053+02:00</start-date>
  <process-instance-desc>JPAProcess</process-instance-desc>
  <correlation-key>1</correlation-key>
  <parent-instance-id>-1</parent-instance-id>
  <sla-compliance>0</sla-compliance>
  <active-user-tasks>
    <task-summary>
      <task-id>1</task-id>
      <task-name>Task</task-name>
      <task-description/>
      <task-status>Reserved</task-status>
      <task-priority>0</task-priority>
      <task-actual-owner>wbadmin</task-actual-owner>
      <task-created-by>wbadmin</task-created-by>
      <task-created-on>2018-10-11T14:42:23.058+02:00</task-created-on>
      <task-activation-time>2018-10-11T14:42:23.058+02:00</task-activation-time>
      <task-proc-inst-id>2</task-proc-inst-id>
      <task-proc-def-id>business-application-kjar.JPAProcess</task-proc-def-id>
      <task-container-id>business-application-kjar-1_0-SNAPSHOT</task-container-id>
    </task-summary>
  </active-user-tasks>
  <variables>
    <entry>
      <key>person</key>
      <value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="person">
        <firstName>WB</firstName>
        <id>1</id>
        <lastName>Admin</lastName>
      </value>
    </entry>
    <entry>
      <key>initiator</key>
      <value xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">wbadmin</value>
    </entry>
  </variables>
</process-instance>

This illustrates that an instance has been created, that it has one user task assigned (the owner is wbadmin), and that it has two process variables:

  • initiator - set to the user who initiated the request

  • person - our JPA entity that was created based on the payload - note that the id was generated automatically by the database

You can also examine the user task by opening the following URL in your browser: http://localhost:8090/rest/server/containers/business-application-kjar/tasks/1?withInputData=true

<task-instance>
  <task-id>1</task-id>
  <task-priority>0</task-priority>
  <task-name>Task</task-name>
  <task-subject/>
  <task-description/>
  <task-form>Task</task-form>
  <task-status>Reserved</task-status>
  <task-actual-owner>wbadmin</task-actual-owner>
  <task-created-by>wbadmin</task-created-by>
  <task-created-on>2018-10-11T14:42:23.058+02:00</task-created-on>
  <task-activation-time>2018-10-11T14:42:23.058+02:00</task-activation-time>
  <task-skippable>false</task-skippable>
  <task-workitem-id>1</task-workitem-id>
  <task-process-instance-id>1</task-process-instance-id>
  <task-parent-id>-1</task-parent-id>
  <task-process-id>business-application-kjar.JPAProcess</task-process-id>
  <task-container-id>business-application-kjar-1_0-SNAPSHOT</task-container-id>
  <inputData>
    <entry>
      <key>TaskName</key>
      <value xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">Task</value>
    </entry>
    <entry>
      <key>NodeName</key>
      <value xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">Task</value>
    </entry>
    <entry>
      <key>person</key>
      <value xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="person">
        <firstName>WB</firstName>
        <id>1</id>
        <lastName>Admin</lastName>
      </value>
    </entry>
    <entry>
      <key>Skippable</key>
      <value xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">false</value>
    </entry>
    <entry>
      <key>ActorId</key>
      <value xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">wbadmin</value>
    </entry>
  </inputData>
</task-instance>

The same Person JPA entity is available on the task assigned to wbadmin.

3.7.4.5. Summary

Congratulations! You have enhanced your business application to take advantage of a JPA entity as a shared model between your business assets and service projects. With the power of business automation and JPA, you learned how to externalise data managed by automated business processes.

3.7.4.6. Source code of the tutorial

Here is the complete source code of the tutorial.

3.7.5. Business Application with ElasticSearch

3.7.5.1. What will you do

You will build a business application that pushes information about your business automation (processes, cases, tasks) directly to an ElasticSearch server. You can then use the ElasticSearch REST api to perform advanced queries on top of your business data.

3.7.5.2. What do you need
  • About 20 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • IDE of your choice

  • Access to the Internet

  • Business Central deployed - see single distribution for instructions

3.7.5.3. What should I do
Install ElasticSearch

To get up and running quickly with ElasticSearch, make use of the docker images provided by ElasticSearch:

docker pull docker.elastic.co/elasticsearch/elasticsearch:6.4.2

Once pulled, start it with basic settings recommended for development and test.

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.4.2

Wait a bit and your ElasticSearch will be up and running. To verify that it is working as expected, open your browser at http://localhost:9200 and you should see content similar to the following:

{
  "name" : "IKXT4Z_",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "G7q7D2zgQy6JzLZBCzbtTQ",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
when prompted for username and password use elastic/changeme
Build business application

To get started with business applications, the easiest way is to generate one. Go to start.jbpm.org and click the button Generate default business application.

This will generate and download a business-application.zip file that consists of three projects:

  • business-application-model

  • business-application-kjar

  • business-application-service

Unzip the business-application.zip file into the desired location and go into the business-application-service directory. There you will find launch scripts (for both linux/unix and windows).

Start Business Central (if not already started) and open your browser at http://localhost:8080/jbpm-console and logon as user wbadmin with password wbadmin

Import your business assets project into Business Central

If not already imported, proceed with the points below to import the business assets project:

  • Go into business assets project - business-application-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "my business assets project"

  • Log in to Business Central and go to projects

  • Select import project and enter following URL file:///{path to your business application}/business-application-kjar

  • Click import and confirm project to be imported

Create Business Process

In the browser where you logged into Business Central, go to Projects. You will see your newly imported project named business-application-kjar; go into that project.

  • go into business-application-kjar project

  • click Add asset button

  • select Business Process asset

  • provide name for this asset

  • create your business process

Sample business process could be a single user task that will be assigned to user wbadmin.

Business process - sample
Configure service project to use the ElasticSearch
  • Import business-application-service project into IDE of your choice

  • Add a dependency on jbpm-event-emitters-elasticsearch in your service pom.xml

<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-event-emitters-elasticsearch</artifactId>
  <version>${version.org.kie}</version>
</dependency>

There are several configuration parameters that define how the business application will connect to the ElasticSearch server:

  • jbpm.addons.event.emitters.elasticsearch.url - location of the ElasticSearch server, defaults to http://localhost:9200

  • jbpm.addons.event.emitters.elasticsearch.date_format - date format to be used for dates defaults to yyyy-MM-dd’T’hh:mm:ss.SSSZ

  • jbpm.addons.event.emitters.elasticsearch.user - optional user name to be used to authenticate in ElasticSearch server

  • jbpm.addons.event.emitters.elasticsearch.password - optional password to be used to authenticate in ElasticSearch server

If the defaults fit your ElasticSearch setup then you don’t need to set any properties in application.properties.

For the default setup used in this tutorial, the user and password need to be set:

  • Edit application.properties file (that is located in src/main/resources)

  • Add jbpm.addons.event.emitters.elasticsearch.user=elastic into the file

  • Add jbpm.addons.event.emitters.elasticsearch.password=changeme into the file

Add the same entries to the application-dev.properties file.
Run the application

At this point all development effort is done; the last remaining thing is to pull the business assets authored in Business Central back into the local business-application-kjar project:

  • Go to business-application-kjar

  • Execute git remote add origin ssh://wbadmin@localhost:8001/MySpace/business-application-kjar (if not already added)

  • Execute git pull origin master - when prompted enter wbadmin as password

Go to the business-application-service directory and launch the application:

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

3.7.5.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090 to see your business application up and running.

It presents a welcome screen that is mainly for verification purposes, to confirm that the application started successfully.

You can point the browser to http://localhost:8090/rest/server to see the actual Business Automation capability services

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is wbadmin with password wbadmin.

Next, point the browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes to see business processes available for execution. You should see just one.
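The same list can also be retrieved from the command line, for example (the -u option passes the wbadmin credentials as HTTP basic authentication):

curl -u wbadmin:wbadmin -H "Accept: application/json" http://localhost:8090/rest/server/containers/business-application-kjar/processes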

Execute business process

You can execute the business process via the REST API exposed by your business application (in fact, by the Business Automation capability).

HTTP method: POST

HTTP headers:

  • Accept: application/json

  • Content-Type: application/json

Body:

{
  "name":"wbadmin",
  "age":25
}

{processid} needs to be replaced with the actual process id that is returned from the endpoint http://localhost:8090/rest/server/containers/business-application-kjar/processes

Remember that endpoints are protected so make sure you provide username and password when making the request.
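Putting the pieces together, the request could look like the one below. This is only a sketch: it assumes the standard KIE Server endpoint for starting process instances and uses usertaskprocess as the process id, so substitute your own process id if it differs.

curl -u wbadmin:wbadmin -X POST \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -d '{"name":"wbadmin","age":25}' \
  http://localhost:8090/rest/server/containers/business-application-kjar/processes/usertaskprocess/instances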

Once executed, you can verify the integration with ElasticSearch simply by pointing your browser to http://localhost:9200/processes/_search?pretty=true; the result should be similar to the following:

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "processes",
        "_type" : "process",
        "_id" : "business-application-service-dev_1",
        "_score" : 1.0,
        "_source" : {
          "compositeId" : "business-application-service-dev_1",
          "id" : 1,
          "processId" : "usertaskprocess",
          "processName" : "usertaskprocess",
          "processVersion" : "1.0",
          "state" : 1,
          "containerId" : "business-application-kjar_1.0-SNAPSHOT",
          "initiator" : "wbadmin",
          "date" : "2018-10-25T02:41:55.205+0200",
          "processInstanceDescription" : "usertaskprocess",
          "correlationKey" : "1",
          "parentId" : -1,
          "variables" : {
            "initiator" : "wbadmin",
            "name" : "wbadmin",
            "age" : 25
          }
        }
      }
    ]
  }
}

To see user tasks stored in ElasticSearch, point your browser to http://localhost:9200/tasks/_search?pretty=true:

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "tasks",
        "_type" : "task",
        "_id" : "business-application-service-dev_1",
        "_score" : 1.0,
        "_source" : {
          "compositeId" : "business-application-service-dev_1",
          "id" : 1,
          "priority" : 8,
          "name" : "Complete me",
          "subject" : "TaskSubject",
          "description" : "Here is a task for wbadmin",
          "taskType" : null,
          "formName" : "CompleteMe",
          "status" : "Reserved",
          "actualOwner" : "wbadmin",
          "createdBy" : "wbadmin",
          "createdOn" : "2018-10-25T02:41:54.942+0200",
          "activationTime" : "2018-10-25T02:41:54.942+0200",
          "expirationDate" : null,
          "skipable" : false,
          "workItemId" : 1,
          "processInstanceId" : 1,
          "parentId" : -1,
          "processId" : "usertaskprocess",
          "containerId" : "business-application-kjar_1.0-SNAPSHOT",
          "potentialOwners" : [
            "wbadmin"
          ],
          "excludedOwners" : [ ],
          "businessAdmins" : [
            "Administrator",
            "Administrators"
          ],
          "inputData" : {
            "Comment" : "TaskSubject",
            "Description" : "Here is a task for wbadmin",
            "TaskName" : "CompleteMe",
            "NodeName" : "Complete me",
            "Priority" : "8",
            "name" : "wbadmin",
            "Skippable" : "false",
            "ActorId" : "wbadmin",
            "age" : 25
          },
          "outputData" : null
        }
      }
    ]
  }
}

When you complete a task or abort a process instance, the data in ElasticSearch is updated immediately, for example:

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "processes",
        "_type" : "process",
        "_id" : "business-application-service-dev_2",
        "_score" : 1.0,
        "_source" : {
          "compositeId" : "business-application-service-dev_2",
          "id" : 2,
          "processId" : "usertaskprocess",
          "processName" : "usertaskprocess",
          "processVersion" : "1.0",
          "state" : 3,
          "containerId" : "business-application-kjar_1.0-SNAPSHOT",
          "initiator" : "wbadmin",
          "date" : "2018-10-25T03:01:02.557+0200",
          "processInstanceDescription" : "usertaskprocess",
          "correlationKey" : "2",
          "parentId" : -1,
          "variables" : {
            "initiator" : "wbadmin",
            "name" : "bartek",
            "age" : 5
          }
        }
      }
    ]
  }
}
3.7.5.5. Summary

Congratulations! You have integrated your business application with ElasticSearch. Now you can take advantage of all the good things ElasticSearch provides, such as full-text search by process variables, task assignees, case participants and more.

3.7.5.6. Source code of the tutorial

Here is the complete source code of the tutorial.

3.7.6. Business Application with JMS

3.7.6.1. What will you do

You will build a business application that uses JMS to send information between your business processes. It combines process logic and messaging to provide a comprehensive solution to common problems such as how to notify other participants of a particular event.

3.7.6.2. What do you need
  • About 20 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • IDE of your choice

  • Access to the Internet

  • Business Central deployed - see single distribution for instructions

3.7.6.3. What should I do
Install Apache Artemis

Download and unzip the Apache Artemis distribution. The location where you unzip it is referred to as ${ARTEMIS_HOME}.

Once downloaded, navigate to the location where you want to store your broker data and create a new broker:

${ARTEMIS_HOME}/bin/artemis create business-app-broker

You will be prompted for some required information during creation; it should look like this:

Creating ActiveMQ Artemis instance at: /.../business-app-broker

--user: is a mandatory property!
Please provide the default username:
admin

--password: is mandatory with this configuration:
Please provide the default password:


--allow-anonymous | --require-login: is a mandatory property!
Allow anonymous access?, valid values are Y,N,True,False
Y

Next, start the broker instance: go to business-app-broker/bin and issue the following command:

./artemis run

Open your browser at http://localhost:8161/console to log on to the management console of Apache Artemis with the username and password provided at the time you created the broker.

For more detailed instructions on how to configure Apache Artemis, visit its website.

The last step in configuring the JMS service is to create a queue (or an address, as it is called in Apache Artemis).

Once logged into the Management Console:

  • Go to Artemis in the menu

  • Expand the tree view and click addresses

  • On right hand side click Create

  • Create new address with name ExternalSignalQueue

  • Select Anycast

That completes installing and configuring Apache Artemis for this tutorial.

Build business application

To get started with business applications, the easiest way is to generate one. Go to start.jbpm.org and click the Generate default business application button.

This will generate and download a business-application.zip file that consists of three projects:

  • business-application-model

  • business-application-kjar

  • business-application-service

Unzip the business-application.zip file into a desired location and go into the business-application-service directory. There you will find launch scripts (for both Linux/Unix and Windows).

Start Business Central (if not already started), open your browser at http://localhost:8080/jbpm-console and log on as user wbadmin with password wbadmin.

Import your business assets project into Business Central

If not already imported, proceed with the steps below to import the business assets project:

  • Go into business assets project - business-application-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "my business assets project"

  • Log in to Business Central and go to projects

  • Select import project and enter the following URL: file:///{path to your business application}/business-application-kjar

  • Click import and confirm project to be imported

Create Business Processes

In the browser where you logged into Business Central, go to Projects. You will see your newly imported project named business-application-kjar; go into that project.

  • go into business-application-kjar project

  • click Add asset button

  • select Business Process asset

  • provide name for this asset (throwsignalprocess)

  • create your business process

The sample business process should be a single script task and an end signal event. The signal event should use external scope and define a signal IamDone.

Business process - sample

The process should define a single process variable called input that is then mapped as the data output of the end event.

Business process - sample

Next create another business process that will receive that signal.

  • go into business-application-kjar project

  • click Add asset button

  • select Business Process asset

  • provide name for this asset (catchsignalprocess)

  • create your business process

The sample business process should be a signal catch event and a single user task assigned to wbadmin. The catch signal event should use the same signal as the throwing one, that is IamDone.

Business process - sample

The process should define a single process variable called data that is then mapped as the data input of the catch event.

Business process - sample
Configure service project to use the Apache Artemis
  • Import business-application-service project into IDE of your choice

  • Add dependency to the spring-boot-starter-artemis in your service pom.xml

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-artemis</artifactId>
</dependency>
  • Add dependency to the jbpm-workitems-jms in your service pom.xml

<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-workitems-jms</artifactId>
  <version>${version.org.kie}</version>
</dependency>

There are several configuration parameters that define how the business application will connect to Apache Artemis:

  • Edit application.properties file (that is located in src/main/resources)

spring.artemis.mode=native
spring.artemis.host=localhost
spring.artemis.port=61616
spring.artemis.user=admin
spring.artemis.password=admin
In this configuration, use the user credentials you provided when creating the broker.
Add the same entries to the application-dev.properties file.
Develop JMS components of your Business Application

First of all, you need to enable JMS at the service level.

  • Open Application class (located in src/main/java/com/company/service directory)

  • Add @EnableJms on the class level (next to @SpringBootApplication)

Then create a new class that will be responsible for sending signals over JMS. This is a really small extension to the out-of-the-box JMS work item handler: ConfiguredJMSSendTaskWorkItemHandler needs to extend org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler, which is where most of the logic comes from.

This class needs to autowire

  • ConnectionFactory - used to connect to Apache Artemis

  • JmsTemplate - used to send messages

Override the executeWorkItem method to take advantage of JmsTemplate instead of the direct JMS API.

Last but not least, annotate the class with the @Component annotation so it will be automatically registered as a work item handler. Below is the complete source code of the handler implementation.

package com.company.service.jms;

import javax.jms.ConnectionFactory;

import org.jbpm.process.workitem.jms.JMSSendTaskWorkItemHandler;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemManager;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component("External Send Task")
public class ConfiguredJMSSendTaskWorkItemHandler extends JMSSendTaskWorkItemHandler {

    private JmsTemplate jmsTemplate;

    public ConfiguredJMSSendTaskWorkItemHandler(ConnectionFactory connectionFactory, JmsTemplate jmsTemplate) {
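        // the connection factory goes to the base handler (no fixed destination); the Spring-injected JmsTemplate is used for sending below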
        super(connectionFactory, null);
        this.jmsTemplate = jmsTemplate;
    }

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        try {
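            // send the work item content as a JMS message to the queue the receiver listens on, then complete the work item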
            jmsTemplate.send("ExternalSignalQueue", (session) -> createMessage(workItem, session));
            manager.completeWorkItem(workItem.getId(), null);
        } catch (Exception e) {
            handleException(e);
        }
    }
}

The last development activity is to create the message receiver. This is even easier than the sender, as there is an out-of-the-box receiver from jBPM - org.jbpm.process.workitem.jms.JMSSignalReceiver:

package com.company.service.jms;

import javax.jms.BytesMessage;

import org.jbpm.process.workitem.jms.JMSSignalReceiver;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ReceiveJMSEvents extends JMSSignalReceiver {

    @JmsListener(destination = "ExternalSignalQueue")
    public void processMessage(BytesMessage content) {
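        // delegate to the out-of-the-box jBPM receiver, which reads the message and signals the jBPM engine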
        super.onMessage(content);
    }

}

And that’s it, you’re all set to communicate between business processes via JMS.

Run the application

At this point all development effort is done; the last remaining thing is to pull the business assets authored in Business Central back into the local business-application-kjar project:

  • Go to business-application-kjar

  • Execute git remote add origin ssh://wbadmin@localhost:8001/MySpace/business-application-kjar (if not already added)

  • Execute git pull origin master - when prompted enter wbadmin as password

Go to the business-application-service directory and launch the application:

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

3.7.6.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090 to see your business application up and running.

It presents a welcome screen that is mainly for verification purposes, to confirm that the application started successfully.

You can point the browser to http://localhost:8090/rest/server to see the actual Business Automation capability services

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is wbadmin with password wbadmin.

Next, point the browser to http://localhost:8090/rest/server/containers/business-application-kjar/processes to see business processes available for execution. You should see two processes:

  • catchsignalprocess

  • throwsignalprocess

Execute business process

You can execute the business processes via the REST API exposed by your business application (in fact, by the Business Automation capability).

First, start the process instance that will wait for a signal:

HTTP method: POST

HTTP headers:

  • Accept: application/json

  • Content-Type: application/json

And then start the process instance that will throw (send) the signal via JMS:

HTTP method: POST

HTTP headers:

  • Accept: application/json

  • Content-Type: application/json

Body:

{
  "input":"hello"
}
Remember that endpoints are protected so make sure you provide username and password when making the request.
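For reference, the two requests could look like the following. This is only a sketch that assumes the standard KIE Server endpoint for starting process instances and the process ids used in this tutorial.

curl -u wbadmin:wbadmin -X POST \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -d '{}' \
  http://localhost:8090/rest/server/containers/business-application-kjar/processes/catchsignalprocess/instances

curl -u wbadmin:wbadmin -X POST \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -d '{"input":"hello"}' \
  http://localhost:8090/rest/server/containers/business-application-kjar/processes/throwsignalprocess/instances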

Verify that there is a user task assigned to the wbadmin user with the information coming from the second process instance - hello.

Execute business process from Business Central UI

Stop the application if it’s running.

Go to the business-application-service directory and launch the application in development mode:

./launch-dev.sh clean install for Linux/Unix

./launch-dev.bat clean install for Windows

This will connect your business application to Business Central so it can be administered from within its UI.

Go to Business Central in the browser and navigate to servers (from the home screen).

Let’s deploy the business-application-kjar to our running application.

  • Go to projects from home screen of Business Central

  • Go into business-application-kjar project

  • Click Deploy button

  • Make sure that Server configuration is set to business-application-service-dev and click ok

The project should be successfully deployed and you can examine that state by going back to servers from home screen.

Next, go to process definitions (in the Manage section of the Home screen) and select the server configuration (top right corner) - again it should be business-application-service-dev. The list of available process definitions will be loaded and you should see the process definitions from the business-application-kjar project.

First start a process instance that will wait for a signal (catchsignalprocess), then start a process instance that will throw (send) the signal via JMS (throwsignalprocess). When starting the second process, specify the input you want to send together with the signal.

Go to Task Inbox from the home screen to see that a task was created with the input provided to the second process instance.

3.7.6.5. Summary

Congratulations! You have integrated your business application with JMS. Moreover, you made business processes talk to each other (over signals). This allows you to build more advanced interactions based on your business logic.

3.7.6.6. Source code of the tutorial

Here is the complete source code of the tutorial.

3.7.7. Business Application with Dynamic Assets

3.7.7.1. What will you do

You will enhance your business application with some dynamic assets that allow a more adaptive approach to business logic compared with structured business processes.

Next, you will execute these dynamic assets:

  • via the REST API of your business application

  • via jBPM Case Management showcase

3.7.7.2. What do you need
  • About 15 minutes of your time

  • Java (JDK) 8 or later

  • Maven 3.5.x

  • Access to the Internet

  • Business Central deployed - see single distribution for instructions

3.7.7.3. What should I do

To get started with business applications, the easiest way is to generate one. Go to start.jbpm.org and click the Configure your business application button.

Business process - sample
  • First step: Select Business Automation (selected by default)

  • Second step: Provide details for your business application

  • Third step: Select Dynamic Assets, Data Model and Service projects

  • Click Generate business application button

Start Business Central (if not already started), open your browser at http://localhost:8080/jbpm-console and log on as user wbadmin with password wbadmin.

Import your business assets project into Business Central
  • Go into business assets project - business-application-kjar

  • Execute git init

  • Execute git add -A

  • Execute git commit -m "my business assets project"

  • Log in to Business Central and go to projects

  • Select import project and enter the following URL: file:///{path to your business application}/business-application-kjar

  • Click import and confirm project to be imported

Create Dynamic Asset - Case definition

In the browser where you logged into Business Central, go to Projects. You will see your newly imported project named business-application-kjar.

  • go into business-application-kjar project

  • click Add asset button

  • select Case definition asset

  • provide a name for this asset, e.g. myfirstcase

  • optionally you can provide a prefix for case ids - if not given it will default to CASE-XXX where XXX is a generated number

  • create your case definition

The case definition is designed in the so-called legacy process designer.

You can now create your dynamic case definition, which does not have to have connected process activities.

A sample case definition could be two user tasks that will be assigned to user wbadmin and not connected to anything else.

Case definition - sample

This sample case definition consists of two user tasks

  • Dynamic User Task

  • Another task that is started automatically

Both of them are assigned to the wbadmin user, although only one (the second) will be created automatically when the case instance is created. This is because it is marked as autostart and thus will be created directly.

The first one can be created dynamically on an ad hoc basis.

Pull back your business assets to business application source code
  • Go to business-application-kjar

  • Execute git remote add origin ssh://wbadmin@localhost:8001/MySpace/business-application-kjar

  • Execute git pull origin master - when prompted enter wbadmin as password

Go to the business-application-service directory and launch the application:

./launch.sh clean install for Linux/Unix

./launch.bat clean install for Windows

3.7.7.4. Results

Once the build and launch are complete, you can open your browser at http://localhost:8090.

Next, point the browser to http://localhost:8090/rest/server/containers to see that your business assets project has been properly deployed and is running.

By default all REST endpoints (URL pattern /rest/*) are secured and require authentication. The default user that can be used to log on is wbadmin with password wbadmin.

Next, point the browser to http://localhost:8090/rest/server/containers/business-application-kjar/cases/definitions to see dynamic assets (cases) available for execution.

Execute business process

You can execute the business process via the REST API exposed by your business application (in fact, by the Business Automation capability).

HTTP headers can be set to change the format of the data returned:

  • Accept: application/json for JSON format

  • Accept: application/xml for XML (JAXB based) format

  • X-KIE-ContentType: XSTREAM for XML (XStream based) format

{casedefid} needs to be replaced with the actual case definition id that is returned from the endpoint http://localhost:8090/rest/server/containers/business-application-kjar/cases/definitions

Remember that endpoints are protected so make sure you provide username and password when making the request.
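As an illustration, a request to start a case instance could look like the one below; this is only a sketch that assumes the standard KIE Server case endpoint and uses myfirstcase as the case definition id.

curl -u wbadmin:wbadmin -X POST \
  -H "Accept: application/xml" \
  http://localhost:8090/rest/server/containers/business-application-kjar/cases/myfirstcase/instances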

In response to this request, a case instance id should be returned.

<string-type>
    <value>CASE-0000000001</value>
</string-type>

You can examine the details of that case instance by pointing your browser to http://localhost:8090/rest/server/containers/business-application-kjar/cases/instances/CASE-0000000001:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<case-instance>
    <case-id>CASE-0000000001</case-id>
    <case-description>myfirstcase</case-description>
    <case-owner>wbadmin</case-owner>
    <case-status>1</case-status>
    <case-definition-id>myfirstcase</case-definition-id>
    <container-id>business-application-kjar-1_0-SNAPSHOT</container-id>
    <case-started-at>2018-10-30T09:54:45.747+01:00</case-started-at>
    <case-completion-msg></case-completion-msg>
    <case-sla-compliance>0</case-sla-compliance>
</case-instance>

Load the tasks for the given case instance that are assigned to the wbadmin user.

You should see the second task from the case definition:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<task-summary-list>
    <task-summary>
        <task-id>1</task-id>
        <task-name>Another task that is started automatically</task-name>
        <task-subject></task-subject>
        <task-description></task-description>
        <task-status>Reserved</task-status>
        <task-priority>0</task-priority>
        <task-is-skipable>true</task-is-skipable>
        <task-actual-owner>wbadmin</task-actual-owner>
        <task-created-by>wbadmin</task-created-by>
        <task-created-on>2018-10-30T09:54:45.790+01:00</task-created-on>
        <task-activation-time>2018-10-30T09:54:45.790+01:00</task-activation-time>
        <task-proc-inst-id>1</task-proc-inst-id>
        <task-proc-def-id>myfirstcase</task-proc-def-id>
        <task-container-id>business-application-kjar-1_0-SNAPSHOT</task-container-id>
        <task-parent-id>-1</task-parent-id>
    </task-summary>
</task-summary-list>

You can trigger the other user task dynamically by issuing a request to the case instance's dynamic task endpoint.

Optionally, you can send data as the payload of the request.

Load the tasks again for the given case instance that are assigned to the wbadmin user.

You should see both tasks from the case definition:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<task-summary-list>
    <task-summary>
        <task-id>1</task-id>
        <task-name>Another task that is started automatically</task-name>
        <task-subject></task-subject>
        <task-description></task-description>
        <task-status>Reserved</task-status>
        <task-priority>0</task-priority>
        <task-is-skipable>true</task-is-skipable>
        <task-actual-owner>wbadmin</task-actual-owner>
        <task-created-by>wbadmin</task-created-by>
        <task-created-on>2018-10-30T09:54:45.790+01:00</task-created-on>
        <task-activation-time>2018-10-30T09:54:45.790+01:00</task-activation-time>
        <task-proc-inst-id>1</task-proc-inst-id>
        <task-proc-def-id>myfirstcase</task-proc-def-id>
        <task-container-id>business-application-kjar-1_0-SNAPSHOT</task-container-id>
        <task-parent-id>-1</task-parent-id>
    </task-summary>
    <task-summary>
        <task-id>3</task-id>
        <task-name>Dynamic User Task</task-name>
        <task-subject></task-subject>
        <task-description></task-description>
        <task-status>Reserved</task-status>
        <task-priority>0</task-priority>
        <task-is-skipable>true</task-is-skipable>
        <task-actual-owner>wbadmin</task-actual-owner>
        <task-created-by>wbadmin</task-created-by>
        <task-created-on>2018-10-30T10:08:01.257+01:00</task-created-on>
        <task-activation-time>2018-10-30T10:08:01.257+01:00</task-activation-time>
        <task-proc-inst-id>1</task-proc-inst-id>
        <task-proc-def-id>myfirstcase</task-proc-def-id>
        <task-container-id>business-application-kjar-1_0-SNAPSHOT</task-container-id>
        <task-parent-id>-1</task-parent-id>
    </task-summary>
</task-summary-list>
Execute business process from jBPM Case Management Showcase

You need to repoint the jBPM Case Management Showcase application to use the business application instead of the KIE Server bundled with the single zip distribution of jBPM. To do so, edit the standalone.xml file of the jBPM server (JBPM_SERVER/standalone/configuration) and change the value of the org.kie.server.location system property:

<property name="org.kie.server.location" value="http://localhost:8090/rest/server"/>

Once done, restart jBPM server.

Stop the application if it’s running.

Go to the business-application-service directory and launch the application in development mode:

./launch-dev.sh clean install for Linux/Unix

./launch-dev.bat clean install for Windows

This will connect your business application to Business Central so it can be administered from within its UI.

Go to Business Central in the browser and navigate to servers (from the home screen).

tutorial 7 empty server

As you can see, the business-application-service Dev is there and connected, although it does not have any kjars deployed. This is because it is now running in managed mode, meaning that Business Central decides which kjars it should run.

So let’s deploy the business-application-kjar to our running application.

  • Go to projects from home screen of Business Central

  • Go into business-application-kjar project

  • Click Deploy button

  • Make sure that Server configuration is set to business-application-service-dev and click ok

The project should be successfully deployed and you can examine that state by going back to servers from home screen.

Next, go to process definitions (in the Manage section of the Home screen) and select the server configuration (top right corner) - again it should be business-application-service-dev. The list of available process definitions will be loaded and you should see your single case definition from the business-application-kjar project.

tutorial 7 process defs

Examine details of that case definition by clicking on the row in the table. Switch to Diagram tab to see the visual representation of your case definition.

Business Central does not allow you to start case instances, so you need to switch to the Case Management Showcase application. It is accessible from the Apps launcher icon (top right corner) next to the logout button.

Launch the application and log in as wbadmin. Once logged in, you can start a new case instance.

tutorial 7 case app

Go into the newly started case instance by clicking its row in the active cases list.

tutorial 7 case instance

From there you can start a new instance of the Dynamic User Task, as the other one is already there.

3.7.7.5. Summary

Congratulations! You have enhanced your business application to take advantage of dynamic and adaptive business assets that allow you to do much more than structured processes. You have seen how easy it is to add additional user tasks, and that is just the beginning.

3.7.7.6. Source code of the tutorial

Here is the complete source code of the tutorial.

4. jBPM Installer

4.1. Prerequisites

This script assumes you have Java JDK 1.8+ (set as JAVA_HOME), and Ant 1.9+ installed. If you don’t, use the following links to download and install them:

To check whether Java and Ant are installed correctly, type the following commands inside a command prompt:

java -version

ant -version

This should return information about which version of Java and Ant you are currently using.

4.2. Downloading the Installer

First of all, you need to download the installer and unzip it on your local file system. There are two versions:

  • full installer - already contains a lot of the dependencies that are necessary during the installation

  • minimal installer - contains only the installer and will download all required dependencies on the fly

In general, it is probably best to download the full installer: jBPM-7.28.0.Final-installer-full.zip

You can also download the latest build (only for the minimal installer).

4.3. Demo Setup

The easiest way to get started is to simply run the installation script to install the demo setup. The demo install will setup all the web tooling (on top of WildFly) and Eclipse tooling in a pre-configured setup. Go into the jbpm-installer folder where you unzipped the installer and (from a command prompt) run:

ant install.demo

This will:

  • Download WildFly application server

  • Configure and deploy a process execution server

  • Configure and deploy Business Central

  • Configure and deploy the case management application

  • Download Eclipse

  • Install the Drools and jBPM Eclipse plugin

  • Install the Eclipse BPMN 2.0 Modeler

Running this command could take a while (REALLY, not kidding, we are for example downloading an Eclipse installation, even if you downloaded the full installer, specifically for your operating system).

The script always shows which file it is downloading (you could, for example, check whether it is still downloading by checking whether the size of the file in question in the jbpm-installer/lib folder is still increasing). If you want to avoid downloading specific components (because you will not be using them or you already have them installed somewhere else), check below for running only specific parts of the demo or directing the installer to an already installed component.

Once the demo setup has finished, you can start playing with the various components by starting the demo setup:

ant start.demo

This will:

  • Start H2 database server

  • Start WildFly application server

  • Start Eclipse

Now wait until the process management console comes up:

The case management UI will be available on:

It could take a minute to start up the application server and web application. If the web page doesn’t show up after a while, make sure you don’t have a firewall blocking that port, or another application already using the port 8080. You can always take a look at the server log {jbpm-installer-folder}/wildfly-{version}/standalone/log/server.log

Once everything is started, you can start playing with the Eclipse and web tooling, as explained in the following sections.

If you only want to try out the web tooling and do not wish to download and install the Eclipse tooling, you can use these alternative commands:

ant install.demo.noeclipse
ant start.demo.noeclipse

Similarly, if you only want to try out the Eclipse tooling and do not wish to download and install the web tooling, you can use these alternative commands:

ant install.demo.eclipse
ant start.demo.eclipse

Now continue with the 10-minute tutorials. Once you’re done playing and you want to shut down the demo setup, you can use:

ant stop.demo

If at any point in time you would like to start over with a clean demo setup - meaning all changes you made inside the web tooling and/or saved in the database will be lost - you can run the following command (after which you can run the installer again from scratch; note that this cannot be undone):

ant clean.demo

4.4. 10-Minute Tutorial using Business Central

Open up the process management console:

It could take a minute to start up the application server and web application. If the web page doesn’t show up after a while, make sure you don’t have a firewall blocking that port, or another application already using the port 8080. You can always take a look at the server log {jbpm-installer-folder}/wildfly-{version}/standalone/log/server.log

Log in, using krisv / krisv as username / password.

Using a prebuilt Evaluation example, the following screencast gives an overview of how to manage your process instances. It shows you:

  • How to log in to Business Central

  • How to import an existing example project and build and deploy it

  • How to start a new process instance

  • How to look up the current status of a running process instance

  • How to look up your tasks

  • How to complete a task

  • How to look at reports to monitor your process execution

    ScreencastConsole

Business Central supports the entire life cycle of your business processes: authoring, deployment, process management, tasks and dashboards.

  • The project authoring page allows you to look at existing repositories, where each project can contain business processes (but also business rules, data models, forms, etc.). It allows you to create your own project, or you could import an existing example to take a look at.

    • In this screencast, we start by importing the Evaluation project

  • The project explorer shows all available artifacts:

    • evaluation: business process describing the evaluation process as a sequence of tasks

    • evaluation-taskform: process form to start the evaluation process

    • PerformanceEvaluation-taskform: task form to perform the evaluation tasks

  • To make a process available for execution, you need to successfully build and deploy it first. To do so, open the selected project (in the project authoring page) and click Build & Deploy (top right corner).

  • To manage your process definitions and instances, click the "Process Management" menu option at the top menu bar and select one of the available options depending on your interest:

    • Process Definitions - lists all available process definitions

    • Process Instances - lists all active process instances (completed and aborted instances can be shown as well by changing the filter criteria)

  • The process definitions view allows you to start a new process instance by clicking on the Start button. The process form (as defined in the project) will be shown, where you need to fill in the necessary information to start the process. In this case, you need to fill in the user you want to start an evaluation for (for example use "krisv") and a reason for the request, after which you can complete the form. Some details about the process instance that was just started will be shown in the process instance details panel. From there you can access additional details:

    • Process model - to visualize current state of the process

    • Process variables - to see current values of process variables

    • Documents - documents related to the process instance

    • Logs - overview of all process events for that instance

    The process instance that you just started is first requiring a self-evaluation of the user and is waiting until the user has completed this task.

  • To see the tasks that have been assigned to you, choose the "Tasks" menu option on the top bar. By default, it will show all active tasks, and a "Performance Evaluation" (that was created by the process instance you just started) should be available to you. When you click a task, the task details will be shown, including the task form related to this task. After starting the task, you can fill in the necessary information and complete the task. After completing the task, you could check the "Process Instances" once more to check the progress of your process instance. You should be able to see that the process is now waiting for your HR manager and project manager to also perform an evaluation. You could log in as "john" / "john" and "mary" / "mary" to complete these tasks.

  • After starting and/or completing a few process instances and human tasks, you can generate a report of what has happened so far. Under "Dashboards", select "Process & Task Dashboard". This is a set of predefined charts that allow users to spot what is going on in the system. Charts can be fully customized as well, as explained in the Business Activity Monitoring chapter.

4.5. 10-Minute Tutorial using Eclipse

The following screencast gives an overview of how to use the Eclipse tooling. It shows you:

  • How to import and execute the evaluation sample project

    • Import the evaluation project (included in the jbpm-installer)

    • Open the Evaluation.bpmn process

    • Open the com.sample.ProcessTest Java class

    • Execute the ProcessTest class to run the process

  • How to create a new jBPM project (including sample process and JUnit test)

    ScreencastEclipse

You can import the evaluation project - a sample included in the jbpm-installer - by selecting "File → Import …​", select "Existing Projects into Workspace" and browse to the jbpm-installer/sample/evaluation folder and click "Finish". You can open up the evaluation process and the ProcessTest class. To execute the class, right-click it and select "Run as …​ - Java Application". The console should show how the process was started and how the different actors in the process completed the tasks assigned to them, to complete the process instance.

You could also create a new project using the jBPM project wizard. The sample projects contain a process and an associated Java file to start the process. Select "File - New … - Project …", and under the "jBPM" category select "jBPM project". Select to create a project with some example files to get you started quickly and click Next. Give the project a name. You can choose from a simple HelloWorld example or a slightly more advanced example using persistence and human tasks. If you select the latter and click Finish, you should see a new project containing a "sample.bpmn" process and a "com.sample.ProcessTest" JUnit test class. You can open the BPMN2 process by double-clicking it. To execute the process, right-click ProcessTest.java and select "Run As - Java Application".

4.6. Configuration

4.6.1. Business Central Authentication

The Business Central web application is using the pre-installed other security domain for authenticating and authorizing users (as specified in the WEB-INF/jboss-web.xml inside the WARs).

By default, the application server uses property-file-based realms. Please note that this configuration is intended only for demo purposes (users, roles and passwords are stored in simple property files on the filesystem).

Authentication is configured in the standalone.xml file as follows:

<security-domain name="other" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional" module="deployment.jbpm-console.war"/>
    </authentication>
</security-domain>
<security-realm name="ApplicationRealm">
    <authentication>
        <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
        <properties path="users.properties" relative-to="jboss.server.config.dir"/>
    </authentication>
    <authorization>
        <properties path="roles.properties" relative-to="jboss.server.config.dir"/>
    </authorization>
</security-realm>

These are the default users:

Table 1. Default users

Name        Password      Business Central roles                      Task roles
admin       admin         admin,analyst,kiemgmt,rest-all,kie-server
krisv       krisv         admin,analyst,rest-all,kie-server
john        john          analyst,kie-server                          Accounting,PM
mary        mary          analyst,kie-server                          HR
sales-rep   sales-rep     analyst,kie-server                          sales
jack        jack          analyst,kie-server                          IT
katy        katy          analyst,kie-server                          HR
salaboy     salaboy       admin,analyst,rest-all,kie-server           IT,HR,Accounting
kieserver   kieserver1!   kie-server

Authentication can be customized by using any of the following options:

  • The users and groups management screens on the Business Central web application.

Navigate into the Business Central web application, click the menu Home → Admin and select Users.

  • The add-user script that comes by default on Wildfly/EAP.

Example for Linux platforms - run the following command and follow the script instructions:

/bin/sh $JBOSS_HOME/bin/add-user.sh
  --user-properties $JBOSS_HOME/standalone/configuration/users.properties
  --group-properties $JBOSS_HOME/standalone/configuration/roles.properties
  --realm ApplicationRealm

4.6.2. Using your own database with the jBPM installer

4.6.2.1. Introduction

jBPM uses the Java Persistence API specification (v2) to allow users to configure whatever datasource they want to use to persist runtime data. As a result, the instructions below describe how you should configure a datasource when using JPA on JBoss application server (e.g. EAP7 or Wildfly10) using a persistence.xml file and configuring your datasource and driver in your application server’s standalone.xml , similar to how you would configure any other application using JPA on the application server. The installer automates some of this (like copying the right files to the right location after installation).

By default, the jbpm-installer uses an H2 database for persisting runtime data. In this section we will:

  1. modify the persistence settings for runtime persistence of process instance state

  2. test the startup with our new settings!

You will need a local instance of a database; in this case we will use MySQL.

4.6.2.2. Database setup

In the MySQL database used in this quickstart, create a single user:

  • user/schema "jbpm" with password "jbpm" (which will be used to persist all entities)

If you end up using different names for your user/schemas, please make a note of where we insert "jbpm" in the configuration files.
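A minimal example of creating the schema and user from the MySQL command line (one possible way to do it; adjust the host and privileges to your environment):

mysql -u root -p -e "CREATE DATABASE jbpm; CREATE USER 'jbpm'@'localhost' IDENTIFIED BY 'jbpm'; GRANT ALL PRIVILEGES ON jbpm.* TO 'jbpm'@'localhost';"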

If you want to try this quickstart with another database, a section at the end of this quickstart describes what you may need to modify.

4.6.2.3. Configuration

The following files define the persistence settings for the jbpm-installer demo:

  • jbpm-installer/db/jbpm-persistence-JPA2.xml

  • Application server configuration

    • standalone-*.xml

There are multiple standalone.xml files available (depending on whether you are using JBoss EAP or Wildfly and whether you are running the normal or full profile). The full profile is required to use the JMS component for remote integration, so it will be used by default by the installer. Best practice is to update all standalone.xml files to have a consistent setup, but the most important one to have properly configured is standalone-full-wildfly-{version}.xml, as it is used by default by the installer.

Do the following:

  • Disable H2 default database and enable MySQL database in build.properties

    # default is H2
    # H2.version=1.3.168
    # db.name=h2
    # db.driver.jar.name=${db.name}.jar
    # db.driver.download.url=http://repo1.maven.org/maven2/com/h2database/h2/${H2.version}/h2-${H2.version}.jar
    #mysql
    db.name=mysql
    db.driver.module.prefix=com/mysql
    db.driver.jar.name=mysql-connector-java-5.1.18.jar
    db.driver.download.url=https://repository.jboss.org/nexus/service/local/repositories/central/content/mysql/mysql-connector-java/5.1.18/mysql-connector-java-5.1.18.jar
    org.kie.server.persistence.dialect=org.hibernate.dialect.MySQLDialect

    You might want to update the db driver jar name and download url to whatever version of the jar matches your installation. Look to also update the dialect to what matches your installation if needed (for example change to MySQL5Dialect for MySQL 5.x specific features).

  • db/jbpm-persistence-JPA2.xml :

    This is the JPA persistence file that defines the persistence settings used by jBPM for the jBPM engine information, the logging/BAM information, and task service.

    In this file, you will have to change the name of the hibernate dialect used for your database.

    The original line is:

    <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>

    In the case of a MySQL database, you need to change it to:

    <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>

    For those of you who decided to use another database, a list of the available hibernate dialect classes can be found here.

  • standalone-full-wildfly-{version}.xml :

    Standalone.xml and standalone-full.xml are the configuration for the standalone JBoss application server. When the installer installs the demo, it copies these files to the standalone/configuration directory in the JBoss server directory. Since the installer uses Wildfly by default as application server, you probably need to change standalone-full-wildfly-{version}.xml .

    We need to change the datasource configuration in standalone-full.xml so that the jBPM engine can use our MySQL database. The original file contains (something very similar to) the following lines:

    <datasource jta="true" jndi-name="java:jboss/datasources/jbpmDS" pool-name="H2DS" enabled="true" use-java-context="true" use-ccm="true">
        <connection-url>jdbc:h2:tcp://localhost/~/jbpm-db;MVCC=TRUE</connection-url>
        <driver>h2</driver>
        <security>
           <user-name>sa</user-name>
        </security>
    </datasource>
    <drivers>
        <driver name="h2" module="com.h2database.h2">
            <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
        </driver>
    </drivers>

    Change the lines to the following:

    <datasource jta="true" jndi-name="java:jboss/datasources/jbpmDS" pool-name="MySQLDS" enabled="true" use-java-context="true" use-ccm="true">
        <connection-url>jdbc:mysql://localhost:3306/jbpm</connection-url>
        <driver>mysql</driver>
        <security>
           <user-name>jbpm</user-name>
           <password>jbpm</password>
        </security>
    </datasource>

    and add an additional driver configuration:

    <driver name="mysql" module="com.mysql">
        <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
    </driver>
  • To install driver jars in JBoss application server (Wildfly, EAP, etc.), it is recommended to install the driver jar as a module. The installer already takes care of most of this: it will copy the driver jar (which you specified in build.properties) to the right folder inside the modules directory of your server and put a matching module.xml next to it. For MySQL, this file is called db/mysql_module.xml. Open this file and make sure that the file name of the driver jar listed there is identical to the driver jar name you specified in build.properties (including the version). Note that, even if you simply uncommented the default MySQL configuration, you will still need to add the right version here.

  • Starting the demo

    We’ve modified all the necessary files at this point. Now would be a good time to make sure your database is started up as well!

    The installer script copies this file into the jbpm-console WAR before the WAR is installed on the server. If you have already run the installer, it is recommended to stop the demo and clean it first using

    ant stop.demo

    and

    ant clean.demo

    before continuing.

    Run

    ant install.demo

    to (re)install the wars and copy the necessary configuration files. Once you’ve done that, (re)start the demo using

    ant start.demo
  • Problems?

    If this isn’t working for you, please try the following:

    • Please double check the files you’ve modified: I wrote this, but still made mistakes when changing files!

    • Please make sure that you don’t secretly have another (unmodified) instance of JBoss AS running.

    • If neither of those work (and you’re using MySQL), please do then let us know.

4.6.2.4. Using a different database

If you decide to use a different database with this demo, you need to remember the following when going through the steps above:

  • Configuring the jBPM datasource in standalone.xml:

    • After locating the java:jboss/datasources/jbpmDS datasource, you need to provide the following properties specific to your database:

      • Change the url of your database

      • Change the user-name and password

      • Change the name of the driver (which you’ll create next)

        For example:

        <datasource jta="true" jndi-name="java:jboss/datasources/jbpmDS" pool-name="PostgreSQLDS" enabled="true" use-java-context="true" use-ccm="true">
            <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
            <driver>postgresql</driver>
            <security>
                <user-name>jbpm</user-name>
                <password>jbpm</password>
            </security>
        </datasource>
    • Add an additional driver configuration:

      • Change the name of the driver to match the name you specified when configuring the datasource in the previous step

      • Change the module of the driver: the database driver jar should be installed as a module (see below) and here you should reference the unique name of the module. Since the installer can take care of automatically generating this module for you (see below), this should match the db.driver.module.prefix property in build.properties (where forward slashes are replaced by a point). In the example below, I used org/postgresql as db.driver.module.prefix which means that I should then use org.postgresql as module name for the driver.

      • Fill in the correct name of the XA datasource class to use.

    For example:

    +

    <driver name="postgresql" module="org.postgresql">
        <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
    </driver>
  • You need to change the dialect in persistence.xml to the dialect for your database, for example:

    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
  • In order to make sure your driver will be correctly installed in the JBoss application server, there are typically multiple options, like install as a module or as a deployment. It is recommended to install the driver as a module for EAP and Wildfly.

    • Install the driver JAR as a module, which is what the install script does.

    • Otherwise, you can modify and install the downloaded JAR as a deployment. In this case you will have to copy the JAR yourself to the standalone/deployments directory.

    If you choose to install driver as JBoss module (recommended), please do the following:

    • In build.properties, disable the default H2 driver properties

      # default is H2
      # H2.version=1.3.168
      # db.name=h2
      # db.driver.jar.name=h2-${H2.version}.jar
      # db.driver.download.url=http://repo1.maven.org/maven2/com/h2database/h2/${H2.version}/h2-${H2.version}.jar
    • Uncomment one of the other example configs (mysql or postgresql) or create your own:

      #postgresql
      db.name=postgresql
      db.driver.module.prefix=org/postgresql
      db.driver.jar.name=postgresql-9.1-902.jdbc4.jar
      db.driver.download.url=https://repository.jboss.org/nexus/content/repositories/thirdparty-uploads/postgresql/postgresql/9.1-902.jdbc4/postgresql-9.1-902.jdbc4.jar
      • Change the db.name property in build.properties to a name for your database.

      • Change the db.driver.module.prefix property to a name for the module of your driver. Note that this should match the module property when configuring the driver in standalone.xml (where forward slashes in the prefix here are replaced by a point). In the example above, I used org/postgresql as db.driver.module.prefix which means that I should then use org.postgresql as module name for the driver.

      • Change the db.driver.jar.name property to the name of the jar that contains your database driver.

      • Change the db.driver.download.url property to where the driver jar can be downloaded. Alternatively, you could manually download the jar yourself, and place it in the db/drivers folder, using the same name as you specified in the db.driver.jar.name property.

    • Lastly, you’ll have to create the db/${db.name}_module.xml file. As an example you can use db/mysql_module.xml, so just make a copy of it and:

      • Change the name of the module to match the driver module name above

      • Change the name of the module resource path to the name of the db.driver.jar.name property.

    • For example, the top of the file would look like:

<module xmlns="urn:jboss:module:1.0" name="org.postgresql">
   <resources>
     <resource-root path="postgresql-9.1-902.jdbc4.jar"/>
   </resources>

4.6.3. jBPM database schema scripts (DDL scripts)

By default the demo setup makes use of Hibernate auto DDL generation capabilities to build up the complete database schema, including all tables, sequences, etc. This might not always be welcomed (by your database administrator), and thus the installer provides DDL scripts for most popular databases.

Table 2. DDL scripts

Database name   Location
db2             jbpm-installer/db/ddl-scripts/db2
derby           jbpm-installer/db/ddl-scripts/derby
h2              jbpm-installer/db/ddl-scripts/h2
hsqldb          jbpm-installer/db/ddl-scripts/hsqldb
mysql5          jbpm-installer/db/ddl-scripts/mysql5
mysqlinnodb     jbpm-installer/db/ddl-scripts/mysqlinnodb
oracle          jbpm-installer/db/ddl-scripts/oracle
postgresql      jbpm-installer/db/ddl-scripts/postgresql
sqlserver       jbpm-installer/db/ddl-scripts/sqlserver
sqlserver2008   jbpm-installer/db/ddl-scripts/sqlserver2008
sybase          jbpm-installer/db/ddl-scripts/sybase

DDL scripts are provided for both the jBPM and Quartz schemas, although the Quartz schema DDL script is only required when the timer service is configured with the Quartz database job store. See the section on timers for additional details.

These scripts can be used to create the database schema initially, but they can also serve as the basis for any optimization that needs to be applied, such as indexes.

If you use MySQL 5.7 or earlier (MariaDB 10.2.3 or earlier), you also need to run jbpm-installer/db/ddl-scripts/mysql5/mysql-jbpm-amend-auto-increment-procedure.sql

This script creates a procedure for the jBPM tables (ProcessInstanceInfo/WorkItemInfo/Task) to protect their AUTO_INCREMENT counters. Without the procedure, the ID values of those tables could be reset on a MySQL/MariaDB restart (https://dev.mysql.com/doc/refman/8.0/en/innodb-auto-increment-handling.html#innodb-auto-increment-initialization), which could introduce further side effects.

In addition to creating the procedure, you have to call it on every MySQL/MariaDB restart. For example, add the following to /etc/my.cnf:

init-file=/path/to/mysql-jbpm-amend-auto-increment-call.sql

and write mysql-jbpm-amend-auto-increment-call.sql with the following content:

call mydatabase.JbpmAmendAutoIncrement;

If you use PostgreSQL with jBPM, you also need to run jbpm-installer/db/ddl-scripts/postgresql/postgresql-jbpm-lo-trigger-clob.sql

This script creates triggers for the jBPM tables to protect CLOB references to large objects. Without the triggers, the vacuumlo tool (https://www.postgresql.org/docs/9.4/static/vacuumlo.html) deletes active large objects, which causes issues for jBPM. If you are already running jBPM without the triggers, you also need to run the following SQL statements after applying the triggers to protect existing CLOB data.

insert into jbpm_active_clob ( loid ) select cast(expression as oid) from booleanexpression where expression is not null;
insert into jbpm_active_clob ( loid ) select cast(body as oid) from email_header where body is not null;
insert into jbpm_active_clob ( loid ) select cast(text as oid) from i18ntext where text is not null;
insert into jbpm_active_clob ( loid ) select cast(text as oid) from task_comment where text is not null;
insert into jbpm_active_clob ( loid ) select cast(qexpression as oid) from querydefinitionstore where qexpression is not null;
insert into jbpm_active_clob ( loid ) select cast(deploymentunit as oid) from deploymentstore where deploymentunit is not null;

4.6.4. jBPM installer script

The jBPM installer Ant script performs most of the work automatically and usually does not require additional attention. In case it does, here is a list of the available targets that can be used to perform some of the steps manually.

Table 3. jBPM installer available targets (target - description)

clean.db - cleans up the database used by the jBPM demo (applies only to the H2 database)
clean.demo - cleans up the entire installation so a new installation can be performed
clean.demo.noeclipse - same as clean.demo but does not remove Eclipse
clean.eclipse - removes Eclipse and its workspace
clean.generated.ddl - removes generated DDL scripts, if any
clean.jboss - removes the application server with all its deployments
clean.jboss.repository - removes repository content for the demo setup (guvnor Maven repo, niogit, etc.)
download.db.driver - downloads the DB driver configured in build.properties
download.ddl.dependencies - downloads all dependencies required to run the DDL script generation tool
download.droolsjbpm.eclipse - downloads the Drools and jBPM Eclipse plugin
download.eclipse - downloads the Eclipse distribution
download.eclipse.gef - downloads the Eclipse GEF feature
download.jboss - downloads the JBoss Application Server
download.jBPM.bin - downloads the jBPM binary distribution (jBPM libs and their dependencies)
download.jBPM.casemgmt - downloads the jBPM case management console
download.jBPM.console - downloads the jBPM process management console
download.kie.server - downloads the jBPM process execution server
install.db.files - installs the DB driver as a JBoss module
install.demo - installs the complete demo environment
install.demo.eclipse - installs Eclipse with all jBPM plugins, no server installation
install.demo.noeclipse - similar to install.demo but skips the Eclipse installation
install.droolsjbpm-eclipse.into.eclipse - installs the droolsjbpm Eclipse plugin into Eclipse
install.eclipse - installs the Eclipse IDE
install.jboss - installs JBoss AS
install.jBPM-casemgmt.into.jboss - installs the jBPM case management application
install.jBPM-console.into.jboss - installs the jBPM process management console
install.kie-server.into.jboss - installs the jBPM process execution server

4.7. Frequently Asked Questions

Some common issues are explained below.

  1. What if the installer complains it cannot download component X?

    Are you connected to the Internet? Do you have a firewall turned on? Do you require a proxy? It might be possible that one of the locations we’re downloading the components from is temporarily offline. Try downloading the components manually (possibly from alternate locations) and put them in the jbpm-installer/lib folder.

  2. What if the installer complains it cannot extract / unzip a certain JAR/WAR/zip?

    If your download failed while downloading a component, it is possible that the installer is trying to use an incomplete file. Try deleting the component in question from the jbpm-installer/lib folder and reinstall, so it will be downloaded again.

  3. What if I have been changing my installation (and it no longer works) and I want to start over again with a clean installation?

    You can use ant clean.demo to remove all the installed components, so you end up with a fresh installation again.

  4. I sometimes see exceptions when trying to stop or restart certain services, what should I do?

    If you see errors during shutdown, are you sure the services were still running? If you see exceptions during restart, are you sure the service you started earlier was successfully shutdown? Maybe try killing the services manually if necessary.

  5. Something seems to be going wrong when running Eclipse but I have no idea what. What can I do?

    Always check the consoles for output like error messages or stack traces. You can also check the Eclipse Error Log for exceptions. Try adding an audit logger to your session to figure out what’s happening at runtime, or try debugging your application.

  6. Something seems to be going wrong when running a web-based application like the jbpm-console. What can I do?

    You can check the server log for possible exceptions: jbpm-installer/jboss-as-{version}/standalone/log/server.log (for JBoss AS7).

For all other questions, try contacting the jBPM community as described in the Getting Started chapter.

5. Examples

5.1. Introduction

Business Central provides various sample projects that will help you get started with automating business processes. These are bundled together with the application and you can easily try them out by navigating to Design → Projects and clicking Try Samples.

This section shows the different examples that can be found in the jbpm-playground repository. All these examples are high level and business oriented.

If you want to contribute to these examples, please get in touch with any member of the jBPM/Drools team.

5.2. Importing Projects through Git

To import the Human Resources example, as well as other examples, follow these steps:

  1. Logging in to Business Central

    1. On the command line, change into the $SERVER_HOME/bin/ directory and execute the following command:

      • for Unix environment:

        ./standalone.sh
      • for Windows environment:

        standalone.bat
    2. Once your server is up and running, open the following address in a web browser:

      http://localhost:8080/business-central

      This opens the login page.

    3. Log in to Business Central with the user credentials created during installation.

  2. Importing Projects Through Git

    1. Click Design → Projects.

    2. Click Import Project.

      1. If your current space contains at least one project, the Import Project option is available under the dropdown menu in the space menu bar.

    3. In the Import Project dialogue, enter the following information:

      • Repository URL : enter the Git URL you want to import, for example: https://github.com/kiegroup/jbpm-playground.

      • Authentication Options: If the target git repository requires authentication, you can specify the user name and password using the expanded dialog option.

    4. Click Import.

project import

This will import a number of examples into your instance of jBPM.

5.3. Human Resources Example

The Human Resources example’s use case can be described as follows: a company wants to hire new developers. In this process, three departments (that is, Human Resources, IT, and Accounting) are involved. These departments are represented by three users: Katy, Jack, and John, respectively.

human resources high level
Business process designed for the Human Resources example's use case

Note that only four out of the six defined activities within the business process are User Tasks. User Tasks require human interaction. The other two tasks are Service Tasks, which are automated and connected to other systems.

Each instance of the process will follow certain actions:

  • The human resources team performs the initial interview with the candidate.

  • The IT department team performs the technical interview.

  • Based on the output from the previous two steps, the accounting team creates a job proposal.

  • When the proposal has been drafted, it is automatically sent to the candidate via email.

  • If the candidate accepts the proposal, a new meeting to sign the contract is scheduled.

  • Finally, if the candidate accepts the proposal, the system posts a message about the new hire using the Twitter service connector.

Note that Jack, John, and Katy represent any employee within the company with the appropriate role assigned.

5.3.1. The Kie Project: human-resources

To start exploring the project:

  1. Click Design → Projects.

  2. Click Human Resources Kjar Example → hiring.

The asset list page contains the hiring.bpmn2 process and a set of forms for each human task. Click these assets to explore. Notice that different editors open for different types of assets.

human resources hiring bpmn

5.3.2. Building the Human Resources Example

To build the Project:

  1. Click Design → Projects.

  2. Click Human Resources Kjar Example.

  3. Click Deploy.

Deploy creates a new JAR artifact that is deployed to the runtime environment as a new deployment unit.

human resources build and deploy

After successfully building and deploying your project, you can verify its presence in the Execution Servers tab. Click Deploy → Execution Servers to do so.

human resources deployment screen
Figure 7. Deployment Units

When you deploy a project from the Project Editor, it is deployed using the default configuration, which means using the Singleton strategy, the default KIE base and the default KIE session.

If you want to change these settings, you can make the necessary adjustments on the Settings tab for the specific project. There you can set a different strategy, or use a non-default KIE base or KIE session. Once you have saved your settings, you can redeploy the project as a new deployment unit.

human resources settings screen
Figure 8. Project Settings

Once your artifact that contains the process definition is deployed, the Process Definition will become available in Manage → Process Definitions.

5.3.3. Create a new Process Instance

To create new process instances:

Click Manage → Process Definitions.

Start your instance:

human resources process definitions
Figure 9. Starting Process Instances

The Process Definitions section contains all the available process definitions in the runtime environment. In order to add new process definitions, build and deploy a new project.

Most processes require additional information to create a new process instance. This is done through forms. For this project, fill in the name of the candidate that is to be interviewed.

When you click Submit, you create a new process instance. This creates the first task, which is available to the Human Resources team. To see the task, you need to log out and log back in as a user with the appropriate role assigned, that is, someone from Human Resources.

When you start the process, you can interact with the human tasks. To do so, click Track → Task Inbox.

Note that in order to see the tasks in the task list, you need to belong to the specific user groups for which the task is designed. For example, the HR Interview task is visible only to members of the HR group, and the Tech Interview task is visible only to members of the IT group.

5.4. Examples zip

A zip file of examples can also be downloaded from the downloads page, containing various examples that can be opened in the Eclipse-based developer tools. Simply download and unzip the examples artifact and import it into your Eclipse workspace.

6. jBPM Version Migration Guide

6.1. Deprecated in jBPM 7

Table 4. Deprecated properties

Property: jbpm.v5.id.strategy

Description: This property controlled how the id values of NodeInstance instances were generated. Setting this property to true meant that the same strategy used in jBPM 5 was still applied, even though that (jBPM 5) strategy meant that NodeInstance ids were not unique.

jBPM 7 Behavior: In jBPM 7, this is no longer possible: all NodeInstance ids are unique.

6.2. Changed in jBPM 7

Table 5. Migration information

Jira: https://issues.jboss.org/browse/JBPM-7693

Description: The value of the constant DAYS_PER_WEEK in the class org.jbpm.process.core.timer.BusinessCalendarImpl was updated to business.days.per.week to correctly reflect its meaning.

What to do: Update your code to reflect this change - from the old value business.hours.per.week to the new value business.days.per.week.

jBPM Core

Using the jBPM Core Engine

7. Core Engine API

7.1. Overview

This chapter introduces the API you need to load processes and execute them. For more detail on how to define the processes themselves, check out the chapter on BPMN 2.0.

To interact with the jBPM engine (for example, to start a process), you need to set up a session. This session will be used to communicate with the jBPM engine. A session needs to have a reference to a KIE base, which contains a reference to all the relevant process definitions. This KIE base is used to look up the process definitions whenever necessary. To create a session, you first need to create a KIE base, load all the necessary process definitions (this can be from various sources, like from classpath, file system or process repository) and then instantiate a session.

Once you have set up a session, you can use it to start executing processes. Whenever a process is started, a new process instance is created (for that process definition) that maintains the state of that specific instance of the process.

KnowledgeBaseAndSession

For example, imagine you are writing an application to process sales orders. You could then define one or more process definitions that define how the order should be processed. When starting up your application, you first need to create a KIE base that contains those process definitions. You can then create a session based on this KIE base so that, whenever a new sales order comes in, a new process instance is started for that sales order. That process instance contains the state of the process for that specific sales request.

A KIE base can be shared across sessions and usually is only created once, at the start of the application (as creating a KIE base can be rather heavy-weight as it involves parsing and compiling the process definitions). KIE bases can be dynamically changed (so you can add or remove processes at runtime).

Sessions can be created based on a KIE base and are used to execute processes and interact with the jBPM engine. You can create as many independent sessions as you need, and creating a session is considered relatively lightweight. How many sessions you create is up to you. In general, most simple cases start out with creating one session that is then called from various places in your application. You could decide to create multiple sessions if, for example, you want to have multiple independent processing units (for example, if you want all processes from one customer to be completely independent from processes for another customer, you could create an independent session for each customer) or if you need multiple sessions for scalability reasons. If you don’t know what to do, simply start by having one KIE base that contains all your process definitions and create one session that you then use to execute all your processes.

The jBPM project has a clear separation between the API the users should be interacting with and the actual implementation classes. The public API exposes most of the features we believe "normal" users can safely use and should remain rather stable across releases. Expert users can still access internal classes but should be aware that they should know what they are doing and that the internal API might still change in the future.

As explained above, the jBPM API should thus be used to (1) create a KIE base that contains your process definitions, and to (2) create a session to start new process instances, signal existing ones, register listeners, etc.

7.2. KieBase

The jBPM API allows you to first create a KIE base. This KIE base should include all your process definitions that might need to be executed by that session. To create a KIE base, use a KieHelper to load processes from various resources (for example from the classpath or from the file system), and then create a new KIE base from that helper. The following code snippet shows how to create a KIE base consisting of only one process definition (using in this case a resource from the classpath).

  KieHelper kieHelper = new KieHelper();
  KieBase kieBase = kieHelper
                    .addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn"))
                    .build();

The ResourceFactory has similar methods to load files from file system, from URL, InputStream, Reader, etc.
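
For illustration, a minimal sketch (with hypothetical file names and paths) that builds a KIE base from both a classpath resource and a file-system resource:

  import org.kie.api.KieBase;
  import org.kie.internal.io.ResourceFactory;
  import org.kie.internal.utils.KieHelper;
  ...
  KieHelper kieHelper = new KieHelper();
  KieBase kieBase = kieHelper
                    // process definition available on the classpath
                    .addResource(ResourceFactory.newClassPathResource("MyProcess.bpmn"))
                    // process definition loaded from the file system (hypothetical path)
                    .addResource(ResourceFactory.newFileResource("/path/to/MyOtherProcess.bpmn"))
                    .build();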

This is considered manual creation of a KIE base, and while it is simple, it is not recommended for real application development but rather for trying things out. Below you’ll find the recommended and much more powerful way of building a KIE base, a KIE session and more: the RuntimeManager.

7.3. KieSession

Once you’ve loaded your KIE base, you should create a session to interact with the jBPM engine. This session can then be used to start new processes, signal events, etc. The following code snippet shows how easy it is to create a session based on the previously created KIE base, and to start a process (by id).

KieSession ksession = kieBase.newKieSession();
ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess");

7.3.1. ProcessRuntime

The ProcessRuntime interface defines all the session methods for interacting with processes, as shown below.

  /**
	 * Start a new process instance.  The process (definition) that should
	 * be used is referenced by the given process id.
	 *
	 * @param processId  The id of the process that should be started
	 * @return the ProcessInstance that represents the instance of the process that was started
	 */
    ProcessInstance startProcess(String processId);

    /**
	 * Start a new process instance.  The process (definition) that should
	 * be used is referenced by the given process id.  Parameters can be passed
	 * to the process instance (as name-value pairs), and these will be set
	 * as variables of the process instance.
     *
	 * @param processId  the id of the process that should be started
     * @param parameters  the process variables that should be set when starting the process instance
	 * @return the ProcessInstance that represents the instance of the process that was started
     */
    ProcessInstance startProcess(String processId,
                                 Map<String, Object> parameters);

    /**
     * Signals the jBPM engine that an event has occurred. The type parameter defines
     * which type of event and the event parameter can contain additional information
     * related to the event.  All process instances that are listening to this type
     * of (external) event will be notified.  For performance reasons, this type of event
     * signaling should only be used if one process instance should be able to notify
     * other process instances. For internal event within one process instance, use the
     * signalEvent method that also include the processInstanceId of the process instance
     * in question.
     *
     * @param type the type of event
     * @param event the data associated with this event
     */
    void signalEvent(String type,
                     Object event);

    /**
     * Signals the process instance that an event has occurred. The type parameter defines
     * which type of event and the event parameter can contain additional information
     * related to the event.  All node instances inside the given process instance that
     * are listening to this type of (internal) event will be notified.  Note that the event
     * will only be processed inside the given process instance.  All other process instances
     * waiting for this type of event will not be notified.
     *
     * @param type the type of event
     * @param event the data associated with this event
     * @param processInstanceId the id of the process instance that should be signaled
     */
    void signalEvent(String type,
                     Object event,
                     long processInstanceId);

    /**
     * Returns a collection of currently active process instances.  Note that only process
     * instances that are currently loaded and active inside the jBPM engine will be returned.
     * When using persistence, it is likely not all running process instances will be loaded
     * as their state will be stored persistently.  It is recommended not to use this
     * method to collect information about the state of your process instances but to use
     * a history log for that purpose.
     *
     * @return a collection of process instances currently active in the session
     */
    Collection<ProcessInstance> getProcessInstances();

    /**
     * Returns the process instance with the given id.  Note that only active process instances
     * will be returned.  If a process instance has been completed already, this method will return
     * null.
     *
     * @param id the id of the process instance
     * @return the process instance with the given id or null if it cannot be found
     */
    ProcessInstance getProcessInstance(long processInstanceId);

    /**
     * Aborts the process instance with the given id.  If the process instance has been completed
     * (or aborted), or the process instance cannot be found, this method will throw an
     * IllegalArgumentException.
     *
     * @param id the id of the process instance
     */
    void abortProcessInstance(long processInstanceId);

    /**
     * Returns the WorkItemManager related to this session.  This can be used to
     * register new WorkItemHandlers or to complete (or abort) WorkItems.
     *
     * @return the WorkItemManager related to this session
     */
    WorkItemManager getWorkItemManager();
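
As an illustration, a minimal sketch of calling these methods on a KieSession (which implements ProcessRuntime); the process id, variable name and event types are hypothetical:

  import java.util.HashMap;
  import java.util.Map;
  import org.kie.api.runtime.process.ProcessInstance;
  ...
  // start a process instance and pass initial process variables
  Map<String, Object> parameters = new HashMap<>();
  parameters.put("employee", "krisv"); // hypothetical process variable
  ProcessInstance processInstance = ksession.startProcess("com.sample.MyProcess", parameters);

  // notify all process instances waiting for this (external) event type
  ksession.signalEvent("ExternalEvent", null);

  // notify only the process instance started above (internal event)
  ksession.signalEvent("InternalEvent", null, processInstance.getId());

  // abort the process instance if it is still active
  if (ksession.getProcessInstance(processInstance.getId()) != null) {
      ksession.abortProcessInstance(processInstance.getId());
  }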

7.3.2. Event Listeners

The session provides methods for registering and removing listeners. A ProcessEventListener can be used to listen to process-related events, like starting or completing a process, entering and leaving a node, etc. Below, the different methods of the ProcessEventListener class are shown. An event object provides access to related information, like the process instance and node instance linked to the event. You can use this API to register your own event listeners.

public interface ProcessEventListener {

  void beforeProcessStarted( ProcessStartedEvent event );
  void afterProcessStarted( ProcessStartedEvent event );
  void beforeProcessCompleted( ProcessCompletedEvent event );
  void afterProcessCompleted( ProcessCompletedEvent event );
  void beforeNodeTriggered( ProcessNodeTriggeredEvent event );
  void afterNodeTriggered( ProcessNodeTriggeredEvent event );
  void beforeNodeLeft( ProcessNodeLeftEvent event );
  void afterNodeLeft( ProcessNodeLeftEvent event );
  void beforeVariableChanged(ProcessVariableChangedEvent event);
  void afterVariableChanged(ProcessVariableChangedEvent event);

}

A note about before and after events: these events typically act like a stack, which means that any events that occur as a direct result of the previous event will occur between the before and the after of that event. For example, if a subsequent node is triggered as a result of leaving a node, the node triggered events will occur in between the beforeNodeLeft and afterNodeLeft events of the node that is left (as the triggering of the second node is a direct result of leaving the first node). Doing that allows us to derive cause relationships between events more easily. Similarly, all node triggered and node left events that are the direct result of starting a process will occur between the beforeProcessStarted and afterProcessStarted events. In general, if you just want to be notified when a particular event occurs, you should look at the before events only (as they occur immediately before the event actually occurs). When only looking at the after events, one might get the impression that the events are fired in the wrong order; this is because the after events are triggered as a stack (an after event will only fire once all events that were triggered as a result of the original event have already fired). After events should only be used if you want to make sure that all processing related to the event has ended (for example, when you want to be notified when the starting of a particular process instance has finished).

Also note that not all nodes always generate node triggered and/or node left events. Depending on the type of node, some nodes might only generate node left events, while others might only generate node triggered events. Catching intermediate events, for example, do not generate node triggered events (they only generate node left events, as they are not really triggered by another node but rather activated from outside). Similarly, throwing intermediate events do not generate node left events (they only generate node triggered events, as they are not really left, since they have no outgoing connection).
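
For example, a minimal sketch of a custom listener that logs node events, using the DefaultProcessEventListener convenience base class from the public API and registering it on an existing KieSession (variable name assumed):

  import org.kie.api.event.process.DefaultProcessEventListener;
  import org.kie.api.event.process.ProcessNodeLeftEvent;
  import org.kie.api.event.process.ProcessNodeTriggeredEvent;

  public class LoggingProcessEventListener extends DefaultProcessEventListener {

      @Override
      public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
          System.out.println("Entering node: " + event.getNodeInstance().getNodeName());
      }

      @Override
      public void afterNodeLeft(ProcessNodeLeftEvent event) {
          System.out.println("Left node: " + event.getNodeInstance().getNodeName());
      }
  }

  // register the listener before starting any process instances
  ksession.addEventListener(new LoggingProcessEventListener());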

jBPM out-of-the-box provides a listener that can be used to create an audit log (either to the console or to a file on the file system). This audit log contains all the different events that occurred at runtime so it’s easy to figure out what happened. Note that these loggers should only be used for debugging purposes. The following logger implementations are supported by default:

  1. Console logger: This logger writes out all the events to the console.

  2. File logger: This logger writes out all the events to a file using an XML representation. This log file might then be used in the IDE to generate a tree-based visualization of the events that occurred during execution.

  3. Threaded file logger: Because a file logger writes the events to disk only when closing the logger or when the number of events in the logger reaches a predefined level, it cannot be used when debugging processes at runtime. A threaded file logger writes the events to a file after a specified time interval, making it possible to use the logger to visualize the progress in realtime, while debugging processes.

The KieServices lets you add a KieRuntimeLogger to your session, as shown below. When creating a console logger, the KIE session for which the logger needs to be created must be passed as an argument. The file logger also requires the name of the log file to be created, and the threaded file logger requires the interval (in milliseconds) after which the events should be saved. You should always close the logger at the end of your application.

  import org.kie.api.KieServices;
  import org.kie.api.logger.KieRuntimeLogger;
  ...
  KieRuntimeLogger logger = KieServices.Factory.get().getLoggers().newFileLogger(ksession, "test");
  // add invocations to the jBPM engine here,
  // e.g. ksession.startProcess(processId);
  ...
  logger.close();
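
Similarly, a threaded file logger can be created by also passing the save interval in milliseconds (the interval below is just an example value):

  KieRuntimeLogger logger = KieServices.Factory.get().getLoggers()
          .newThreadedFileLogger(ksession, "test", 1000);
  // the logger now flushes events to the file roughly every second
  ...
  logger.close();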

The log file that is created by the file-based loggers contains an XML-based overview of all the events that occurred at runtime. It can be opened in Eclipse, using the Audit View in the Drools Eclipse plugin, where the events are visualized as a tree. Events that occur between the before and after event are shown as children of that event. The following screenshot shows a simple example, where a process is started, resulting in the activation of the Start node, an Action node and an End node, after which the process was completed.

AuditView

7.3.3. Correlation Keys

A common requirement when working with processes is the ability to assign a given process instance some sort of business identifier that can later be referenced without knowing the actual (generated) id of the process instance. To provide such capabilities, jBPM allows you to use a CorrelationKey that is composed of CorrelationProperties. A CorrelationKey can have a single property describing it (which is the case most of the time), but it can also be represented as a set of multiple properties.

Correlation capabilities are provided as part of the CorrelationAwareProcessRuntime interface, which exposes the following methods:

      /**
      * Start a new process instance.  The process (definition) that should
      * be used is referenced by the given process id.  Parameters can be passed
      * to the process instance (as name-value pairs), and these will be set
      * as variables of the process instance.
      *
      * @param processId  the id of the process that should be started
      * @param correlationKey custom correlation key that can be used to identify process instance
      * @param parameters  the process variables that should be set when starting the process instance
      * @return the ProcessInstance that represents the instance of the process that was started
      */
      ProcessInstance startProcess(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);

      /**
      * Creates a new process instance (but does not yet start it).  The process
      * (definition) that should be used is referenced by the given process id.
      * Parameters can be passed to the process instance (as name-value pairs),
      * and these will be set as variables of the process instance.  You should only
      * use this method if you need a reference to the process instance before actually
      * starting it.  Otherwise, use startProcess.
      *
      * @param processId  the id of the process that should be started
      * @param correlationKey custom correlation key that can be used to identify process instance
      * @param parameters  the process variables that should be set when creating the process instance
      * @return the ProcessInstance that represents the instance of the process that was created (but not yet started)
      */
      ProcessInstance createProcessInstance(String processId, CorrelationKey correlationKey, Map<String, Object> parameters);

      /**
      * Returns the process instance with the given correlationKey.  Note that only active process instances
      * will be returned.  If a process instance has been completed already, this method will return
      * null.
      *
      * @param correlationKey the custom correlation key assigned when process instance was created
      * @return the process instance with the given id or null if it cannot be found
      */
      ProcessInstance getProcessInstance(CorrelationKey correlationKey);

Correlation is usually used with long-running processes and thus requires persistence to be enabled in order to permanently store correlation information.
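
As an illustration, a minimal sketch of starting a process with a custom business key and retrieving it again later; the process id and key value are hypothetical, and the cast assumes a jBPM KieSession (which implements CorrelationAwareProcessRuntime):

  import org.kie.internal.KieInternalServices;
  import org.kie.internal.process.CorrelationAwareProcessRuntime;
  import org.kie.internal.process.CorrelationKey;
  import org.kie.internal.process.CorrelationKeyFactory;
  ...
  CorrelationKeyFactory keyFactory = KieInternalServices.Factory.get().newCorrelationKeyFactory();
  CorrelationKey businessKey = keyFactory.newCorrelationKey("order-12345");

  // start the process instance using the business key
  ProcessInstance processInstance = ((CorrelationAwareProcessRuntime) ksession)
          .startProcess("com.sample.MyProcess", businessKey, null);

  // later, look up the same (still active) process instance by its business key
  ProcessInstance sameInstance = ((CorrelationAwareProcessRuntime) ksession)
          .getProcessInstance(businessKey);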

7.3.4. Threads

In the following text, we will refer to two types of "multi-threading": logical and technical. Technical multi-threading is what happens when multiple threads or processes are started on a computer, for example by a Java or C program. Logical multi-threading is what we see in a BPM process after the process reaches a parallel gateway, for example. From a functional standpoint, the original process will then split into two processes that are executed in a parallel fashion.

Of course, the jBPM engine supports logical multi-threading: for example, processes that include a parallel gateway. We’ve chosen to implement logical multi-threading using one thread: a jBPM process that includes logical multi-threading will only be executed in one technical thread. The main reason for doing this is that multiple (technical) threads need to be able to communicate state information with each other if they are working on the same process. This requirement brings with it a number of complications. While it might seem that multi-threading would bring performance benefits with it, the extra logic needed to make sure the different threads work together well means that this is not guaranteed. There is also the extra overhead incurred because we need to avoid race conditions and deadlocks.

In general, the jBPM engine executes actions in serial. For example, when the jBPM engine encounters a script task in a process, it will synchronously execute that script and wait for it to complete before continuing execution. Similarly, if a process encounters a parallel gateway, it will sequentially trigger each of the outgoing branches, one after the other. This is possible since execution is almost always instantaneous, meaning that it is extremely fast and produces almost no overhead. As a result, the user will usually not even notice this. Similarly, action scripts in a process are also synchronously executed, and the jBPM engine will wait for them to finish before continuing the process. For example, doing a Thread.sleep(…​) as part of a script will not make the jBPM engine continue execution elsewhere but will block the jBPM engine thread during that period.

The same principle applies to service tasks. When a service task is reached in a process, the jBPM engine will also invoke the handler of this service synchronously. The jBPM engine will wait for the completeWorkItem(…​) method to return before continuing execution. It is important that your service handler executes your service asynchronously if its execution is not instantaneous.

An example of this would be a service task that invokes an external service. Since the delay in invoking this service remotely and waiting for the results might be too long, it might be a good idea to invoke this service asynchronously. This means that the handler will only invoke the service and will notify the jBPM engine later when the results are available. In the mean time, the jBPM engine then continues execution of the process.

Human tasks are a typical example of a service that needs to be invoked asynchronously, as we don’t want the jBPM engine to wait until a human actor has responded to the request. The human task handler will only create a new task (on the task list of the assigned actor) when the human task node is triggered. The jBPM engine will then be able to continue execution on the rest of the process (if necessary) and the handler will notify the jBPM engine asynchronously when the user has completed the task.
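
A minimal sketch of such an asynchronous handler is shown below; the executor and the external service call are placeholders, and the parameter and result names are hypothetical:

  import java.util.Collections;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import org.kie.api.runtime.process.WorkItem;
  import org.kie.api.runtime.process.WorkItemHandler;
  import org.kie.api.runtime.process.WorkItemManager;

  public class AsyncServiceTaskHandler implements WorkItemHandler {

      private final ExecutorService executor = Executors.newSingleThreadExecutor();

      @Override
      public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
          // submit the work and return immediately so the jBPM engine thread is not blocked
          executor.submit(() -> {
              Object result = callExternalService(workItem.getParameter("request")); // placeholder call
              // notify the jBPM engine asynchronously once the result is available
              manager.completeWorkItem(workItem.getId(), Collections.singletonMap("response", result));
          });
      }

      @Override
      public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
          // nothing to clean up in this sketch
      }

      private Object callExternalService(Object request) {
          return "result"; // placeholder for the actual remote invocation
      }
  }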

7.4. RuntimeManager

7.4.1. Overview

The RuntimeManager has been introduced to simplify and empower usage of the knowledge API, especially in the context of processes. It provides configurable strategies that control actual runtime execution (how KieSessions are provided) and by default provides the following:

  • Singleton - the runtime manager maintains a single KieSession regardless of the number of processes available

  • Per Request - the runtime manager delivers a new KieSession for every request

  • Per Process Instance - the runtime manager maintains a mapping between process instance and KieSession and always provides the same KieSession whenever working with a given process instance

Runtime Manager is primarily responsible for managing and delivering instances of RuntimeEngine to the caller. In turn, RuntimeEngine encapsulates two of the most important elements of the jBPM engine:

  • KieSession

  • TaskService

Both of these components are already configured to work with each other smoothly without additional configuration from the end user. There is no more need to register a human task handler or to keep track of whether it is connected to the service or not.

public interface RuntimeManager {

	/**
	 * Returns <code>RuntimeEngine</code> instance that is fully initialized:
	 * <ul>
	 * 	<li>KiseSession is created or loaded depending on the strategy</li>
	 * 	<li>TaskService is initialized and attached to ksession (via listener)</li>
	 * 	<li>WorkItemHandlers are initialized and registered on ksession</li>
	 * 	<li>EventListeners (process, agenda, working memory) are initialized and added to ksession</li>
	 * </ul>
	 * @param context the concrete implementation of the context that is supported by given <code>RuntimeManager</code>
	 * @return instance of the <code>RuntimeEngine</code>
	 */
    RuntimeEngine getRuntimeEngine(Context<?> context);

    /**
     * Unique identifier of the <code>RuntimeManager</code>
     * @return
     */
    String getIdentifier();

    /**
     * Disposes <code>RuntimeEngine</code> and notifies all listeners about that fact.
     * This method should always be used to dispose <code>RuntimeEngine</code> that is not needed
     * anymore. <br/>
     * ksession.dispose() shall never be used with RuntimeManager as it will break the internal
     * mechanisms of the manager responsible for clear and efficient disposal.<br/>
     * Dispose is not needed if <code>RuntimeEngine</code> was obtained within active JTA transaction,
     * this means that when getRuntimeEngine method was invoked during active JTA transaction then dispose of
     * the runtime engine will happen automatically on transaction completion.
     * @param runtime
     */
    void disposeRuntimeEngine(RuntimeEngine runtime);

    /**
     * Closes <code>RuntimeManager</code> and releases its resources. Shall always be called when
     * runtime manager is not needed any more. Otherwise it will still be active and operational.
     */
    void close();

}

The RuntimeEngine interface provides the most important methods to get access to the jBPM engine components:

public interface RuntimeEngine {

	/**
	 * Returns <code>KieSession</code> configured for this <code>RuntimeEngine</code>
	 * @return
	 */
    KieSession getKieSession();

    /**
	 * Returns <code>TaskService</code> configured for this <code>RuntimeEngine</code>
	 * @return
	 */
    TaskService getTaskService();
}

The RuntimeManager will ensure that, regardless of the strategy, it provides the same capabilities when it comes to initialization and configuration of the RuntimeEngine. That means:

  • KieSession will be loaded with same factories (either in memory or JPA based)

  • WorkItemHandlers will be registered on every KieSession (either loaded from db or newly created)

  • Event listeners (Process, Agenda, WorkingMemory) will be registered on every KieSession (either loaded from db or newly created)

  • TaskService will be configured with:

    • JTA transaction manager

    • same entity manager factory as for the KieSession

    • UserGroupCallback from environment

On the other hand, the RuntimeManager also takes care of disposal of the jBPM engine by providing dedicated methods to dispose of a RuntimeEngine when it is no longer needed, releasing any resources it might have acquired.

The RuntimeManager’s identifier is used as the "deploymentId" during runtime execution. For example, the identifier is persisted as the "deploymentId" of a Task when the Task is persisted. The Task’s deploymentId is used to associate the correct RuntimeManager when the Task is completed and its process instance is resumed. The deploymentId is also persisted as "externalId" in the history log tables. If you don’t specify an identifier on RuntimeManager creation, a default value is applied (e.g. "default-per-pinstance" for PerProcessInstanceRuntimeManager). That means your application uses the same deployment throughout its lifecycle. If you maintain multiple RuntimeManagers in your application, you need to specify their identifiers. For example, jbpm-services (DeploymentService) maintains multiple RuntimeManagers with identifiers based on the kjar’s GAV; the kie-workbench web application does so too because it depends on jbpm-services.
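
For example, a minimal sketch of creating a per-process-instance RuntimeManager with an explicit identifier (the environment variable refers to a RuntimeEnvironment built as shown later in this chapter, and the identifier value is just an example):

  import org.kie.api.runtime.manager.RuntimeManager;
  import org.kie.api.runtime.manager.RuntimeManagerFactory;
  ...
  RuntimeManager manager = RuntimeManagerFactory.Factory.get()
          .newPerProcessInstanceRuntimeManager(environment, "org.jbpm:my-project:1.0");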

7.4.2. Strategies

Singleton strategy - instructs the RuntimeManager to maintain a single instance of RuntimeEngine (and in turn a single instance of KieSession and TaskService). Access to the RuntimeEngine is synchronized and therefore thread safe, although it comes with a performance penalty due to the synchronization. This strategy is similar to what was available by default in jBPM version 5.x; it is considered the easiest strategy and the recommended one to start with.

It has the following characteristics that are important to evaluate while considering it for a given scenario:

  • small memory footprint - single instance of runtime engine and task service

  • simple and compact in design and usage

  • good fit for low to medium load on the jBPM engine due to synchronized access

  • due to single KieSession instance all state objects (such as facts) are directly visible to all process instances and vice versa

  • not contextual - meaning when retrieving instances of RuntimeEngine from singleton RuntimeManager Context instance is not important and usually EmptyContext.get() is used although null argument is acceptable as well

  • keeps track of the id of the KieSession used between RuntimeManager restarts to ensure it uses the same session - this id is stored as a serialized file on disk, in a temporary location that depends on the environment and can be one of the following:

    • value given by jbpm.data.dir system property

    • value given by jboss.server.data.dir system property

    • value given by java.io.tmpdir system property

A combination of the Singleton strategy and the EJB Timer Scheduler (the default in kie-server) has a limitation in that it may raise Hibernate issues under load; it is not recommended for production use.

Per request strategy - instructs the RuntimeManager to provide a new instance of RuntimeEngine for every request. The RuntimeManager considers one or more invocations within a single transaction to be a single request. It must return the same instance of RuntimeEngine within a single transaction to ensure correctness of state, as otherwise an operation done in one call would not be visible in the other. This is a sort of "stateless" strategy that provides only request-scoped state, and once the request is completed the RuntimeEngine is permanently destroyed - the KieSession information is removed from the database in case persistence was used.

It has the following characteristics:

  • completely isolated jBPM engine and task service operations for every request

  • completely stateless, storing facts makes sense only for the duration of the request

  • good fit for high load, stateless processes (no facts or timers involved that shall be preserved between requests)

  • KieSession is only available during life time of request and at the end is destroyed

  • not contextual - meaning when retrieving instances of RuntimeEngine from per request RuntimeManager Context instance is not important and usually EmptyContext.get() is used although null argument is acceptable as well

Per process instance strategy - instructs the RuntimeManager to maintain a strict relationship between KieSession and ProcessInstance. That means the KieSession will be available as long as the ProcessInstance it belongs to is active. This strategy provides the most flexible approach for using advanced capabilities of the jBPM engine, like rule evaluation in isolation (for a given process instance only), maximum performance and reduction of potential bottlenecks introduced by synchronization; at the same time it reduces the number of KieSessions to the actual number of process instances rather than the number of requests (in contrast to the per request strategy).

It has the following characteristics:

  • most advanced strategy to provide isolation to given process instance only

  • maintains strict relationship between KieSession and ProcessInstance to ensure it will always deliver same KieSession for given ProcessInstance

  • merges life cycle of KieSession with ProcessInstance making both to be disposed on process instance completion (complete or abort)

  • allows to maintain data (such as facts, timers) in scope of process instance - only process instance will have access to that data

  • introduces bit of overhead due to need to look up and load KieSession for process instance

  • validates usage of KieSession so it cannot be (ab)used for other process instances, in such case an exception is thrown

  • is contextual - accepts the following context instances (see the sketch after this list):

    • EmptyContext or null - when starting process instance as there is no process instance id available yet

    • ProcessInstanceIdContext - used after process instance was created

    • CorrelationKeyContext - used as an alternative to ProcessInstanceIdContext to use custom (business) key instead of process instance id
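
A minimal sketch of using the contextual per-process-instance strategy (the process id is hypothetical): start with an empty context, then come back to the same process instance via ProcessInstanceIdContext.

  import org.kie.internal.runtime.manager.context.EmptyContext;
  import org.kie.internal.runtime.manager.context.ProcessInstanceIdContext;
  ...
  // no process instance id exists yet, so use an empty context to start
  RuntimeEngine runtime = manager.getRuntimeEngine(EmptyContext.get());
  ProcessInstance processInstance = runtime.getKieSession().startProcess("com.sample.MyProcess");
  manager.disposeRuntimeEngine(runtime);

  // later, retrieve the KieSession bound to that process instance by its id
  RuntimeEngine sameRuntime = manager.getRuntimeEngine(
          ProcessInstanceIdContext.get(processInstance.getId()));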

7.4.3. Usage

Regular usage scenario for RuntimeManager is:

  • At application startup

    • build RuntimeManager and keep it for entire life time of the application, it’s thread safe and can be (or even should be) accessed concurrently

  • At request

    • get RuntimeEngine from RuntimeManager using proper context instance dedicated to strategy of RuntimeManager

    • get KieSession and/or TaskService from RuntimeEngine

    • perform operations on KieSession and/or TaskService such as startProcess, completeTask, etc

    • once done with processing dispose RuntimeEngine using RuntimeManager.disposeRuntimeEngine method

  • At application shutdown

    • close RuntimeManager

When the RuntimeEngine is obtained from the RuntimeManager within an active JTA transaction, there is no need to dispose of the RuntimeEngine at the end, as the RuntimeManager will automatically dispose of it on transaction completion (regardless of whether the completion status is commit or rollback).

7.4.3.1. Example

Here is how you can build RuntimeManager and get RuntimeEngine (that encapsulates KieSession and TaskService) from it:

    // first configure environment that will be used by RuntimeManager
    RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultInMemoryBuilder()
    .addAsset(ResourceFactory.newClassPathResource("BPMN2-ScriptTask.bpmn2"), ResourceType.BPMN2)
    .get();

    // next create RuntimeManager - in this case singleton strategy is chosen
    RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);

    // then get RuntimeEngine out of manager - using empty context as singleton does not keep track
    // of runtime engine as there is only one
    RuntimeEngine runtimeEngine = manager.getRuntimeEngine(EmptyContext.get());

    // get KieSession from the runtime engine - already initialized with all handlers, listeners, etc. that were configured
    // on the environment
    KieSession ksession = runtimeEngine.getKieSession();

    // add invocations to the jBPM engine here,
    // e.g. ksession.startProcess(processId);

    // and last dispose the runtime engine
    manager.disposeRuntimeEngine(runtimeEngine);

This example shows the simplest (minimal) way of using RuntimeManager and RuntimeEngine, although it already conveys a few quite valuable pieces of information:

  • KieSession will be in memory only - by using newDefaultInMemoryBuilder

  • there will be single process available for execution - by adding it as an asset

  • TaskService will be configured and attached to KieSession via LocalHTWorkItemHandler to support user task capabilities within processes

7.4.4. Configuration

The complexity of knowing when to create and dispose of sessions, register handlers, etc. is taken away from the end user and moved to the runtime manager, which knows when and how to perform such operations. It still allows fine-grained control over this process by providing comprehensive configuration via the RuntimeEnvironment.

  public interface RuntimeEnvironment {

	/**
	 * Returns <code>KieBase</code> that shall be used by the manager
	 * @return
	 */
    KieBase getKieBase();

    /**
     * KieSession environment that shall be used to create instances of <code>KieSession</code>
     * @return
     */
    Environment getEnvironment();

    /**
     * KieSession configuration that shall be used to create instances of <code>KieSession</code>
     * @return
     */
    KieSessionConfiguration getConfiguration();

    /**
     * Indicates if persistence shall be used for the KieSession instances
     * @return
     */
    boolean usePersistence();

    /**
     * Delivers concrete implementation of <code>RegisterableItemsFactory</code> to obtain handlers and listeners
     * that shall be registered on instances of <code>KieSession</code>
     * @return
     */
    RegisterableItemsFactory getRegisterableItemsFactory();

    /**
     * Delivers concrete implementation of <code>UserGroupCallback</code> that shall be registered on instances
     * of <code>TaskService</code> for managing users and groups.
     * @return
     */
    UserGroupCallback getUserGroupCallback();

    /**
     * Delivers custom class loader that shall be used by the jBPM engine and task service instances
     * @return
     */
    ClassLoader getClassLoader();

    /**
     * Closes the environment allowing to close all depending components such as ksession factories, etc
     */
    void close();
  }

7.4.4.1. Building RuntimeEnvironment

While the RuntimeEnvironment interface mostly provides access to the data kept as part of the environment and used by the RuntimeManager, users should take advantage of the builder-style class that provides a fluent API to configure a RuntimeEnvironment with predefined settings.

public interface RuntimeEnvironmentBuilder {

	public RuntimeEnvironmentBuilder persistence(boolean persistenceEnabled);

	public RuntimeEnvironmentBuilder entityManagerFactory(Object emf);

	public RuntimeEnvironmentBuilder addAsset(Resource asset, ResourceType type);

	public RuntimeEnvironmentBuilder addEnvironmentEntry(String name, Object value);

	public RuntimeEnvironmentBuilder addConfiguration(String name, String value);

	public RuntimeEnvironmentBuilder knowledgeBase(KieBase kbase);

	public RuntimeEnvironmentBuilder userGroupCallback(UserGroupCallback callback);

	public RuntimeEnvironmentBuilder registerableItemsFactory(RegisterableItemsFactory factory);

	public RuntimeEnvironment get();

	public RuntimeEnvironmentBuilder classLoader(ClassLoader cl);

	public RuntimeEnvironmentBuilder schedulerService(Object globalScheduler);

Instances of the RuntimeEnvironmentBuilder can be obtained via the RuntimeEnvironmentBuilderFactory, which provides preconfigured sets of builders to simplify and help users build the environment for the RuntimeManager.

public interface RuntimeEnvironmentBuilderFactory {

	/**
     * Provides completely empty <code>RuntimeEnvironmentBuilder</code> instance that allows to manually
     * set all required components instead of relying on any defaults.
     * @return new instance of <code>RuntimeEnvironmentBuilder</code>
     */
    public RuntimeEnvironmentBuilder newEmptyBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * but it does not have persistence for jBPM engine configured so it will only store process instances in memory
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultInMemoryBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
     * @param groupId group id of kjar
     * @param artifactId artifact id of kjar
     * @param version version number of kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version);

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
     * @param groupId group id of kjar
     * @param artifactId artifact id of kjar
     * @param version version number of kjar
     * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
     * @param ksessionName name of the ksession define in kmodule.xml stored in kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(String groupId, String artifactId, String version, String kbaseName, String ksessionName);

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
     * @param releaseId <code>ReleaseId</code> that described the kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId);

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * This one is tailored to works smoothly with kjars as the notion of kbase and ksessions
     * @param releaseId <code>ReleaseId</code> that described the kjar
     * @param kbaseName name of the kbase defined in kmodule.xml stored in kjar
     * @param ksessionName name of the ksession define in kmodule.xml stored in kjar
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newDefaultBuilder(ReleaseId releaseId, String kbaseName, String ksessionName);

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * It relies on KieClasspathContainer that requires to have kmodule.xml present in META-INF folder which
     * defines the kjar itself.
     * Expects to use default kbase and ksession from kmodule.
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder();

    /**
     * Provides default configuration of <code>RuntimeEnvironmentBuilder</code> that is based on:
     * <ul>
     * 	<li>DefaultRuntimeEnvironment</li>
     * </ul>
     * It relies on KieClasspathContainer that requires to have kmodule.xml present in META-INF folder which
     * defines the kjar itself.
     * @param kbaseName name of the kbase defined in kmodule.xml
     * @param ksessionName name of the ksession defined in kmodule.xml
     * @return new instance of <code>RuntimeEnvironmentBuilder</code> that is already preconfigured with defaults
     *
     * @see DefaultRuntimeEnvironment
     */
    public RuntimeEnvironmentBuilder newClasspathKmoduleDefaultBuilder(String kbaseName, String ksessionName);

Besides the KieSession, the RuntimeManager also provides access to a TaskService as an integrated component of the RuntimeEngine, which is always configured and ready for communication between the jBPM engine and the task service (see the sketch after the list below).

Since the default builder is used, it already comes with a predefined set of elements, consisting of:

  • Persistence unit name will be set to org.jbpm.persistence.jpa (for both jBPM engine and task service)

  • Human Task handler will be automatically registered on KieSession

  • JPA based history log event listener will be automatically registered on KieSession

  • Event listener to trigger rule task evaluation (fireAllRules) will be automatically registered on KieSession
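
For illustration, here is a minimal sketch of bootstrapping such a RuntimeManager and reaching both the KieSession and the TaskService. It assumes a kjar on the classpath with a default kbase and ksession in kmodule.xml and the org.jbpm.persistence.jpa persistence unit configured; the process id and user id are hypothetical:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.api.task.TaskService;
import org.kie.internal.runtime.manager.context.EmptyContext;

public class RuntimeManagerBootstrap {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

        // default builder based on kmodule.xml found in META-INF on the classpath
        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newClasspathKmoduleDefaultBuilder()
                .entityManagerFactory(emf)
                .get();

        RuntimeManager manager = RuntimeManagerFactory.Factory.get()
                .newSingletonRuntimeManager(environment);

        // the RuntimeEngine exposes both the KieSession and the TaskService
        RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get());
        KieSession ksession = engine.getKieSession();
        TaskService taskService = engine.getTaskService();

        ksession.startProcess("com.sample.HelloWorld"); // hypothetical process id
        taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");

        manager.disposeRuntimeEngine(engine);
        manager.close();
    }
}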

7.4.4.2. Registering handlers and listeners

To extend this set with your own handlers or listeners, a dedicated mechanism is provided in the form of the RegisterableItemsFactory interface:

	/**
	 * Returns new instances of <code>WorkItemHandler</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case handler need to make use of it internally
	 * @return map of handlers to be registered - in case of no handlers empty map shall be returned.
	 */
    Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime);

    /**
	 * Returns new instances of <code>ProcessEventListener</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
	 * @return list of listeners to be registered - in case of no listeners empty list shall be returned.
	 */
    List<ProcessEventListener> getProcessEventListeners(RuntimeEngine runtime);

    /**
	 * Returns new instances of <code>AgendaEventListener</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
	 * @return list of listeners to be registered - in case of no listeners empty list shall be returned.
	 */
    List<AgendaEventListener> getAgendaEventListeners(RuntimeEngine runtime);

    /**
	 * Returns new instances of <code>WorkingMemoryEventListener</code> that will be registered on <code>RuntimeEngine</code>
	 * @param runtime provides <code>RuntimeEngine</code> in case listeners need to make use of it internally
	 * @return list of listeners to be registered - in case of no listeners empty list shall be returned.
	 */
    List<WorkingMemoryEventListener> getWorkingMemoryEventListeners(RuntimeEngine runtime);

A best practice is to extend one of the implementations that come out of the box and add your own items. Extending is not always needed, as the default implementations of RegisterableItemsFactory already provide a way to define custom handlers and listeners. The following implementations might be useful (ordered by their inheritance hierarchy); a sketch of a custom extension follows this list:

  • org.jbpm.runtime.manager.impl.SimpleRegisterableItemsFactory - the simplest possible implementation; it comes empty and uses reflection to produce instances of handlers and listeners from given class names

  • org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory - an extension of the Simple implementation that introduces the defaults described above while still providing the same capabilities as the Simple implementation

  • org.jbpm.runtime.manager.impl.KModuleRegisterableItemsFactory - an extension of the Default implementation that adds kmodule-specific capabilities while still providing the same capabilities as the Simple implementation

  • org.jbpm.runtime.manager.impl.cdi.InjectableRegisterableItemsFactory - an extension of the Default implementation tailored for CDI environments; it provides a CDI-style approach to finding handlers and listeners via producers
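
For illustration, a minimal sketch of such an extension could look as follows; NotificationWorkItemHandler and the "Notification" work item name are hypothetical, and the resulting factory is passed to the RuntimeEnvironmentBuilder via its registerableItemsFactory method:

import java.util.Map;

import org.jbpm.runtime.manager.impl.DefaultRegisterableItemsFactory;
import org.kie.api.runtime.manager.RuntimeEngine;
import org.kie.api.runtime.process.WorkItemHandler;

public class CustomRegisterableItemsFactory extends DefaultRegisterableItemsFactory {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(RuntimeEngine runtime) {
        // keep the defaults (human task handler, etc.) and add custom handlers on top
        Map<String, WorkItemHandler> handlers = super.getWorkItemHandlers(runtime);
        handlers.put("Notification", new NotificationWorkItemHandler()); // hypothetical custom handler
        return handlers;
    }
}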

Alternatively, simple work item handlers (stateless or requiring only the KieSession) can be registered in the well-known way: defined in a CustomWorkItemHandlers.conf file placed on the classpath. To use this approach, do the following:

  • create a file "drools.session.conf" inside META-INF at the root of the classpath; for web applications this is WEB-INF/classes/META-INF

  • add the following line to the drools.session.conf file: drools.workItemHandlers = CustomWorkItemHandlers.conf

  • create a file "CustomWorkItemHandlers.conf" inside META-INF at the root of the classpath; for web applications this is WEB-INF/classes/META-INF

  • define your custom work item handlers in MVEL style inside CustomWorkItemHandlers.conf

    [
      "Log": new org.jbpm.process.instance.impl.demo.SystemOutWorkItemHandler(),
      "WebService": new org.jbpm.process.workitem.webservice.WebServiceWorkItemHandler(ksession),
      "Rest": new org.jbpm.process.workitem.rest.RESTWorkItemHandler(),
      "Service Task" : new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession)
    ]

And that’s it: all these work item handlers will now be registered for any KieSession created by the application, regardless of whether it uses RuntimeManager or not.

Registering handlers and listeners in a CDI environment

When using RuntimeManager in a CDI environment, dedicated interfaces can be used to provide custom WorkItemHandlers and EventListeners to the RuntimeEngine.

public interface WorkItemHandlerProducer {

    /**
     * Returns map of (key = work item name, value work item handler instance) of work items
     * to be registered on KieSession
     * <br/>
     * Parameters that might be given are as follows:
     * <ul>
     *  <li>ksession</li>
     *  <li>taskService</li>
     *  <li>runtimeManager</li>
     * </ul>
     *
     * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
     * and provide valid instances for given owner
     * @param params - owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
     * @return map of work item handler instances (recommendation is to always return new instances when this method is invoked)
     */
    Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params);
}
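
A minimal sketch of such a producer, reusing the RESTWorkItemHandler shown earlier, could look like this (the WorkItemHandlerProducer package is assumed for jBPM 6/7 and should be verified against the version in use):

import java.util.HashMap;
import java.util.Map;

import javax.enterprise.context.ApplicationScoped;

import org.jbpm.process.workitem.rest.RESTWorkItemHandler;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.internal.runtime.manager.WorkItemHandlerProducer; // package assumed

@ApplicationScoped
public class CustomWorkItemHandlerProducer implements WorkItemHandlerProducer {

    @Override
    public Map<String, WorkItemHandler> getWorkItemHandlers(String identifier, Map<String, Object> params) {
        // return new instances on every invocation, as recommended by the javadoc above
        Map<String, WorkItemHandler> handlers = new HashMap<String, WorkItemHandler>();
        handlers.put("Rest", new RESTWorkItemHandler());
        return handlers;
    }
}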

Event listener producers must be annotated with the proper qualifier to indicate which type of listeners they provide, so pick one of the following:

  • @Process - for ProcessEventListener

  • @Agenda - for AgendaEventListener

  • @WorkingMemory - for WorkingMemoryEventListener

public interface EventListenerProducer<T> {

    /**
     * Returns list of instances for given (T) type of listeners
     * <br/>
     * Parameters that might be given are as follows:
     * <ul>
     *  <li>ksession</li>
     *  <li>taskService</li>
     *  <li>runtimeManager</li>
     * </ul>
     * @param identifier - identifier of the owner - usually RuntimeManager that allows the producer to filter out
     * and provide valid instances for given owner
     * @param params - owner might provide some parameters, usually KieSession, TaskService, RuntimeManager instances
     * @return list of listener instances (recommendation is to always return new instances when this method is invoked)
     */
    List<T> getEventListeners(String identifier, Map<String, Object>  params);
}

Implementations of these interfaces must be packaged as a bean archive (including beans.xml inside META-INF) and placed on the application classpath (e.g. WEB-INF/lib for a web application). That is enough for the CDI-based RuntimeManager to discover them and register them on every KieSession that is created or loaded from the data store.

Some parameters are provided to the producers to allow handlers/listeners to be stateful and to do more advanced things with the jBPM engine, such as signaling the jBPM engine or a process instance in case of an error. For that purpose, the following components are provided:

  • KieSession

  • TaskService

  • RuntimeManager

Whenever there is a need to interact with the jBPM engine or task service from within a handler or listener, the recommended approach is to use the RuntimeManager and retrieve the RuntimeEngine (and then KieSession and/or TaskService) from it, as that ensures the state is managed properly according to the runtime strategy.

In addition, filtering can be applied based on the identifier (given as an argument to the methods) to decide whether a given RuntimeManager should receive the handlers/listeners or not.

7.5. Services

On top of the RuntimeManager API, a set of high-level services has been provided since jBPM version 6.2. These services are meant to be the easiest way to embed (j)BPM capabilities into a custom application. They are partitioned into several modules to ease their adoption in various environments:

  • jbpm-services-api

    contains only api classes and interfaces

  • jbpm-kie-services

    rewritten code implementation of services api - pure java, no framework dependencies

  • jbpm-services-cdi

    CDI wrapper on top of core services implementation

  • jbpm-services-ejb-api

    extension to services api for ejb needs

  • jbpm-services-ejb-impl

    EJB wrappers on top of core services implementation

  • jbpm-services-ejb-timer

    scheduler service based on EJB TimerService to support time based operations e.g. timer events, deadlines, etc

  • jbpm-services-ejb-client

    EJB remote client implementation - currently only for JBoss

Service modules are grouped with their framework dependencies, so developers are free to choose the one that suits their environment and use only that.

7.5.1. Deployment Service

As the name suggests, its primary responsibility is to deploy (and undeploy) units. A deployment unit is a kjar that brings in business assets (such as processes, rules, forms, and data models) for execution. The deployment service also allows you to query it to get hold of the available deployment units and even their RuntimeManager instances.

Note that the EJB remote client does not expose the RuntimeManager, as it would not make any sense on the client side (after serialization).

So a typical use case for this service is to provide dynamic behavior in your system, so that multiple kjars can be active at the same time and executed simultaneously.

// create deployment unit by giving GAV
DeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION);
// deploy
deploymentService.deploy(deploymentUnit);
// retrieve deployed unit
DeployedUnit deployed = deploymentService.getDeployedUnit(deploymentUnit.getIdentifier());
// get runtime manager
RuntimeManager manager = deployed.getRuntimeManager();

The complete DeploymentService interface is as follows:

public interface DeploymentService {

    void deploy(DeploymentUnit unit);

    void undeploy(DeploymentUnit unit);

    RuntimeManager getRuntimeManager(String deploymentUnitId);

    DeployedUnit getDeployedUnit(String deploymentUnitId);

    Collection<DeployedUnit> getDeployedUnits();

    void activate(String deploymentId);

    void deactivate(String deploymentId);

    boolean isDeployed(String deploymentUnitId);
}

7.5.2. Definition Service

Upon deployment, every process definition is scanned using the definition service, which parses the process and extracts valuable information out of it. This information can provide valuable input to the system to inform users about what is expected. The definition service provides information about:

  • process definition - id, name, description

  • process variables - name and type

  • reusable subprocesses used in the process (if any)

  • service tasks (domain specific activities)

  • user tasks including assignment information

  • task data input and output information

So the definition service can be seen as a supporting service that provides quite a bit of information about the process definition, extracted directly from BPMN2.

String processId = "org.jbpm.writedocument";

Collection<UserTaskDefinition> processTasks =
bpmn2Service.getTasksDefinitions(deploymentUnit.getIdentifier(), processId);

Map<String, String> processData =
bpmn2Service.getProcessVariables(deploymentUnit.getIdentifier(), processId);

Map<String, String> taskInputMappings =
bpmn2Service.getTaskInputMappings(deploymentUnit.getIdentifier(), processId, "Write a Document" );

While it is usually used in combination with other services (like the deployment service), it can also be used standalone to get details about a process definition that does not come from a kjar. This can be achieved by using the buildProcessDefinition method of the definition service.
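
A minimal sketch of that standalone usage, assuming the BPMN2 XML has already been read into a String called bpmn2Content, could look like this:

// null deployment id and class loader fall back to defaults; caching is disabled here
ProcessDefinition definition = bpmn2Service.buildProcessDefinition(null, bpmn2Content, null, false);

System.out.println(definition.getId() + " - " + definition.getName());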

public interface DefinitionService {

    ProcessDefinition buildProcessDefinition(String deploymentId, String bpmn2Content,
			ClassLoader classLoader, boolean cache) throws IllegalArgumentException;

    ProcessDefinition getProcessDefinition(String deploymentId, String processId);

    Collection<String> getReusableSubProcesses(String deploymentId, String processId);

    Map<String, String> getProcessVariables(String deploymentId, String processId);

    Map<String, String> getServiceTasks(String deploymentId, String processId);

    Map<String, Collection<String>> getAssociatedEntities(String deploymentId, String processId);

    Collection<UserTaskDefinition> getTasksDefinitions(String deploymentId, String processId);

    Map<String, String> getTaskInputMappings(String deploymentId, String processId, String taskName);

    Map<String, String> getTaskOutputMappings(String deploymentId, String processId, String taskName);

}

7.5.3. Process Service

The process service is usually the one of most interest. Once the deployment and definition services have been used to feed the system with something that can be executed, the process service provides access to the execution environment and allows you to:

  • start a new process instance

  • work with an existing one - signal it, get its details, get its variables, etc.

  • work with work items

At the same time, the process service is a command executor, so it allows you to execute commands (essentially on the ksession) to extend its capabilities.

It is important to note that the process service is focused on runtime operations, so use it whenever there is a need to alter a process instance (signal it, change variables, etc.) and not for read operations such as showing available process instances by looping through a given list and invoking the getProcessInstance method. For that there is a dedicated runtime data service, described below.

An example of how to deploy and run a process:

KModuleDeploymentUnit deploymentUnit = new KModuleDeploymentUnit(GROUP_ID, ARTIFACT_ID, VERSION);

deploymentService.deploy(deploymentUnit);

long processInstanceId = processService.startProcess(deploymentUnit.getIdentifier(), "customtask");

ProcessInstance pi = processService.getProcessInstance(processInstanceId);

As you can see, startProcess expects the deploymentId as its first argument. This is extremely powerful, as it enables the service to easily work with various deployments, even with the same processes coming from different kjar versions.

public interface ProcessService {

    Long startProcess(String deploymentId, String processId);

    Long startProcess(String deploymentId, String processId, Map<String, Object> params);

    void abortProcessInstance(Long processInstanceId);

    void abortProcessInstances(List<Long> processInstanceIds);

    void signalProcessInstance(Long processInstanceId, String signalName, Object event);

    void signalProcessInstances(List<Long> processInstanceIds, String signalName, Object event);

    ProcessInstance getProcessInstance(Long processInstanceId);

    void setProcessVariable(Long processInstanceId, String variableId, Object value);

    void setProcessVariables(Long processInstanceId, Map<String, Object> variables);

    Object getProcessInstanceVariable(Long processInstanceId, String variableName);

    Map<String, Object> getProcessInstanceVariables(Long processInstanceId);

    Collection<String> getAvailableSignals(Long processInstanceId);

    void completeWorkItem(Long id, Map<String, Object> results);

    void abortWorkItem(Long id);

    WorkItem getWorkItem(Long id);

    List<WorkItem> getWorkItemByProcessInstance(Long processInstanceId);

    public <T> T execute(String deploymentId, Command<T> command);

    public <T> T execute(String deploymentId, Context<?> context, Command<T> command);

}

7.5.4. Runtime Data Service

The runtime data service, as the name suggests, deals with everything that refers to runtime information:

  • started process instances

  • executed node instances

  • and more

Use this service as the main source of information whenever building list-based UIs - to show process definitions, process instances, tasks for a given user, etc. This service was designed to be as efficient as possible while still providing all required information.

Some examples:

  • get all process definitions

    Collection<ProcessDefinition> definitions = runtimeDataService.getProcesses(new QueryContext());
  • get active process instances

    Collection<ProcessInstanceDesc> instances = runtimeDataService.getProcessInstances(new QueryContext());
  • get active nodes for given process instance

    Collection<NodeInstanceDesc> instances = runtimeDataService.getProcessInstanceHistoryActive(processInstanceId, new QueryContext());
  • get tasks assigned to john

    List<TaskSummary> taskSummaries = runtimeDataService.getTasksAssignedAsPotentialOwner("john", new QueryFilter(0, 10));

There are two important arguments that the runtime data service operations support:

  • QueryContext

  • QueryFilter - extension of QueryContext

These provide capabilities for efficient result set management, such as pagination, sorting and ordering (QueryContext). Moreover, additional filtering can be applied to task queries (QueryFilter) to provide more advanced capabilities when searching for user tasks.

public interface RuntimeDataService {

    // Process instance information

    Collection<ProcessInstanceDesc> getProcessInstances(QueryContext queryContext);

    Collection<ProcessInstanceDesc> getProcessInstances(List<Integer> states, String initiator, QueryContext queryContext);

    Collection<ProcessInstanceDesc> getProcessInstancesByProcessId(List<Integer> states, String processId, String initiator, QueryContext queryContext);

    Collection<ProcessInstanceDesc> getProcessInstancesByProcessName(List<Integer> states, String processName, String initiator, QueryContext queryContext);

    Collection<ProcessInstanceDesc> getProcessInstancesByDeploymentId(String deploymentId, List<Integer> states, QueryContext queryContext);

    ProcessInstanceDesc getProcessInstanceById(long processInstanceId);

    Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, QueryContext queryContext);

    Collection<ProcessInstanceDesc> getProcessInstancesByProcessDefinition(String processDefId, List<Integer> states, QueryContext queryContext);


    // Node and Variable instance information

    NodeInstanceDesc getNodeInstanceForWorkItem(Long workItemId);

    Collection<NodeInstanceDesc> getProcessInstanceHistoryActive(long processInstanceId, QueryContext queryContext);

    Collection<NodeInstanceDesc> getProcessInstanceHistoryCompleted(long processInstanceId, QueryContext queryContext);

    Collection<NodeInstanceDesc> getProcessInstanceFullHistory(long processInstanceId, QueryContext queryContext);

    Collection<NodeInstanceDesc> getProcessInstanceFullHistoryByType(long processInstanceId, EntryType type, QueryContext queryContext);

    Collection<VariableDesc> getVariablesCurrentState(long processInstanceId);

    Collection<VariableDesc> getVariableHistory(long processInstanceId, String variableId, QueryContext queryContext);


    // Process information

    Collection<ProcessDefinition> getProcessesByDeploymentId(String deploymentId, QueryContext queryContext);

    Collection<ProcessDefinition> getProcessesByFilter(String filter, QueryContext queryContext);

    Collection<ProcessDefinition> getProcesses(QueryContext queryContext);

    Collection<String> getProcessIds(String deploymentId, QueryContext queryContext);

    ProcessDefinition getProcessById(String processId);

    ProcessDefinition getProcessesByDeploymentIdProcessId(String deploymentId, String processId);

	// user task query operations

    UserTaskInstanceDesc getTaskByWorkItemId(Long workItemId);

    UserTaskInstanceDesc getTaskById(Long taskId);

    List<TaskSummary> getTasksAssignedAsBusinessAdministrator(String userId, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsBusinessAdministratorByStatus(String userId, List<Status> statuses, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwnerByStatus(String userId, List<Status> status, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwner(String userId, List<String> groupIds, List<Status> status, QueryFilter filter);

    List<TaskSummary> getTasksAssignedAsPotentialOwnerByExpirationDateOptional(String userId, List<Status> status, Date from, QueryFilter filter);

    List<TaskSummary> getTasksOwnedByExpirationDateOptional(String userId, List<Status> strStatuses, Date from, QueryFilter filter);

    List<TaskSummary> getTasksOwned(String userId, QueryFilter filter);

    List<TaskSummary> getTasksOwnedByStatus(String userId, List<Status> status, QueryFilter filter);

    List<Long> getTasksByProcessInstanceId(Long processInstanceId);

    List<TaskSummary> getTasksByStatusByProcessInstanceId(Long processInstanceId, List<Status> status, QueryFilter filter);

    List<AuditTask> getAllAuditTask(String userId, QueryFilter filter);

}

7.5.5. User Task Service

The user task service covers the complete life cycle of an individual task so it can be managed from start to end. It explicitly excludes queries in order to provide scoped execution, and moves all query operations into the runtime data service. Besides lifecycle operations, the user task service allows:

  • modification of selected properties

  • access to task variables

  • access to task attachments

  • access to task comments

On top of that, the user task service is also a command executor, which allows you to execute custom task commands.

A complete example of starting a process and completing a user task using the services:

long processInstanceId =
processService.startProcess(deployUnit.getIdentifier(), "org.jbpm.writedocument");

List<Long> taskIds =
runtimeDataService.getTasksByProcessInstanceId(processInstanceId);

Long taskId = taskIds.get(0);

userTaskService.start(taskId, "john");
UserTaskInstanceDesc task = runtimeDataService.getTaskById(taskId);

Map<String, Object> results = new HashMap<String, Object>();
results.put("Result", "some document data");
userTaskService.complete(taskId, "john", results);

The most important thing when working with the services is that there is no longer a need to create your own implementation of a process service that simply wraps runtime manager, runtime engine and ksession usage. The services make use of RuntimeManager API best practices and thus eliminate various risks when working with that API.

7.5.6. Quartz-based Timer Service

jBPM provides a cluster-ready timer service via Quartz, allowing you to dispose of or load your KIE session at any time. This service can be used to manage how long a KIE session should be active so that each timer fires appropriately.

A base Quartz configuration file for a clustered environment is provided as an example below:

#============================================================================
# Configure Main Scheduler Properties
#============================================================================

org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

#============================================================================
# Configure ThreadPool
#============================================================================

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5

#============================================================================
# Configure JobStore
#============================================================================

org.quartz.jobStore.misfireThreshold = 60000

org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties=false
org.quartz.jobStore.dataSource=managedDS
org.quartz.jobStore.nonManagedTXDataSource=nonManagedDS
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval = 20000

#============================================================================
# TODO: Configure Datasources
#============================================================================
#org.quartz.dataSource.managedDS.jndiURL=
#org.quartz.dataSource.nonManagedDS.jndiURL=

For more information on configuring a Quartz scheduler, please see the documentation for the 1.8.5 distribution archive.
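
To plug the Quartz scheduler into a RuntimeManager-based application, a global scheduler service can be handed to the RuntimeEnvironmentBuilder. The following is a minimal sketch under these assumptions: the configuration file above is stored at an arbitrary path, org.jbpm.process.core.timer.impl.QuartzSchedulerService from jbpm-runtime-manager is on the classpath, and the builder exposes a schedulerService method (verify both against the jBPM version in use):

// point Quartz at the configuration file shown above (path is an assumption)
System.setProperty("org.quartz.properties", "/opt/jbpm/quartz.properties");

GlobalSchedulerService quartzService = new QuartzSchedulerService();

RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
        .newClasspathKmoduleDefaultBuilder()
        .entityManagerFactory(emf)
        .schedulerService(quartzService)
        .get();

RuntimeManager manager = RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);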

7.5.7. QueryService

QueryService provides advanced search capabilities based on Dashbuilder DataSets. The concept behind it is that users are given control over how to retrieve data from the underlying data store. This includes complex joins with external tables such as JPA entity tables, custom system database tables, etc.

QueryService is built around two parts:

  • Management operations

    • register query definition

    • replace query definition

    • unregister (remove) query definition

    • get query definition

    • get all registered query definitions

  • Runtime operations

    • query - with two flavors

      • simple based on QueryParam as filter provider

      • advanced based on QueryParamBuilder as filter provider

Dashbuilder DataSets provide support for multiple data sources (CSV, SQL, Elasticsearch, etc.) while jBPM - since its backend is RDBMS based - focuses on SQL-based data sets. So the jBPM QueryService is a subset of the Dashbuilder DataSets capabilities, allowing efficient queries with a simple API.

Terminology

  • QueryDefinition - represents the definition of the data set, which consists of a unique name, an SQL expression (the query) and the source - the JNDI name of the data source to use when performing queries

  • QueryParam - a basic structure that represents an individual query parameter (a condition), consisting of: column name, operator, expected value(s)

  • QueryResultMapper - responsible for mapping raw data set data (rows and columns) into an object representation

  • QueryParamBuilder - responsible for building query filters that are applied to the query definition for a given query invocation

While QueryDefinition and QueryParam are rather straightforward, QueryParamBuilder and QueryResultMapper are a bit more advanced and require slightly more attention to be used the right way and to take full advantage of their capabilities.

QueryResultMapper

The mapper, as the name suggests, maps data taken out of the database (from the data set) into an object representation, much like ORM providers such as Hibernate map tables to entities. Obviously there might be many object types that could be used for representing data set results, so it’s almost impossible to provide them all out of the box. Mappers are rather powerful and thus pluggable: you can implement your own that will transform the result into whatever type you like. jBPM comes with the following mappers out of the box:

  • org.jbpm.kie.services.impl.query.mapper.ProcessInstanceQueryMapper

    • registered with name - ProcessInstances

  • org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithVarsQueryMapper

    • registered with name - ProcessInstancesWithVariables

  • org.jbpm.kie.services.impl.query.mapper.ProcessInstanceWithCustomVarsQueryMapper

    • registered with name - ProcessInstancesWithCustomVariables

  • org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceQueryMapper

    • registered with name - UserTasks

  • org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithVarsQueryMapper

    • registered with name - UserTasksWithVariables

  • org.jbpm.kie.services.impl.query.mapper.UserTaskInstanceWithCustomVarsQueryMapper

    • registered with name - UserTasksWithCustomVariables

  • org.jbpm.kie.services.impl.query.mapper.TaskSummaryQueryMapper

    • registered with name - TaskSummaries

  • org.jbpm.kie.services.impl.query.mapper.RawListQueryMapper

    • registered with name - RawList

Each QueryResultMapper is registered under a given name to allow a simple look-up by name instead of referencing its class name - especially important when using the EJB remote flavor of the services, where we want to reduce the number of dependencies and avoid relying on implementation classes on the client side. So to be able to reference a QueryResultMapper by name, the NamedQueryMapper should be used, which is part of jbpm-services-api. It acts as a (lazy) delegate, looking up the actual mapper when the query is actually performed.

queryService.query("my query def", new NamedQueryMapper<Collection<ProcessInstanceDesc>>("ProcessInstances"), new QueryContext());

QueryParamBuilder

QueryParamBuilder provides a more advanced way of building filters for the data sets. By default, when using the query method of QueryService that accepts zero or more QueryParam instances (as we have seen in the examples above), all of these params are joined with the AND operator, meaning all of them must match. But that’s not always desirable, which is why QueryParamBuilder has been introduced; users can build their own builders which provide filters at the time the query is issued.

There is one QueryParamBuilder available out of the box; it is used to cover the default QueryParams that are based on so-called core functions. These core functions are SQL-based conditions and include the following:

  • IS_NULL

  • NOT_NULL

  • EQUALS_TO

  • NOT_EQUALS_TO

  • LIKE_TO

  • GREATER_THAN

  • GREATER_OR_EQUALS_TO

  • LOWER_THAN

  • LOWER_OR_EQUALS_TO

  • BETWEEN

  • IN

  • NOT_IN

QueryParamBuilder is a simple interface that is invoked (before the query is performed) for as long as its build method returns a non-null value. So you can build up complex filter options that could not simply be expressed by a list of QueryParams. Here is a basic implementation of QueryParamBuilder to give you a jump start on implementing your own - note that it relies on the Dashbuilder DataSet API.

public class TestQueryParamBuilder implements QueryParamBuilder<ColumnFilter> {

    private Map<String, Object> parameters;
    private boolean built = false;
    public TestQueryParamBuilder(Map<String, Object> parameters) {
        this.parameters = parameters;
    }

    @Override
    public ColumnFilter build() {
        // return null if it was already invoked
        if (built) {
            return null;
        }

        String columnName = "processInstanceId";

        ColumnFilter filter = FilterFactory.OR(
                FilterFactory.greaterOrEqualsTo((Long)parameters.get("min")),
                FilterFactory.lowerOrEqualsTo((Long)parameters.get("max")));
        filter.setColumnId(columnName);

        built = true;
        return filter;
    }

}

Once you have the query param builder implemented, you simply use its instance when performing a query via QueryService:

queryService.query("my query def", ProcessInstanceQueryMapper.get(), new QueryContext(), paramBuilder);

Typical usage scenario

The first thing a user needs to do is define the data set - the view of the data you want to work with - the so-called QueryDefinition in the services API.

SqlQueryDefinition query = new SqlQueryDefinition("getAllProcessInstances", "java:jboss/datasources/ExampleDS");
query.setExpression("select * from processinstancelog");

This is the simplest possible query definition:

  • the constructor takes

    • a unique name that identifies it at runtime

    • the JNDI name of the data source to use when performing queries on this definition - in other words, the source of data

  • the expression - the most important part - is the SQL statement that builds up the view to be filtered when performing queries

Once we have the SQL query definition, we can register it so it can be used later for actual queries.

queryService.registerQuery(query);

From now on, this query definition can be used to perform actual queries (or data look-ups, to use data set terminology). The following is the basic one that collects data as is, without any filtering:

Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext());

The above query was very simple and used the QueryContext defaults for paging and sorting. So let’s take a look at one that changes those defaults:

QueryContext ctx = new QueryContext(0, 100, "start_date", true);
         
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), ctx);

Now let’s take a look at how to do data filtering:

// single filter param
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext(), QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"));
 
// multiple filter params (AND)
Collection<ProcessInstanceDesc> instances = queryService.query("getAllProcessInstances", ProcessInstanceQueryMapper.get(), new QueryContext(),
 QueryParam.likeTo(COLUMN_PROCESSID, true, "org.jbpm%"),
 QueryParam.in(COLUMN_STATUS, 1, 3));

With that, the end user is put in the driver’s seat to define what data should be fetched and how, without being limited by the JPA provider or anything else. Moreover, this promotes the use of queries tailored to your environment, as in most cases there will be a single database used, so specific features of that database can be leveraged to increase performance.

Further examples can be found here.

7.5.8. ProcessInstanceMigrationService

The ProcessInstanceMigrationService is a utility for migrating given process instances from one deployment to another. Process or task variables are not affected by the migration. The ProcessInstanceMigrationService enables you to change the process definition used by the jBPM engine.

For process instance migration, the recommended approach is to let active process instances finish and to start new process instances in the new deployment. If this approach is not suitable for your needs, consider the following before starting a process instance migration:

  • Backward compatibility

  • Data change

  • Need for node mapping

You should create backward compatible processes whenever possible, for example by extending process definitions rather than changing them. Removing specific nodes from the process definition, for instance, breaks compatibility. In such cases, you must provide a node mapping in case an active process instance is currently in a node that has been removed.

A node map contains source node IDs from the old process definition mapped to target node IDs in the new process definition. You can map nodes of the same type only, such as a user task to a user task.
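
For illustration, a node mapping passed to the migrate operation (shown in the interface below) could look like this; migrationService, the deployment ids, the process id and the node ids are all hypothetical:

Map<String, String> nodeMapping = new HashMap<String, String>();
// map the removed user task node to its replacement in the new definition (hypothetical node ids)
nodeMapping.put("_oldApproveTask", "_newApproveTask");

MigrationReport report = migrationService.migrate(
        "org.jbpm:HR:1.0",   // source deployment
        processInstanceId,   // active process instance to migrate
        "org.jbpm:HR:2.0",   // target deployment
        "hiring",            // target process id in the new deployment
        nodeMapping);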

The ProcessInstanceMigrationService interface offers several flavors of the migration operation:

public interface ProcessInstanceMigrationService {
 /**
 * Migrates given process instance that belongs to source deployment, into target process id that belongs to target deployment.
 * Following rules are enforced:
 * <ul>
 * <li>source deployment id must be there</li>
 * <li>process instance id must point to existing and active process instance</li>
 * <li>target deployment must exist</li>
 * <li>target process id must exist in target deployment</li>
 * </ul>
 * Migration returns migration report regardless of migration being successful or not that needs to be examined for migration outcome.
 * @param sourceDeploymentId deployment that process instance to be migrated belongs to
 * @param processInstanceId id of the process instance to be migrated
 * @param targetDeploymentId id of deployment that target process belongs to
 * @param targetProcessId id of the process process instance should be migrated to
 * @return returns complete migration report
 */
 MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId);
 /**
 * Migrates given process instance (with node mapping) that belongs to source deployment, into target process id that belongs to target deployment.
 * Following rules are enforced:
 * <ul>
 * <li>source deployment id must be there</li>
 * <li>process instance id must point to existing and active process instance</li>
 * <li>target deployment must exist</li>
 * <li>target process id must exist in target deployment</li>
 * </ul>
 * Migration returns migration report regardless of migration being successful or not that needs to be examined for migration outcome.
 * @param sourceDeploymentId deployment that process instance to be migrated belongs to
 * @param processInstanceId id of the process instance to be migrated
 * @param targetDeploymentId id of deployment that target process belongs to
 * @param targetProcessId id of the process process instance should be migrated to
 * @param nodeMapping node mapping - source and target unique ids of nodes to be mapped - from process instance active nodes to new process nodes
 * @return returns complete migration report
 */
 MigrationReport migrate(String sourceDeploymentId, Long processInstanceId, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping);
 /**
 * Migrates given process instances that belong to source deployment, into target process id that belongs to target deployment.
 * Following rules are enforced:
 * <ul>
 * <li>source deployment id must be there</li>
 * <li>process instance id must point to existing and active process instance</li>
 * <li>target deployment must exist</li>
 * <li>target process id must exist in target deployment</li>
 * </ul>
 * Migration returns list of migration report - one per process instance, regardless of migration being successful or not that needs to be examined for migration outcome.
 * @param sourceDeploymentId deployment that process instance to be migrated belongs to
 * @param processInstanceIds list of process instance id to be migrated
 * @param targetDeploymentId id of deployment that target process belongs to
 * @param targetProcessId id of the process process instance should be migrated to
 * @return returns complete migration report
 */
 List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId);
 /**
 * Migrates given process instances (with node mapping) that belong to source deployment, into target process id that belongs to target deployment.
 * Following rules are enforced:
 * <ul>
 * <li>source deployment id must be there</li>
 * <li>process instance id must point to existing and active process instance</li>
 * <li>target deployment must exist</li>
 * <li>target process id must exist in target deployment</li>
 * </ul>
 * Migration returns list of migration report - one per process instance, regardless of migration being successful or not that needs to be examined for migration outcome.
 * @param sourceDeploymentId deployment that process instance to be migrated belongs to
 * @param processInstanceIds list of process instance id to be migrated
 * @param targetDeploymentId id of deployment that target process belongs to
 * @param targetProcessId id of the process process instance should be migrated to
 * @param nodeMapping node mapping - source and target unique ids of nodes to be mapped - from process instance active nodes to new process nodes
 * @return returns list of migration reports one per each process instance
 */
 List<MigrationReport> migrate(String sourceDeploymentId, List<Long> processInstanceIds, String targetDeploymentId, String targetProcessId, Map<String, String> nodeMapping);
}

To migrate process instances on KIE Server, use the following client operations. They correspond to the operations described in the previous code sample.

public interface ProcessAdminServicesClient {

    MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId);

    MigrationReportInstance migrateProcessInstance(String containerId, Long processInstanceId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping);

    List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId);

    List<MigrationReportInstance> migrateProcessInstances(String containerId, List<Long> processInstancesId, String targetContainerId, String targetProcessId, Map<String, String> nodeMapping);
}

You can migrate a single process instance, or multiple process instances at once. If you migrate multiple process instances, each instance will be migrated in a separate transaction to ensure that the migrations do not affect each other.

After migration is done, the migrate method returns a MigrationReport object that contains the following information:

  • Start and end dates of the migration.

  • Migration outcome (success or failure).

  • Log entries of INFO, WARN, or ERROR type. An ERROR message terminates the migration.

The following is an example process instance migration:

Example Process Instance Migration
import org.kie.server.api.model.admin.MigrationReportInstance;
import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class ProcessInstanceMigrationTest {

    private static final String SOURCE_CONTAINER = "com.redhat:MigrateMe:1.0";
    private static final String SOURCE_PROCESS_ID = "MigrateMe.MigrateMev1";
    private static final String TARGET_CONTAINER = "com.redhat:MigrateMe:2";
    private static final String TARGET_PROCESS_ID = "MigrateMe.MigrateMeV2";

    public static void main(String[] args) {

        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration("http://HOST:PORT/kie-server/services/rest/server", "USERNAME", "PASSWORD");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        long sourcePid = client.getProcessClient().startProcess(SOURCE_CONTAINER, SOURCE_PROCESS_ID);

        // Use the 'report' object to return migration results.
        MigrationReportInstance report = client.getAdminClient().migrateProcessInstance(SOURCE_CONTAINER, sourcePid, TARGET_CONTAINER, TARGET_PROCESS_ID);

        System.out.println("Was migration successful:" + report.isSuccessful());

        client.getProcessClient().abortProcessInstance(TARGET_CONTAINER, sourcePid);
    }
}
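
The same client can also migrate several process instances in one call, each in its own transaction, as a sketch (the instance ids are hypothetical):

List<Long> pids = Arrays.asList(sourcePid1, sourcePid2);
List<MigrationReportInstance> reports = client.getAdminClient()
        .migrateProcessInstances(SOURCE_CONTAINER, pids, TARGET_CONTAINER, TARGET_PROCESS_ID);
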
7.5.8.1. Known limitations

The following situations are not supported by process instance migration:

  • When a new or modified task requires inputs which are not available in the migrated v2 process instance.

  • Modifying the tasks prior to the active task where the changes have an impact on the further processing.

  • Removing a human task which is currently active (it can only be replaced - it must be mapped to another human task)

  • Adding a new task parallel to the single active task (all branches in an AND gateway are not activated - the process will get stuck)

  • Removing the active timer events (won’t be changed in DB)

  • Fixing or updating inputs and outputs in an active task (task data aren’t migrated)

  • Node mapping updates only the task node name and description! (other task fields won’t be mapped including the TaskName variable)

7.5.9. Working with deployments

The deployment service provides a convenient way to put business assets into an execution environment, but there are cases that require some additional management to make them available in the right context.

Activation and Deactivation of deployments

Imagine a situation where a number of processes of a given deployment are already running, and then a new version of these processes comes into the runtime environment. The administrator can then decide that new instances of a given process definition should use the new version only, while already active instances should continue with the previous version.

To help with that, the deployment service has been equipped with the following methods:

  • activate

    activates a given deployment so it is available for interaction, meaning it will show its process definitions and allow new process instances of that project’s processes to be started

  • deactivate

    deactivates a deployment, which disables the option to see or start new process instances of that project’s processes, but still allows working with already active process instances, e.g. signaling them, working with user tasks, etc.

This feature allows a smooth transition between project versions without the need for process instance migration, as shown in the sketch below.
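
A minimal sketch of such a transition, assuming two deployment units for versions 1.0 and 2.0 of the same project, could look like this:

KModuleDeploymentUnit unitV1 = new KModuleDeploymentUnit("org.jbpm", "HR", "1.0");
KModuleDeploymentUnit unitV2 = new KModuleDeploymentUnit("org.jbpm", "HR", "2.0");

deploymentService.deploy(unitV1);
// ... process instances are started against version 1.0 ...

// roll out version 2.0 and stop exposing 1.0 for new instances;
// already active 1.0 instances continue to run and can still be worked with
deploymentService.deploy(unitV2);
deploymentService.deactivate(unitV1.getIdentifier());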

Deployment synchronization

Prior to jBPM 6.2, jBPM services did not have a deployment store by default. When embedded in jbpm-console/kie-wb, they utilized the system.git VFS repository to preserve deployed units across server restarts. While that works fine, it comes with some drawbacks:

  • not available for custom systems that use the services

  • requires a complex setup in clusters - ZooKeeper and Helix

With version 6.2, jBPM services come with a deployment synchronizer that stores the available deployments in a database, including their deployment descriptors. At the same time, it constantly monitors that table to keep it in sync with other installations that might be using the same data source. This is especially important when running in a cluster or when Business Central runs next to a custom application and both should be able to operate on the same artifacts.

Synchronization must be configured explicitly when running the core services (it is automatically enabled for the EJB and CDI extensions). To configure synchronization, the following is needed:

TransactionalCommandService commandService = new TransactionalCommandService(emf);

DeploymentStore store = new DeploymentStore();
store.setCommandService(commandService);

DeploymentSynchronizer sync = new DeploymentSynchronizer();
sync.setDeploymentService(deploymentService);
sync.setDeploymentStore(store);

DeploymentSyncInvoker invoker = new DeploymentSyncInvoker(sync, 2L, 3L, TimeUnit.SECONDS);
invoker.start();
....
invoker.stop();

With this, deployments will be synchronized every 3 seconds, with an initial delay of 2 seconds.

Invoking latest version of project’s processes

In case there is a need to always work with the latest version of a project’s processes, the services allow interacting with various operations using a deployment id with the latest keyword. Let’s go over an example to better understand the feature.

The initially deployed unit is org.jbpm:HR:1.0, which has the first version of a hiring process. After several weeks, a new version is developed and deployed to the execution server - org.jbpm:HR:2.0 with version 2 of the hiring process.

To allow callers of the services to interact without worrying whether they work with the latest version, they can use the following deployment id:

org.jbpm:HR:latest

This will always find the latest available version of the project identified by:

  • groupId: org.jbpm

  • artifactId: HR

Version comparison is based on Maven version numbers and relies on the Maven-based algorithm to find the latest one.

This is only supported when the process identifier remains the same in all project versions.

Here is a complete example with deployment of multiple versions and interacting always with the latest:

KModuleDeploymentUnit deploymentUnitV1 = new KModuleDeploymentUnit("org.jbpm", "HR", "1.0");
deploymentService.deploy(deploymentUnitV1);

long processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask");
ProcessInstanceDesc piDesc = runtimeDataService.getProcessInstanceById(processInstanceId);

// we have started process with project's version 1
assertEquals(deploymentUnitV1.getIdentifier(), piDesc.getDeploymentId());

// next we deploy version 2
KModuleDeploymentUnit deploymentUnitV2 = new KModuleDeploymentUnit("org.jbpm", "HR", "2.0");
deploymentService.deploy(deploymentUnitV2);

processInstanceId = processService.startProcess("org.jbpm:HR:LATEST", "customtask");
piDesc = runtimeDataService.getProcessInstanceById(processInstanceId);

// this time we have started process with project's version 2
assertEquals(deploymentUnitV2.getIdentifier(), piDesc.getDeploymentId());

As illustrated, this provides a very powerful feature when interacting with a frequently changing environment, as it allows you to always be up to date when it comes to the process definitions used.

This feature is also available in the REST interface, so whenever sending a request with a deployment id, it’s enough to replace the concrete version with the LATEST keyword to make use of it.

7.6. Configuration

There are several control parameters available to alter the default behavior of the jBPM engine. This allows fine-tuning the execution for the needs of the environment and the actual requirements. All of these parameters are set as JVM system properties, usually with -D when starting a program such as an application server.
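
As a minimal sketch, these properties can also be set programmatically before the jBPM engine or services are bootstrapped (property names and values are taken from the table below):

// must be set before the engine/services are bootstrapped
System.setProperty("jbpm.overdue.timer.delay", "5000");
System.setProperty("org.kie.executor.disabled", "false");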

Table 6. Control parameters (each entry lists the parameter name, its possible values and its default value in parentheses, followed by the description)

  • jbpm.ut.jndi.lookup (String, no default)

    Alternative JNDI name to be used when there is no access to the default one (java:comp/UserTransaction). Note: must be valid for the given runtime environment; do not use if there is no access to the default user transaction JNDI name.

  • jbpm.enable.multi.con (true|false, default: false)

    Enables support for multiple incoming/outgoing sequence flows on activities.

  • jbpm.business.calendar.properties (String, default: /jbpm.business.calendar.properties)

    Allows providing an alternative classpath location of the business calendar configuration file.

  • jbpm.overdue.timer.delay (Long, default: 2000)

    Specifies the delay for overdue timers to allow proper initialization, in milliseconds.

  • jbpm.process.name.comparator (String, no default)

    Allows providing an alternative comparator class to empower the "start process by name" feature; if not set, NumberVersionComparator is used.

  • jbpm.loop.level.disabled (true|false, default: true)

    Allows enabling or disabling loop iteration tracking, to allow advanced loop support when using XOR gateways.

  • org.kie.mail.session (String, default: mail/jbpmMailSession)

    Allows providing an alternative JNDI name for the mail session used by Task Deadlines.

  • jbpm.usergroup.callback.properties (String, default: /jbpm.usergroup.callback.properties)

    Allows providing an alternative classpath location for the user group callback implementation (LDAP, DB).

  • jbpm.user.group.mapping (String, default: ${jboss.server.config.dir}/roles.properties)

    Allows providing an alternative location of roles.properties for JBossUserGroupCallbackImpl.

  • jbpm.user.info.properties (String, default: /jbpm.user.info.properties)

    Allows providing an alternative classpath location of the user info configuration (used by LDAPUserInfoImpl).

  • org.jbpm.ht.user.separator (String, default: ,)

    Allows providing an alternative separator of actors and groups for user tasks; the default is comma (,).

  • org.quartz.properties (String, no default)

    Allows providing the location of the Quartz configuration file to activate the Quartz-based timer service.

  • jbpm.data.dir (String, default: ${jboss.server.data.dir} if available, otherwise ${java.io.tmpdir})

    Allows providing the location where data files produced by jBPM should be stored.

  • org.kie.executor.pool.size (Integer, default: 1)

    Allows providing the thread pool size for the jBPM executor.

  • org.kie.executor.retry.count (Integer, default: 3)

    Allows providing the number of retries attempted in case of an error by the jBPM executor.

  • org.kie.executor.interval (Integer, default: 0)

    Allows providing the frequency used to check for pending jobs by the jBPM executor, in seconds.

  • org.kie.executor.disabled (true|false, default: true)

    Enables or disables the jBPM executor.

  • org.kie.store.services.class (String, default: org.drools.persistence.jpa.KnowledgeStoreServiceImpl)

    Fully qualified name of the class implementing KieStoreServices that is responsible for bootstrapping KieSession instances.

8. Processes

8.1. What is BPMN 2.0

"The primary goal of BPMN is to provide a notation that is readily understandable by all business users, from the business analysts that create the initial drafts of the processes, to the technical developers responsible for implementing the technology that will perform those processes, and finally, to the business people who will manage and monitor those processes."

The Business Process Model and Notation (BPMN) 2.0 specification is an OMG specification that not only defines a standard on how to graphically represent a business process (like BPMN 1.x), but now also includes execution semantics for the elements defined, and an XML format on how to store (and share) process definitions.

jBPM6 allows you to execute processes defined using the BPMN 2.0 XML format. That means that you can use all the different jBPM6 tooling to model, execute, manage and monitor your business processes using the BPMN 2.0 format for specifying your executable business processes. Actually, the full BPMN 2.0 specification also includes details on how to represent things like choreographies and collaboration. The jBPM project however focuses on that part of the specification that can be used to specify executable processes.

Executable processes in BPMN consist of different types of nodes connected to each other using sequence flows. The BPMN 2.0 specification defines three main types of nodes:

  • Events: They are used to model the occurrence of a particular event. This could be a start event (that is used to indicate the start of the process), end events (that define the end of the process, or of that subflow) and intermediate events (that indicate events that might occur during the execution of the process).

  • Activities: These define the different actions that need to be performed during the execution of the process. Different types of tasks exist, depending on the type of activity you are trying to model (e.g. human task, service task, etc.) and activities could also be nested (using different types of sub-processes).

  • Gateways: Can be used to define multiple paths in the process. Depending on the type of gateway, these might indicate parallel execution, choice, etc.

jBPM6 does not implement all elements and attributes as defined in the BPMN 2.0 specification. We do however support a significant subset, including the most common node types that can be used inside executable processes. This includes (almost) all elements and attributes as defined in the "Common Executable" subclass of the BPMN 2.0 specification, extended with some additional elements and attributes we believe are valuable in that context as well. The full set of elements and attributes that are supported can be found below, but it includes elements like:

  • Flow objects

    • Events

      • Start Event (None, Conditional, Signal, Message, Timer)

      • End Event (None, Terminate, Error, Escalation, Signal, Message, Compensation)

      • Intermediate Catch Event (Signal, Timer, Conditional, Message)

      • Intermediate Throw Event (None, Signal, Escalation, Message, Compensation)

      • Non-interrupting Boundary Event (Escalation, Signal, Timer, Conditional, Message)

      • Interrupting Boundary Event (Escalation, Error, Signal, Timer, Conditional, Message, Compensation)

    • Activities

      • Script Task

      • Task

      • Service Task

      • User Task

      • Business Rule Task

      • Manual Task

      • Send Task

      • Receive Task

      • Reusable Sub-Process (Call Activity)

      • Embedded Sub-Process

      • Event Sub-Process

      • Ad-Hoc Sub-Process

      • Data-Object

    • Gateways

      • Diverging

        • Exclusive

        • Inclusive

        • Parallel

        • Event-Based

      • Converging

        • Exclusive

        • Inclusive

        • Parallel

    • Lanes

  • Data

    • Java type language

    • Process properties

    • Embedded Sub-Process properties

    • Activity properties

  • Connecting objects

    • Sequence flow

For example, consider the following "Hello World" BPMN 2.0 process, which does nothing more that writing out a "Hello World" statement when the process is started.

HelloWorld

An executable version of this process expressed using BPMN 2.0 XML would look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<definitions id="Definition"
             targetNamespace="http://www.example.org/MinimalExample"
             typeLanguage="http://www.java.com/javaTypes"
             expressionLanguage="http://www.mvel.org/2.0"
             xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
             xs:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
             xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
             xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
             xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
             xmlns:tns="http://www.jboss.org/drools">

  <process processType="Private" isExecutable="true" id="com.sample.HelloWorld" name="Hello World" >

    <!-- nodes -->
    <startEvent id="_1" name="StartProcess" />
    <scriptTask id="_2" name="Hello" >
      <script>System.out.println("Hello World");</script>
    </scriptTask>
    <endEvent id="_3" name="EndProcess" >
        <terminateEventDefinition/>
    </endEvent>

    <!-- connections -->
    <sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
    <sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />

  </process>

  <bpmndi:BPMNDiagram>
    <bpmndi:BPMNPlane bpmnElement="com.sample.HelloWorld" >
      <bpmndi:BPMNShape bpmnElement="_1" >
        <dc:Bounds x="15" y="91" width="48" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape bpmnElement="_2" >
        <dc:Bounds x="95" y="88" width="83" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNShape bpmnElement="_3" >
        <dc:Bounds x="258" y="86" width="48" height="48" />
      </bpmndi:BPMNShape>
      <bpmndi:BPMNEdge bpmnElement="_1-_2" >
        <di:waypoint x="39" y="115" />
        <di:waypoint x="75" y="46" />
        <di:waypoint x="136" y="112" />
      </bpmndi:BPMNEdge>
      <bpmndi:BPMNEdge bpmnElement="_2-_3" >
        <di:waypoint x="136" y="112" />
        <di:waypoint x="240" y="240" />
        <di:waypoint x="282" y="110" />
      </bpmndi:BPMNEdge>
    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>

</definitions>

To create your own process using the BPMN 2.0 format, you can use one of the following options:

  • The jBPM Designer is an open-source web-based editor that supports the BPMN 2.0 format. We have embedded it into Business Central for BPMN 2.0 process visualization and editing. You can use the Designer (either standalone or integrated) to create and edit BPMN 2.0 processes, export them to the BPMN 2.0 format, or save them into the repository and import them so they can be executed.

  • A new BPMN2 Eclipse plugin is being created to support the full BPMN2 specification.

  • You can always manually create your BPMN 2.0 process files by writing the XML directly. You can validate the syntax of your processes against the BPMN 2.0 XSD, or use the validator in the Eclipse plugin to check both syntax and completeness of your model.

  • The Drools Eclipse Process editor has been deprecated in favor of the BPMN2 Modeler for process modeling. It can still be used for a limited number of supported elements but should be phased out as it is no longer being developed.

    Create a new Process file using the Drools Eclipse plugin wizard and in the last page of the wizard, make sure you select Drools 5.1 code compatibility. This will create a new process using the BPMN 2.0 XML format. Note however that this is not exactly a BPMN 2.0 editor, as it still uses different attribute names etc. It does however save the process using valid BPMN 2.0 syntax. Also note that the editor does not support all node types and attributes that are already supported in the jBPM engine.

The following code fragment shows you how to load a BPMN2 process into your KIE base …

private static KieBase createKnowledgeBase() throws Exception {
    // load the BPMN2 process definition from the classpath and build a KIE base
    KieHelper kieHelper = new KieHelper();
    KieBase kieBase = kieHelper
            .addResource(ResourceFactory.newClassPathResource("sample.bpmn2"))
            .build();

    return kieBase;
}

... and how to execute this process …

KieBase kbase = createKnowledgeBase();
KieSession ksession = kbase.newKieSession();
ksession.startProcess("com.sample.HelloWorld");

For more detail, check out the chapter on the API and the basics.

8.2. Business processes

A business process is a diagram that describes the order for a series of steps that must be executed and consists of predefined nodes and connections. Each node represents one step in the process while the connections specify how to transition from one node to another.

A typical business process consists of the following components:

  • The header section that comprises global elements such as the name of the process, imports, and variables

  • The nodes section that contains all the different nodes that are part of the process

  • The connections section that links these nodes to each other to create a flow chart

The following image shows a "self evaluation" business process that goes through the project manager and the HR manager.
Figure 10. Business process

jBPM contains the legacy process designer and the new process designer for creating business process diagrams. The new process designer has an improved layout and feature set and continues to be developed. Until all features of the legacy process designer are completely implemented in the new process designer, both designers are available in Business Central for you to use.

8.2.1. Creating a business process in Business Central

The process designer is the jBPM process modeler. The output of the modeler is a BPMN 2.0 process definition file. The definition is used as input for the jBPM engine, which creates a process instance based on the definition.

The procedures in this section provide a general overview of how to create a simple business process.

Prerequisites
  • You have created or imported a jBPM project.

  • You have created the required users. User privileges and settings are controlled by the roles assigned to a user and the groups that a user belongs to.

Procedure
  1. In Business Central, go to Menu → Design → Projects.

  2. Click the project name to open the project’s asset list.

  3. Click Add Asset → Business Process.

  4. In the Create new Business Process wizard, enter the following values:

    • Business Process: New business process name

    • Package: Package location for your new business process, for example com.myspace.myProject

  5. Click Ok to open the process designer.

  6. In the upper-right corner, click the Diagram properties icon and add your business process property information, such as process data and variables:

    1. Scroll down and expand Process Data.

    2. Click the plus icon next to Process Variables and define the process variables that you want to use in your business process.

  7. In the process designer canvas, use the left toolbar to drag and drop BPMN components to define your business process logic, connections, events, tasks, or other elements.

  8. After you add and define all components of the business process, click Save to save the completed business process.

8.2.1.1. Creating business process tasks

You can create the following types of tasks as part of your business process:

  • Business rule tasks: Used to make decisions through a Decision Model and Notation (DMN) model or rule flow group

  • Script tasks: Used to execute a piece of code written in Java, JavaScript, or MVEL

  • User tasks: Used to include human actions as input to the business process

As an example, this procedure uses a user task.

Procedure
  1. Click the start event to create an outgoing connection to a new task.

    Creating an outgoing connection from the start event to a user task
    Figure 11. Outgoing connection from the start event to a user task
  2. Convert the new task to one of the available task types, such as a user task.

    Converting in to a user task
    Figure 12. Convert into a User task
  3. For this example, click the user task and in the upper-right corner, click the Diagram properties icon.

  4. Add the user task property information, such as the following details:

    1. Expand Implementation/Execution and enter values for both the Task Name and Actor fields.

    2. Click the edit icon next to Assignments to open the Data I/O window.

    3. Create the input and output assignments for the user task.

  5. After you add and define all task information, click Save to save the updated business process.

8.2.1.2. Copying elements from one business process to another business process

You can copy individual elements from one business process to another business process in Business Central.

Procedure
  1. In the business process designer canvas, click and drag the cursor to select the elements that you want to copy.

  2. Click the Copy icon in the upper-right toolbar to copy your selection.

  3. Switch into the second business process where you want to add the copied elements.

  4. In the second business process, create any process variables that are used in the business process that you want to copy. The variable Name and Type parameters must be identical in order to preserve variable mapping.

  5. Click the Paste icon to paste your selection.

  6. Click Save to save the updated business process.

8.2.1.3. Making a copy of a business process

You can make a copy of a business process in Business Central and modify the copied process as needed.

Procedure
  1. In the business process designer, click Copy in the upper-right toolbar.

  2. In the Make a Copy window, enter a new name for the copied business process, select the target package, and optionally add a comment.

  3. Click Make a Copy.

  4. Modify the copied business process as needed and click Save to save the updated business process.

8.2.1.4. Resizing elements and using the zoom function to view business processes

You can resize individual elements in a business process and zoom in or out to modify the view of your business process.

Procedure
  1. In the business process designer, select the element and click the red dot in the lower-right corner of the element.

  2. Drag the red dot to resize the element.

    Resizing an element
    Figure 13. Resize an element
  3. To zoom in or out to view the entire diagram, click the plus or minus sign on the lower-right side of the canvas.

    Zooming to view the entire diagram
    Figure 14. Enlarge or shrink a business process

8.2.2. Deploying a business process in Business Central

After you design your business process in Business Central, you can build and deploy your project in Business Central to make the process available to KIE Server.

Prerequisites
  • KIE Server is deployed and connected to Business Central.

Procedure
  1. In Business Central, go to Menu → Design → Projects.

  2. Click the project that you want to deploy.

  3. Click Deploy.

    You can also select the Build & Install option to build the project and publish the KJAR file to the configured Maven repository without deploying to a KIE Server. In a development environment, you can click Deploy to deploy the built KJAR file to a KIE Server without stopping any running instances (if applicable), or click Redeploy to deploy the built KJAR file and stop any running instances. The next time you deploy or redeploy the built KJAR, the previous deployment unit (KIE container) is automatically updated in the same target KIE Server. In a production environment, the Redeploy option is disabled and you can click Deploy only to deploy the built KJAR file to a new deployment unit (KIE container) on a KIE Server.

    To configure the KIE Server environment mode, set the org.kie.server.mode system property to org.kie.server.mode=development or org.kie.server.mode=production. To configure the deployment behavior for a corresponding project in Business Central, go to the project Settings → General Settings → Version and toggle the Development Mode option. By default, KIE Server and all new projects in Business Central are in development mode. You cannot deploy a project with Development Mode turned on or with a manually added SNAPSHOT version suffix to a KIE Server that is in production mode.

8.2.3. Executing a business process in Business Central

After you build and deploy the project that contains your business process, you can execute the defined functionality for the business process.

As an example, this procedure uses the Mortgage_Process sample project in Business Central. In this scenario, you input data into a mortgage application form acting as the mortgage broker. The MortgageApprovalProcess business process runs and determines whether or not the applicant has offered an acceptable down payment based on the decision rules defined in the project. The business process either ends the rule testing or requests that the applicant increase the down payment to proceed. If the application passes the business rule testing, the bank approver reviews the application and either approves or denies the loan.

Prerequisites
  • KIE Server is deployed and connected to Business Central.

  • You have imported the Mortgage_Process sample project in Business Central. (Go to Menu → Design → Projects, click the three dots in the upper-right corner of the screen, and select Try Samples → Mortgage_Process → Ok.)

  • You have built and deployed the Mortgage_Process sample project.

Procedure
  1. In Business Central, go to Menu → Manage → Process Definitions.

  2. Click anywhere in the MortgageApprovalProcess row to view the process details.

  3. Click the Diagram tab to view the business process diagram in the editor.

  4. Click New Process Instance to open the Application form and input the following values into the form fields:

    • Down Payment: 30000

    • Years of amortization: 10

    • Name: Ivo

    • Annual Income: 60000

    • SSN: 123456789

    • Age of property: 8

    • Address of property: Brno

    • Locale: Rural

    • Property Sale Price: 50000

  5. Click Submit to start a new process instance. After starting the process instance, the Instance Details view opens.

  6. Click the Diagram tab to view the process flow within the process diagram. The state of the process is highlighted as it moves through each task.

  7. Click Menu → Manage → Tasks.

    For this example, the user or users working on the corresponding tasks are members of the following groups:

    • approver: For the Qualify task

    • broker: For the Correct Data and Increase Down Payment tasks

    • manager: For the Final Approval task

  8. As the approver, review the Qualify task information, click Claim and then Start to start the task, and then select Is mortgage application in limit? and click Complete to complete the task flow.

  9. In the Tasks page, click anywhere in the Final Approval row to open the Final Approval task.

  10. Click Claim to claim responsibility for the task, and click Complete to finalize the loan approval process.

The Save and Release buttons are only used to either pause the approval process and save the instance if you are waiting on a field value, or to release the task for another user to modify.

8.2.4. Process definitions and process instances in Business Central

A process definition is a Business Process Model and Notation (BPMN) 2.0 file that serves as a container for a process and its BPMN diagram. The process definition shows all of the available information about the business process, such as any associated subprocesses or the number of users and groups that are participating in the selected definition.

A process definition also defines the import entry for imported processes that the process definition uses, and the relationship entries.

BPMN2 source of a process definition
<definitions id="Definition"
               targetNamespace="http://www.jboss.org/drools"
               typeLanguage="http://www.java.com/javaTypes"
               expressionLanguage="http://www.mvel.org/2.0"
               xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"Rule Task
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
               xmlns:g="http://www.jboss.org/drools/flow/gpd"
               xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
               xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
               xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
               xmlns:tns="http://www.jboss.org/drools">

    <process>
      PROCESS
    </process>

    <bpmndi:BPMNDiagram>
     BPMN DIAGRAM DEFINITION
    </bpmndi:BPMNDiagram>

    </definitions>

After you have created, configured, and deployed your project that includes your business processes, you can view the list of all the process definitions in Business Central under Menu → Manage → Process Definitions. You can refresh the list of deployed process definitions at any time by clicking the refresh button in the upper-right corner.

The process definition list shows all the available process definitions that are deployed into the platform. Click any of the process definitions listed to show the corresponding process definition details. This displays information about the process definition, such as if there is a sub-process associated with it, or how many users and groups exist in the process definition. The Diagram tab in the process definition details page contains the BPMN2-based diagram of the process definition.

Within each selected process definition, you can start a new process instance for the process definition by clicking the New Process Instance button in the upper-right corner. Process instances that you start from the available process definitions are listed in Menu → Manage → Process Instances.

You can also define the default pagination option for all users under the Manage drop-down menu (Process Definition, Process Instances, Tasks, Execution Errors, and Jobs) and in Menu → Track → Task Inbox.

8.2.4.1. Process definitions in XML

You can create processes directly in XML format using the BPMN 2.0 specifications. The syntax of these XML processes is defined using the BPMN 2.0 XML Schema Definition.

A process XML file consists of the following core sections:

  • process: This is the top part of the process XML that contains the definition of the different nodes and their properties. The process XML file consists of exactly one <process> element. This element contains parameters related to the process (its type, name, ID, and package name), and consists of three subsections: a header section where process-level information such as variables, globals, imports, and lanes are defined, a nodes section that defines each of the nodes in the process, and a connections section that contains the connections between all the nodes in the process.

  • BPMNDiagram: This is the lower part of the process XML file that contains all graphical information, such as the location of the nodes. The nodes section contains a specific element for each node and defines the various parameters and any sub-elements for that node type.

The following process XML file fragment shows a simple process that contains a sequence of a start event, a script task that prints "Hello World" to the console, and an end event:

<?xml version="1.0" encoding="UTF-8"?>

<definitions
  id="Definition"
  targetNamespace="http://www.jboss.org/drools"
  typeLanguage="http://www.java.com/javaTypes"
  expressionLanguage="http://www.mvel.org/2.0"
  xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.omg.org/spec/BPMN/20100524/MODEL BPMN20.xsd"
  xmlns:g="http://www.jboss.org/drools/flow/gpd"
  xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI"
  xmlns:dc="http://www.omg.org/spec/DD/20100524/DC"
  xmlns:di="http://www.omg.org/spec/DD/20100524/DI"
  xmlns:tns="http://www.jboss.org/drools">

  <process processType="Private" isExecutable="true" id="com.sample.hello" name="Hello Process">
    <!-- nodes -->
    <startEvent id="_1" name="Start" />

    <scriptTask id="_2" name="Hello">
      <script>System.out.println("Hello World");</script>
    </scriptTask>

    <endEvent id="_3" name="End" >
      <terminateEventDefinition/>
    </endEvent>

    <!-- connections -->

    <sequenceFlow id="_1-_2" sourceRef="_1" targetRef="_2" />
    <sequenceFlow id="_2-_3" sourceRef="_2" targetRef="_3" />
  </process>

  <bpmndi:BPMNDiagram>
    <bpmndi:BPMNPlane bpmnElement="com.sample.hello" >

      <bpmndi:BPMNShape bpmnElement="_1" >
        <dc:Bounds x="16" y="16" width="48" height="48" />
      </bpmndi:BPMNShape>

      <bpmndi:BPMNShape bpmnElement="_2" >
        <dc:Bounds x="96" y="16" width="80" height="48" />
      </bpmndi:BPMNShape>

      <bpmndi:BPMNShape bpmnElement="_3" >
        <dc:Bounds x="208" y="16" width="48" height="48" />
      </bpmndi:BPMNShape>

      <bpmndi:BPMNEdge bpmnElement="_1-_2" >
        <di:waypoint x="40" y="40" />
        <di:waypoint x="136" y="40" />
      </bpmndi:BPMNEdge>

      <bpmndi:BPMNEdge bpmnElement="_2-_3" >
        <di:waypoint x="136" y="40" />
        <di:waypoint x="232" y="40" />
      </bpmndi:BPMNEdge>

    </bpmndi:BPMNPlane>
  </bpmndi:BPMNDiagram>

</definitions>

8.2.5. Invoking a Decision Model and Notation (DMN) service in a business process

You can use Decision Model and Notation (DMN) to model a decision service graphically in a decision requirements diagram (DRD) in Business Central and then invoke that DMN service as part of a business process in Business Central. Business processes interact with DMN services by identifying the DMN service and mapping business data between DMN inputs and the business process properties.

As an illustration, this procedure uses an example TrainStation project that defines train routing logic. This example project contains the following data object and DMN components designed in Business Central for the routing decision logic:

Example Train object
public class Train {

     private String departureStation;

     private String destinationStation;

     private BigDecimal railNumber;

     // Getters and setters
}
dmn execution graph
Figure 15. Example Compute Rail DMN model
dmn execution expression
Figure 16. Example Rail DMN decision table
dmn execution data type
Figure 17. Example tTrain DMN data type

For more information about creating DMN models in Business Central, see Decision Model and Notation (DMN) in the Drools documentation.

Prerequisites
  • All required data objects and DMN model components are defined in the project.

Procedure
  1. In Business Central, go to Menu → Design → Projects and click the project name.

  2. Select or create the business process asset in which you want to invoke the DMN service.

  3. In the process designer, use the left toolbar to drag and drop BPMN components as usual to define your overall business process logic, connections, events, tasks, or other elements.

  4. To incorporate a DMN service in the business process, add a Business Rule task from the left toolbar or from the start-node options and insert the task in the relevant location in the process flow.

    For this example, the following Accept Train business process incorporates the DMN service in the Route To Rail node:

    dmn execution business process
    Figure 18. Example Accept Train business process with a DMN service
  5. Select the business rule task node that you want to use for the DMN service, click Diagram properties in the upper-right corner of the process designer, and under Implementation/Execution, define the following fields:

    • Rule Language: Select DMN.

    • Namespace: Enter the unique namespace from the DMN model file. Example: https://www.drools.org/kie-dmn

    • Decision Name: Enter the name of the DMN decision node that you want to invoke in the selected process node. Example: Rail

    • DMN Model Name: Enter the DMN model name. Example: Compute Rail

  6. Under Data Assignments → Assignments, click the Edit icon and add the DMN input and output data to define the mapping between the DMN service and the process data.

    For the Route To Rail DMN service node in this example, you add an input assignment for Train that corresponds to the input node in the DMN model, and add an output assignment for Rail that corresponds to the decision node in the DMN model. The Data Type must match the type that you set for that node in the DMN model, and the Source and Target definition is the relevant variable or field for the specified object.

    dmn execution io mapping
    Figure 19. Example input and output mapping for the Route To Rail DMN service node
  7. Click Save to save the data input and output data.

  8. Define the remainder of your business process according to how you want the completed DMN service to be handled.

    For this example, the Diagram properties → Implementation/Execution → On Exit Action value is set to the following code to store the rail number after the Route To Rail DMN service is complete:

    Example code for On Exit Action
    train.setRailNumber(rail);

    If the rail number is not computed, the process reaches a No Appropriate Rail end error node that is defined with the following condition expression:

    dmn execution negative condition
    Figure 20. Example condition for No Appropriate Rail end error node

    If the rail number is computed, the process reaches an Accept Train script task that is defined with the following condition expression:

    dmn execution positive condition
    Figure 21. Example condition for Accept Train script task node

    The Accept Train script task also uses the following script in Diagram properties → Implementation/Execution → Script to print a message about the train route and current rail:

    com.myspace.trainstation.Train t =
        (com.myspace.trainstation.Train) kcontext.getVariable("train");
    System.out.println("Train from: " + t.getDepartureStation() +
                       ", to: " + t.getDestinationStation() +
                       ",  is on rail: " + t.getRailNumber());
  9. After you define your business process with the incorporated DMN service, save your process in the process designer, deploy the project, and run the corresponding process definition to invoke the DMN service.

    For this example, when you deploy the TrainStation project and run the corresponding process definition, you open the process instance form for the Accept Train process definition and set the departure station and destination station fields to test the execution:

    dmn execution process instance form
    Figure 22. Example process instance form for the Accept Train process definition

    After the process is executed, a message appears in the server log with the train route that you specified:

    Example server log output for the Accept Train process
    Train from: Zagreb, to: Belgrade,  is on rail: 1

8.3. Activities

8.3.1. Script task

ScriptTask
Figure 23. Script task

Represents a script that should be executed in this process. A Script Task should have one incoming connection and one outgoing connection. The associated action specifies what should be executed, the dialect used for coding the action (i.e., Java, JavaScript or MVEL), and the actual action code. This code can access any variables and globals. There is also a predefined variable kcontext that references the ProcessContext object (which can, for example, be used to access the current ProcessInstance or NodeInstance, and to get and set variables, or get access to the ksession using kcontext.getKieRuntime()). When a Script Task is reached in the process, it will execute the action and then continue with the next node. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Action: The action script associated with this action node.
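
For example, a script task using the Java dialect could use the predefined kcontext variable described above to read and update process data. This is a minimal sketch; the variable names requester and greeting and the signal name are illustrative only:

// read a process variable through the predefined kcontext (ProcessContext)
String requester = (String) kcontext.getVariable("requester");

// update another process variable
kcontext.setVariable("greeting", "Hello " + requester);

// access the runtime, for example to signal an event from inside the script
kcontext.getKieRuntime().signalEvent("scriptDone", null,
        kcontext.getProcessInstance().getId());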

Note that you can write any valid Java code inside a script node. This basically allows you to do anything inside such a script node. There are some caveats however:

  • When trying to create a higher-level business process, that should also be understood by business users, it is probably wise to avoid low-level implementation details inside the process, including inside these script tasks. A Script Task could still be used to quickly manipulate variables etc. but other concepts like a Service Task could be used to model more complex behaviour in a higher-level manner.

  • Scripts should execute immediately: they use the jBPM engine thread to execute the script. Scripts that could take some time to execute should probably be modeled as an asynchronous Service Task.

  • You should try to avoid contacting external services through a script node. Not only does this usually violate the first two caveats, it is also interacting with external services without the knowledge of the jBPM engine, which can be problematic, especially when using persistence and transactions. In general, it is probably wiser to model communication with an external service using a service task.

  • Scripts should not throw exceptions. Runtime exceptions should be caught and for example managed inside the script or transformed into signals or errors that can then be handled inside the process.

8.3.2. Service task

ServiceTask
Figure 24. Service task

Represents an (abstract) unit of work that should be executed in this process. All work that is executed outside the jBPM engine should be represented (in a declarative way) using a Service Task. Different types of services are predefined, e.g., sending an email, logging a message, etc. Users can define domain-specific services or work items, using a unique name and by defining the parameters (input) and results (output) that are associated with this type of work. Check the chapter on domain-specific processes for a detailed explanation and illustrative examples of how to define and use work items in your processes. When a Service Task is reached in the process, the associated work is executed. A Service Task should have one incoming connection and one outgoing connection.

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Parameter mapping: Allows copying the value of process variables to parameters of the work item. Upon creation of the work item, the values will be copied.

  • Result mapping: Allows copying the value of result parameters of the work item to a process variable. Each type of work can define result parameters that will (potentially) be returned after the work item has been completed. A result mapping can be used to copy the value of the given result parameter to the given variable in this process. For example, the "FileFinder" work item returns a list of files that match the given search criteria within the result parameter Files. This list of files can then be bound to a process variable for use within the process. Upon completion of the work item, the values will be copied.

  • On-entry and on-exit actions: Actions that are executed upon entry or exit of this node, respectively.

  • Additional parameters: Each type of work item can define additional parameters that are relevant for that type of work. For example, the "Email" work item defines additional parameters such as From, To, Subject and Body. The user can either provide values for these parameters directly, or define a parameter mapping that will copy the value of the given variable in this process to the given parameter; if both are specified, the mapping will have precedence. Parameters of type String can use #{expression} to embed a value in the string. The value will be retrieved when creating the work item, and the substitution expression will be replaced by the result of calling toString() on the variable. The expression could simply be the name of a variable (in which case it resolves to the value of the variable), but more advanced MVEL expressions are possible as well, e.g., #{person.name.firstname}.
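
As a sketch of how such a domain-specific service can be wired into the jBPM engine, the following hypothetical handler implements the WorkItemHandler interface for an "Email" work item (the class, work item name and parameter names are illustrative; see the chapter on domain-specific processes for the full contract):

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class EmailWorkItemHandler implements WorkItemHandler {

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // read the input parameters that were mapped into this work item
        String to = (String) workItem.getParameter("To");
        String subject = (String) workItem.getParameter("Subject");
        System.out.println("Sending email to " + to + " about '" + subject + "'");
        // notify the engine that the work item completed (no result parameters here)
        manager.completeWorkItem(workItem.getId(), null);
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // called when the work item is aborted, for example because the node was canceled
    }
}

The handler is then registered on the session before the process is started, for example: ksession.getWorkItemManager().registerWorkItemHandler("Email", new EmailWorkItemHandler());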

8.3.3. User task

UserTask
Figure 25. User task

Processes can also involve tasks that need to be executed by human actors. A User Task represents an atomic task to be executed by a human actor. It should have one incoming connection and one outgoing connection. User Tasks can be used in combination with Swimlanes to assign multiple human tasks to similar actors. Refer to the chapter on human tasks for more details. A User Task is actually nothing more than a specific type of service node (of type "Human Task"). A User Task contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • TaskName: The name of the human task.

  • Priority: An integer indicating the priority of the human task.

  • Comment: A comment associated with the human task.

  • ActorId: The actor id that is responsible for executing the human task. A list of actor id’s can be specified using a comma (',') as separator.

  • GroupId: The group id that is responsible for executing the human task. A list of group id’s can be specified using a comma (',') as separator.

  • Skippable: Specifies whether the human task can be skipped, i.e., whether the actor may decide not to execute the task.

  • Content: The data associated with this task.

  • Swimlane: The swimlane this human task node is part of. Swimlanes make it easy to assign multiple human tasks to the same actor. See the human tasks chapter for more detail on how to use swimlanes.

  • On entry and on exit actions: Action scripts that are executed upon entry and exit of this node, respectively.

  • Parameter mapping: Allows copying the value of process variables to parameters of the human task. Upon creation of the human tasks, the values will be copied.

  • Result mapping: Allows copying the value of result parameters of the human task to a process variable. Upon completion of the human task, the values will be copied. A human task has a result variable "Result" that contains the data returned by the human actor. The variable "ActorId" contains the id of the actor that actually executed the task.

A user task should define the type of task that needs to be executed (using properties like TaskName, Comment, etc.) and who needs to perform it (using either actorId or groupId). Note that if there is data related to this specific process instance that the end user needs when performing the task, this data should be passed as the content of the task. The task for example does not have access to process variables. Check out the chapter on human tasks to get more detail on how to pass data between human tasks and the process instance.
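
As an illustration of how a human actor interacts with such a task from application code, the following sketch assumes a runtimeEngine obtained from a RuntimeManager (see the human tasks chapter); the user id "john" and the result value are illustrative:

import java.util.Collections;
import java.util.List;

import org.kie.api.task.TaskService;
import org.kie.api.task.model.TaskSummary;

// ...
TaskService taskService = runtimeEngine.getTaskService();

// list the tasks this actor is a potential owner of
List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
long taskId = tasks.get(0).getId();

// start the task and complete it, passing back result data
taskService.start(taskId, "john");
taskService.complete(taskId, "john",
        Collections.<String, Object>singletonMap("Result", "approved"));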

8.3.4. Reusable sub-process

ReusableSubProcess
Figure 26. Reusable sub-process - Call activity

Represents the invocation of another process from within this process. A sub-process node should have one incoming connection and one outgoing connection. When a Reusable Sub-Process node is reached in the process, the jBPM engine will start the process with the given id. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • ProcessId: The id of the process that should be executed.

  • Wait for completion (by default true): If this property is true, this sub-process node will only continue if the child process that was started has terminated its execution (completed or aborted); otherwise it will continue immediately after starting the subprocess (so it will not wait for its completion).

  • Independent (by default true): If this property is true, the child process is started as an independent process, which means that the child process will not be terminated if this parent process is completed (or this sub-process node is canceled for some other reason); otherwise the active sub-process will be canceled on termination of the parent process (or cancellation of the sub-process node). Note that you can set independent to "false" only when "Wait for completion" is set to true.

  • On-entry and on-exit actions: Actions that are executed upon entry or exit of this node, respectively.

  • Parameter in/out mapping: A sub-process node can also define in- and out-mappings for variables. The variables given in the "in" mapping will be used as parameters (with the associated parameter name) when starting the process. The variables of the child process that are defined for the "out" mappings will be copied to the variables of this process when the child process has been completed. Note that you can use "out" mappings only when "Wait for completion" is set to true.

8.3.5. Business rule task

BusinessRuleTask
Figure 27. Business rule task

A Business Rule Task represents a set of rules that need to be evaluated. The rules are evaluated when the node is reached. A Rule Task should have one incoming connection and one outgoing connection. Rules are defined in separate files using the Drools rule format. Rules can become part of a specific ruleflow group using the ruleflow-group attribute in the header of the rule.

When a Rule Task is reached in the process, the jBPM engine will start executing rules that are part of the corresponding ruleflow-group (if any). Execution will automatically continue to the next node if there are no more active rules in this ruleflow group. As a result, during the execution of a ruleflow group, new activations belonging to the currently active ruleflow group can be added to the Agenda due to changes made to the facts by the other rules. Note that the process will immediately continue with the next node if it encounters a ruleflow group where there are no active rules at that time.

If the ruleflow group was already active, the ruleflow group will remain active and execution will only continue after all active rules of the ruleflow group have been completed. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • RuleFlowGroup: The name of the ruleflow group that represents the set of rules of this RuleFlowGroup node.
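
Because the rules in the ruleflow group typically reason over facts in the Working Memory, the application usually inserts those facts before starting the process. A minimal sketch, where the Person fact type and the process id are illustrative:

// insert the facts that the rules in the ruleflow-group will match on
ksession.insert(new Person("john", 25));

// start the process; when the rule task is reached, the rules of its
// ruleflow-group are fired before execution continues to the next node
ksession.startProcess("com.sample.ruleprocess");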

8.3.6. Embedded sub-process

EmbeddedSubProcess
Figure 28. Embedded sub-process

A Sub-Process is a node that can contain other nodes so that it acts as a node container. This allows not only the embedding of a part of the process within such a sub-process node, but also the definition of additional variables that are accessible for all nodes inside this container. A sub-process should have one incoming connection and one outgoing connection. It should also contain one start node that defines where to start (inside the Sub-Process) when you reach the sub-process. It should also contain one or more end events. Note that, if you use a terminating event node inside a sub-process, you are terminating just that sub-process. A sub-process ends when there are no more active nodes inside the sub-process. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Variables: Additional variables can be defined to store data during the execution of this node.

8.3.7. Multi-instance sub-process

MultipleInstances
Figure 29. Multi-instance sub-process

A Multiple Instance sub-process is a special kind of sub-process that allows you to execute the contained process segment multiple times, once for each element in a collection. A multiple instance sub-process should have one incoming connection and one outgoing connection. It waits until the embedded process fragment is completed for each of the elements in the given collection before continuing. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • CollectionExpression: The name of a variable that represents the collection of elements that should be iterated over. The collection variable should be an array or of type java.util.Collection. If the collection expression evaluates to null or an empty collection, the multiple instances sub-process will be completed immediately and follow its outgoing connection.

  • VariableName: The name of the variable to contain the current element from the collection. This gives nodes within the composite node access to the selected element.

  • CollectionOutput: The name of a variable that represents the collection of elements that will gather all output of the multi-instance sub-process.

  • OutputVariableName: The name of the variable to contain the current output from the multi-instance activity.

  • CompletionCondition: An MVEL expression that is evaluated on each instance completion to check whether the multi-instance activity can already be completed. If it evaluates to true, all remaining instances within the multi-instance activity are canceled.
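
For example, the collection that the multi-instance sub-process iterates over is typically supplied as a process variable when the process is started. This is a minimal sketch; the variable name items (matching the CollectionExpression) and the process id are illustrative:

Map<String, Object> params = new HashMap<String, Object>();
// the collection referenced by the CollectionExpression property
params.put("items", Arrays.asList("item-1", "item-2", "item-3"));
ksession.startProcess("com.sample.multiinstance", params);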

8.4. Events

8.4.1. Start event

StartEvent
Figure 30. Start event

The start of the process. A process should have exactly one start node (a none start event, that is, a start event without an event definition), which cannot have incoming connections and should have one outgoing connection. Whenever a process is started, execution will start at this node and automatically continue to the first node linked to this start event, and so on. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

8.4.2. End events

8.4.2.1. End event
EndEvent
Figure 31. End event

The end of the process. A process should have one or more end events. The End Event should have one incoming connection and cannot have any outgoing connections. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Terminate: An End Event can terminate the entire process or just the path. When a process instance is terminated, it means its state is set to completed and all other nodes that might still be active (on parallel paths) in this process instance are canceled. Non-terminating end events are simply end for this path (execution of this branch will end here), but other parallel paths can still continue. A process instance will automatically complete if there are no more active paths inside that process instance (for example, if a process instance reaches a non-terminating end node but there are no more active branches inside the process instance, the process instance will be completed anyway). Terminating end events are visualized using a full circle inside the event node, non-terminating event nodes are empty. Note that, if you use a terminating event node inside a sub-process, you are terminating just that sub-process and top level continues.

8.4.2.2. Throwing error event
ErrorEndEvent
Figure 32. Throwing error event

An Error Event can be used to signal an exceptional condition in the process. It should have one incoming connection and no outgoing connections. When an Error Event is reached in the process, it will throw an error with the given name. The process will search for an appropriate error handler that is capable of handling this kind of fault. If no error handler is found, the process instance will be aborted. An Error Event contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • FaultName: The name of the fault. This name is used to search for appropriate exception handlers that are capable of handling this kind of fault.

  • FaultVariable: The name of the variable that contains the data associated with this fault. This data is also passed on to the exception handler (if one is found).

Error handlers can be specified using boundary events.

8.4.3. Intermediate events

8.4.3.1. Catching timer event
IntermediateTimerEvent
Figure 33. Catching timer event

Represents a timer that can trigger one or multiple times after a given period of time. A Timer Event should have one incoming connection and one outgoing connection. The timer delay specifies how long the timer should wait before triggering the first time. When a Timer Event is reached in the process, it will start the associated timer. The timer is canceled if the timer node is canceled (e.g., by completing or aborting the enclosing process instance). Consult the section “Timers” for more information. The Timer Event contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Timer delay: The delay that the node should wait before triggering the first time. The expression should be of the form [#d][#h][#m][#s][#[ms]]. This allows you to specify the number of days, hours, minutes, seconds and milliseconds (which is the default if you don’t specify anything). For example, the expression "1h" will wait one hour before triggering the timer. The expression could also use #{expr} to dynamically derive the delay based on some process variable. Expr in this case could be a process variable, or a more complex expression based on a process variable (e.g. myVariable.getValue()). CRON-like expressions are supported as well.

  • Timer period: The period between two subsequent triggers. If the period is 0, the timer should only be triggered once. The expression should be of the form [#d][#h][#m][#s][#[ms]]. You can specify the number of days, hours, minutes, seconds and milliseconds (which is the default if you don’t specify anything). For example, the expression "1h" will wait one hour before triggering the timer again. The expression could also use #{expr} to dynamically derive the period based on some process variable. Expr in this case could be a process variable, or a more complex expression based on a process variable (e.g. myVariable.getValue()).

Timer events can also be specified as boundary events on sub-processes and tasks. They should not be used on automatic tasks (such as script tasks) that have no wait state, because the timer would not have a chance to fire before the task completes.

8.4.3.2. Catching signal event
IntermediateSignalEvent
Figure 34. Catching signal event

A Signal Event can be used to respond to internal or external events during the execution of the process. A Signal Event should have one incoming connection and one outgoing connection. It specifies the type of event that is expected. Whenever that type of event is detected, the node connected to this event node will be triggered. It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • EventType: The type of event that is expected.

  • VariableName: The name of the variable that will contain the data associated with this event (if any) when this event occurs.

A process instance can be signaled that a specific event occurred using

ksession.signalEvent(eventType, data, processInstanceId)

This will trigger all (active) signal event nodes in the given process instance that are waiting for that event type. Data related to the event can be passed using the data parameter. If the event node specifies a variable name, this data will be copied to that variable when the event occurs.

It is also possible to use event nodes inside sub-processes. These event nodes will however only be active when the sub-process is active.

You can also generate a signal from inside a process instance. A script (in a script task or using on entry or on exit actions) can use

kcontext.getKieRuntime().signalEvent(eventType, data, kcontext.getProcessInstance().getId());

A throwing signal event could also be used to model the signaling of an event.
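
Putting these pieces together, a minimal sketch of starting a process that contains a catching signal event and then delivering that signal from application code (the process id, event type and payload are illustrative):

// start a process instance that waits on a catching signal event
ProcessInstance processInstance = ksession.startProcess("com.sample.signalprocess");

// later, deliver the event; the payload is copied into the VariableName, if one is defined
ksession.signalEvent("MyEvent", "some data", processInstance.getId());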

8.5. Gateways

8.5.1. Diverging gateway

DivergingGateway
Figure 35. Diverging gateway

Allows you to create branches in your process. A Diverging Gateway should have one incoming connection and two or more outgoing connections. There are three types of gateway nodes currently supported:

  • AND or parallel means that the control flow will continue in all outgoing connections simultaneously.

  • XOR or exclusive means that exactly one of the outgoing connections will be chosen. The decision is made by evaluating the constraints that are linked to each of the outgoing connections. The constraint with the lowest priority number that evaluates to true is selected. Constraints can be specified using different dialects. Note that you should always make sure that at least one of the outgoing connections will evaluate to true at runtime (the jBPM engine will throw an exception at runtime if it cannot find at least one outgoing connection).

  • OR or inclusive means that all outgoing connections whose condition evaluates to true are selected. Conditions are similar to the exclusive gateway, except that no priorities are taken into account. Note that you should make sure that at least one of the outgoing connections will evaluate to true at runtime because the jBPM engine will throw an exception at runtime if it cannot determine an outgoing connection.

It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Type: The type of the split node, i.e., AND, XOR or OR (see above).

  • Constraints: The constraints linked to each of the outgoing connections (in case of an exclusive or inclusive gateway).

8.5.2. Converging gateway

ConvergingGateway
Figure 36. Converging gateway

Allows you to synchronize multiple branches. A Converging Gateway should have two or more incoming connections and one outgoing connection. There are three types of splits currently supported:

  • AND or parallel means that it will wait until all incoming branches are completed before continuing.

  • XOR or exclusive means that it continues as soon as one of its incoming branches has been completed. If it is triggered from more than one incoming connection, it will trigger the next node for each of those triggers.

  • OR or inclusive means that it continues as soon as all direct active paths of its incoming branches have been completed. This is a complex merge behaviour described in the BPMN2 specification; in most cases it means that the OR join will wait for all active flows that started at the OR split. Some advanced cases (such as other gateways in between, or repeatable timers) result in a different "direct active path" calculation.

It contains the following properties:

  • Id: The id of the node (which is unique within one node container).

  • Name: The display name of the node.

  • Type: The type of the Join node, i.e. AND, OR or XOR.

8.6. Others

8.6.1. Variables

While the flow chart focuses on specifying the control flow of the process, it is usually also necessary to look at the process from a data perspective. Throughout the execution of a process, data can be retrieved, stored, passed on and used.

For storing runtime data during the execution of the process, process variables can be used. A variable is defined by a name and a data type. This could be a basic data type, such as boolean, int, or String, or any kind of Object subclass (it must implement the Serializable interface). Variables can be defined inside a variable scope. The top-level scope is the variable scope of the process itself. Subscopes can be defined using a Sub-Process. Variables that are defined in a subscope are only accessible for nodes within that scope.

Whenever a variable is accessed, the process will search for the appropriate variable scope that defines the variable. Nesting of variable scopes is allowed. A node will always search for a variable in its parent container. If the variable cannot be found, it will look in that one’s parent container, and so on, until the process instance itself is reached. If the variable cannot be found, a read access yields null, and a write access produces an error message, with the process continuing its execution.

Variables can be used in various ways:

  • Process-level variables can be set when starting a process by providing a map of parameters to the invocation of the startProcess method. These parameters will be set as variables on the process scope (see the sketch after this list).

  • Script actions can access variables directly, simply by using the name of the variable as a local parameter in their script. For example, if the process defines a variable of type "org.jbpm.Person" in the process, a script in the process could access this directly:

    // call method on the process variable "person"
    person.setAge(10);

    Changing the value of a variable in a script can be done through the knowledge context:

    kcontext.setVariable(variableName, value);
  • Service tasks (and reusable sub-processes) can pass the value of process variables to the outside world (or another process instance) by mapping the variable to an outgoing parameter. For example, the parameter mapping of a service task could define that the value of the process variable x should be mapped to a task parameter y right before the service is being invoked. You can also inject the value of a process variable into a hard-coded parameter String using #{expression}. For example, the description of a human task could be defined as You need to contact person #{person.getName()} (where person is a process variable), which will replace this expression by the actual name of the person when the service needs to be invoked. Similarly results of a service (or reusable sub-process) can also be copied back to a variable using a result mapping.

  • Various other nodes can also access data. Event nodes for example can store the data associated to the event in a variable, etc. Check the properties of the different node types for more information.

  • Process variables can be accessed also from the Java code of your application. It is done by casting of ProcessInstance to WorkflowProcessInstance. See the following example:

    variable = ((WorkflowProcessInstance) processInstance).getVariable("variableName");

    To list all the process variables see the following code snippet:

    org.jbpm.process.instance.ProcessInstance processInstance = ...;
    VariableScopeInstance variableScope = (VariableScopeInstance) processInstance.getContextInstance(VariableScope.VARIABLE_SCOPE);
    Map<String, Object> variables = variableScope.getVariables();

    Note that when you use persistence then you have to use a command based approach to get all process variables:

    Map<String, Object> variables = ksession.execute(new GenericCommand<Map<String, Object>>() {
        public Map<String, Object> execute(Context context) {
            KieSession ksession = ((KnowledgeCommandContext) context).getStatefulKnowledgesession();
            org.jbpm.process.instance.ProcessInstance processInstance = (org.jbpm.process.instance.ProcessInstance) ksession.getProcessInstance(piId);
            VariableScopeInstance variableScope = (VariableScopeInstance) processInstance.getContextInstance(VariableScope.VARIABLE_SCOPE);
            Map<String, Object> variables = variableScope.getVariables();
            return variables;
        }
    });
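
As referenced in the first item of the list above, a minimal sketch of providing process-level variables when starting a process (the variable names and the process id are illustrative):

Map<String, Object> params = new HashMap<String, Object>();
params.put("employee", "krisv");
params.put("reason", "Yearly performance evaluation");
ksession.startProcess("com.sample.evaluation", params);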

Finally, processes (and rules) all have access to globals, i.e. globally defined variables and data in the KIE session. Globals are directly accessible in actions just like variables. Globals need to be defined as part of the process before they can be used. You can for example define globals by clicking the globals button when specifying an action script in the Eclipse action property editor. You can also set the value of a global from the outside using ksession.setGlobal(name, value) or from inside process scripts using kcontext.getKieRuntime().setGlobal(name,value);.
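
For example (a minimal sketch; the global name log is illustrative and must match a global declared for the process):

// set the global from the application, before the process uses it
ksession.setGlobal("log", java.util.logging.Logger.getLogger("com.sample"));

// or set it from inside a process script
kcontext.getKieRuntime().setGlobal("log", java.util.logging.Logger.getLogger("com.sample"));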

8.6.2. Scripts

Action scripts can be used in different ways:

  • Within a Script Task,

  • As entry or exit actions, with a number of nodes.

Actions have access to globals and the variables that are defined for the process and the predefined variable kcontext. This variable is of type ProcessContext and can be used for several tasks:

  • Getting the current node instance (if applicable). The node instance could be queried for data, such as its name and type. You can also cancel the current node instance.

    NodeInstance node = kcontext.getNodeInstance();
    String name = node.getNodeName();
  • Getting the current process instance. A process instance can be queried for data (name, id, processId, etc.), aborted or signaled an internal event.

    ProcessInstance proc = kcontext.getProcessInstance();
    proc.signalEvent( type, eventObject );
  • Getting or setting the value of variables (see the sketch after this list).

  • Accessing the Knowledge Runtime allows you to do things like starting a process, signaling (external) events, inserting data, etc.
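
A short action script (Java dialect) combining these capabilities might look like the following sketch; the variable and signal names are illustrative only:

// read a process variable, update another one and signal an event
Object requester = kcontext.getVariable("requester");
kcontext.setVariable("approved", Boolean.TRUE);
kcontext.getKieRuntime().signalEvent("RequestReviewed", requester);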

jBPM supports multiple dialects, like Java, JavaScript and MVEL. Java actions should be valid Java code, same for JavaScript. MVEL actions can use the business scripting language MVEL to express the action. MVEL accepts any valid Java code but additionally provides support for nested accesses of parameters (e.g., person.name instead of person.getName()), and many other scripting improvements. Thus, MVEL expressions are more convenient for the business user. For example, an action that prints out the name of the person in the "requester" variable of the process would look like this:

// Java dialect
System.out.println( person.getName() );

// JavaScript dialect
print(person.name + '\n');

//  MVEL dialect
System.out.println( person.name );

8.6.3. Constraints

Constraints can be used in various locations in your processes, for example in a diverging gateway. jBPM supports two types of constraints:

  • Code constraints are boolean expressions, evaluated directly whenever they are reached. We support multiple dialects for expressing these code constraints: Java, JavaScript and MVEL. All code constraints have direct access to the globals and variables defined in the process. Here is an example of a valid Java code constraint, person being a variable in the process:

    return person.getAge() > 20;

    A similar example of a valid MVEL code constraint is:

    return person.age > 20;

    And for JavaScript:

    person.age > 20
  • Rule constraints are equivalent to normal Drools rule conditions. They use the Drools Rule Language syntax to express possibly complex constraints. These rules can, like any other rule, refer to data in the Working Memory. They can also refer to globals directly. Here is an example of a valid rule constraint:

    Person( age > 20 )

    This tests for a person older than 20 being in the Working Memory.

Rule constraints do not have direct access to variables defined inside the process. It is however possible to refer to the current process instance inside a rule constraint, by adding the process instance to the Working Memory and matching for the process instance in your rule constraint. We have added special logic to make sure that a variable processInstance of type WorkflowProcessInstance will only match the current process instance and not other process instances in the Working Memory. Note, however, that you are responsible for inserting the process instance into the session yourself and, possibly, updating it, for example using Java code or an on-entry, on-exit, or explicit action in your process. The following example of a rule constraint will search for a person with the same name as the value stored in the variable "name" of the process:

processInstance : WorkflowProcessInstance()
Person( name == ( processInstance.getVariable("name") ) )
// add more constraints here ...
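
How the process instance ends up in the Working Memory is up to you; as a minimal sketch, an on-entry action (Java dialect) could insert it before the constraint is evaluated:

// insert the current process instance so rule constraints can match it
kcontext.getKieRuntime().insert(kcontext.getProcessInstance());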

8.6.4. Timers

Timers wait for a predefined amount of time before triggering, once or repeatedly. They can be used to trigger certain logic after a certain period, or to repeat some action at regular intervals.

8.6.4.1. Configure timer with delay and period

A Timer node is set up with a delay and a period. The delay specifies the amount of time to wait after node activation before triggering the timer the first time. The period defines the time between subsequent trigger activations. A period of 0 results in a one-shot timer.

The delay and period expressions should be of the form [#d][#h][#m][#s][#[ms]]. You can specify the number of days, hours, minutes, seconds and milliseconds (which is the default if you don’t specify anything). For example, the expression "1h" will wait one hour before triggering the timer (again).

8.6.4.2. Configure timer with CRON like expression

Timer events can be configured with a CRON-like expression when timeCycle is used as the timer event definition. Note that the language attribute of the timeCycle definition must be set to cron; the cycle of the timer is then controlled in the same way as CRON jobs. CRON-like expressions are supported for:

  • start event timers

  • intermediate event timers

  • boundary event timers

The following is an example of a boundary timer definition with a CRON-like expression:

<bpmn2:boundaryEvent id="1" name="Send Update Timer" attachedToRef="_77A94B54-8B7C-4F8A-84EE-C1D310A343A6" cancelActivity="false">
   <bpmn2:outgoing>2</bpmn2:outgoing>
   <bpmn2:timerEventDefinition id="_erIyiJZ7EeSDh8PHobjSSA">
     <bpmn2:timeCycle xsi:type="bpmn2:tFormalExpression" id="_erIyiZZ7EeSDh8PHobjSSA" language="cron">0/1 * * * * ?</bpmn2:timeCycle>
   </bpmn2:timerEventDefinition>
</bpmn2:boundaryEvent>

This timer will fire every second and will continue for as long as the activity this boundary event is attached to is active.

8.6.4.3. Configure timer ISO-8601 date format

Since version 6, timers can be configured with a valid ISO 8601 date format that supports both one-shot and repeatable timers. Timers can be defined as a date and time representation, a time duration, or repeating intervals:

  • Date - 2013-12-24T20:00:00.000+02:00 - fires exactly at Christmas Eve at 8PM

  • Duration - PT1S - fires once after 1 second

  • Repeatable intervals - R/PT1S - fires every second with no limit; alternatively, R5/PT1S will fire five times at one-second intervals

8.6.4.4. Configure timer with process variables

The timer service is responsible for making sure that timers get triggered at the appropriate times. Timers can also be canceled, meaning that the timer will no longer be triggered.

Timers can be used in two ways inside a process:

  • A Timer Event may be added to the process flow. Its activation starts the timer, and when it triggers, once or repeatedly, it activates the Timer node’s successor. Subsequently, the outgoing connection of a timer with a positive period is triggered multiple times. Canceling a Timer node also cancels the associated timer, after which no more triggers will occur.

  • Timers can be associated with a Sub-Process or tasks as a boundary event.

8.6.4.5. Update timer within running process instance

In some cases a timer that has already been scheduled should be rescheduled to accommodate new requirements (prolonging or shortening the timer expiration time, or changing the delay, period or repeat limit).

As this involves several low-level steps, jBPM comes with a dedicated command that performs these operations as an atomic operation, making sure everything is done within the same transaction.

org.jbpm.process.instance.command.UpdateTimerCommand

The following timer events can be updated:

  • boundary timer event

  • intermediate timer event

Timers can be rescheduled by providing the following information to the UpdateTimerCommand:

  • processInstanceId - mandatory

  • timer node name - mandatory

In addition, one of the following three parameter sets must be used:

  • delay

  • period and repeatLimit

  • delay, period and repeatLimit

Example of how to update a timer event:

// first start process instance and record its id
long id = kieSession.startProcess(BOUNDARY_PROCESS_NAME).getId();

//set timer delay to 3s
kieSession.execute(new UpdateTimerCommand(id, BOUNDARY_TIMER_ATTACHED_TO_NAME, 3));

It is important that the update command is executed via the ksession executor to ensure it is done within a transaction (when persistence is used).
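
The other parameter sets are used in the same way. For example, a sketch of rescheduling the same timer with a new delay, period and repeat limit (the values are illustrative and the exact UpdateTimerCommand constructor overloads may differ between versions):

// set timer delay to 3s, then repeat every 10s, at most 5 times
kieSession.execute(new UpdateTimerCommand(id, BOUNDARY_TIMER_ATTACHED_TO_NAME, 3, 10, 5));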

8.7. Process Fluent API

While it is recommended to define processes using the graphical editor or the underlying XML (to shield yourself from internal APIs), it is also possible to define a process using the Process API directly. The most important process model elements are defined in the packages org.jbpm.workflow.core and org.jbpm.workflow.core.node. A "fluent API" is provided that allows you to easily construct processes in a readable manner using factories. At the end, you can validate the process that you were constructing manually.

8.7.1. Example

This is a simple example of a basic process with a script task only:

RuleFlowProcessFactory factory =
    RuleFlowProcessFactory.createProcess("org.jbpm.HelloWorld");
factory
    // Header
    .name("HelloWorldProcess")
    .version("1.0")
    .packageName("org.jbpm")
    // Nodes
    .startNode(1).name("Start").done()
    .actionNode(2).name("Action")
        .action("java", "System.out.println(\"Hello World\");").done()
    .endNode(3).name("End").done()
    // Connections
    .connection(1, 2)
    .connection(2, 3);
RuleFlowProcess process = factory.validate().getProcess();

KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
Resource resource = ks.getResources().newByteArrayResource(
    XmlBPMNProcessDumper.INSTANCE.dump(process).getBytes());
resource.setSourcePath("helloworld.bpmn2");
kfs.write(resource);
ReleaseId releaseId = ks.newReleaseId("org.jbpm", "helloworld", "1.0");
kfs.generateAndWritePomXML(releaseId);
ks.newKieBuilder(kfs).buildAll();
ks.newKieContainer(releaseId).newKieSession().startProcess("org.jbpm.HelloWorld");

You can see that we start by calling the static createProcess() method from the RuleFlowProcessFactory class. This method creates a new process with the given id and returns the RuleFlowProcessFactory that can be used to create the process. A typical process consists of three parts. The header part comprises global elements like the name of the process, imports, variables, etc. The nodes section contains all the different nodes that are part of the process. The connections section finally links these nodes to each other to create a flow chart.

In this example, the header contains the name and the version of the process and the package name. After that, you can start adding nodes to the current process. If you have auto-completion you can see that you have different methods to create each of the supported node types at your disposal.

When you start adding nodes to the process, in this example by calling the startNode(), actionNode() and endNode() methods, you can see that these methods return a specific NodeFactory, that allows you to set the properties of that node. Once you have finished configuring that specific node, the done() method returns you to the current RuleFlowProcessFactory so you can add more nodes, if necessary.

When you are finished adding nodes, you must connect them by creating connections between them. This can be done by calling the method connection, which will link previously created nodes.

Finally, you can validate the generated process by calling the validate() method and retrieve the created RuleFlowProcess object.

8.8. Testing

Even though business processes aren’t code (we even recommend making them as high-level as possible and avoiding implementation details), they have a life cycle like other development artefacts. And since business processes can be updated dynamically, testing them (so that you don’t break any use cases when making a modification) is really important as well.

8.8.1. Unit testing

When unit testing your process, you test whether the process behaves as expected in specific use cases, for example test the output based on the existing input. To simplify unit testing, jBPM includes a helper class called JbpmJUnitBaseTestCase (in the jbpm-test module) that you can use to greatly simplify your JUnit testing, by offering:

  • helper methods to create a new RuntimeManager and RuntimeEngine for a given (set of) process(es)

    • you can select whether you want to use persistence or not

  • assert statements to check

    • the state of a process instance (active, completed, aborted)

    • which node instances are currently active

    • which nodes have been triggered (to check the path that has been followed)

    • the values of variables

For example, consider the following "hello world" process containing a start event, a script task and an end event. The following JUnit test will create a new session, start the process and then verify whether the process instance completed successfully and whether these three nodes have been executed.

HelloWorld
public class ProcessPersistenceTest extends JbpmJUnitBaseTestCase {

    public ProcessPersistenceTest() {
        // setup data source, enable persistence
        super(true, true);
    }

    @Test
    public void testProcess() {
        // create runtime manager with single process - hello.bpmn
        createRuntimeManager("hello.bpmn");

        // get RuntimeEngine from the RuntimeManager to work with the jBPM engine
        RuntimeEngine runtimeEngine = getRuntimeEngine();

        // get access to KieSession instance
        KieSession ksession = runtimeEngine.getKieSession();

        // start process
        ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");

        // check whether the process instance has completed successfully
        assertProcessInstanceCompleted(processInstance.getId(), ksession);

        // check what nodes have been triggered
        assertNodeTriggered(processInstance.getId(), "StartProcess", "Hello", "EndProcess");
    }
}

JbpmJUnitBaseTestCase acts as the base test case class to be used for jBPM-related tests. It provides four usage areas:

  • JUnit life cycle methods

    • setUp: executed with @Before; configures the data source and EntityManagerFactory and cleans up the Singleton’s session id

    • tearDown: executed with @After; clears out history, closes the EntityManagerFactory and data source, and disposes of the RuntimeEngines and RuntimeManager

  • KIE base and KnowledgeSession management methods

    • createRuntimeManager creates a RuntimeManager for a given set of assets and the selected strategy

    • disposeRuntimeManager disposes of the RuntimeManager currently active in the scope of the test

    • getRuntimeEngine creates a new RuntimeEngine for the given context

  • Assertions

    • assertProcessInstanceCompleted

    • assertProcessInstanceAborted

    • assertProcessInstanceActive

    • assertNodeActive

    • assertNodeTriggered

    • assertProcessVarExists

    • assertNodeExists

    • assertVersionEquals

    • assertProcessNameEquals

  • Helper methods

    • getDs - returns the currently configured data source

    • getEmf - returns the currently configured EntityManagerFactory

    • getTestWorkItemHandler - returns a test work item handler that can be registered in addition to what is registered by default

    • clearHistory - clears the history log

    • setupPoolingDataSource - sets up the data source

JbpmJUnitBaseTestCase supports all three predefined RuntimeManager strategies as part of unit testing. It is enough to specify which strategy should be used when creating the runtime manager as part of a single test:

public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {

    private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);

    public ProcessHumanTaskTest() {
        super(true, false);
    }

    @Test
    public void testProcessProcessInstanceStrategy() {
        RuntimeManager manager = createRuntimeManager(Strategy.PROCESS_INSTANCE, "manager", "humantask.bpmn");
        RuntimeEngine runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get());
        KieSession ksession = runtimeEngine.getKieSession();
        TaskService taskService = runtimeEngine.getTaskService();

        int ksessionID = ksession.getId();
        ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello");

        assertProcessInstanceActive(processInstance.getId(), ksession);
        assertNodeTriggered(processInstance.getId(), "Start", "Task 1");

        manager.disposeRuntimeEngine(runtimeEngine);
        runtimeEngine = getRuntimeEngine(ProcessInstanceIdContext.get(processInstance.getId()));

        ksession = runtimeEngine.getKieSession();
        taskService = runtimeEngine.getTaskService();

        assertEquals(ksessionID, ksession.getId());

        // let john execute Task 1
        List<TaskSummary> list = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
        TaskSummary task = list.get(0);
        logger.info("John is executing task {}", task.getName());
        taskService.start(task.getId(), "john");
        taskService.complete(task.getId(), "john", null);

        assertNodeTriggered(processInstance.getId(), "Task 2");

        // let mary execute Task 2
        list = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
        task = list.get(0);
        logger.info("Mary is executing task {}", task.getName());
        taskService.start(task.getId(), "mary");
        taskService.complete(task.getId(), "mary", null);

        assertNodeTriggered(processInstance.getId(), "End");
        assertProcessInstanceCompleted(processInstance.getId(), ksession);
    }
}

The above is a more complete example that uses the PerProcessInstance runtime manager strategy and uses the task service to deal with user tasks.

8.8.1.1. Testing integration with external services

Real-life business processes typically include the invocation of external services (like for example a human task service, an email server or your own domain-specific services). One of the advantages of our domain-specific process approach is that you can specify yourself how to actually execute your own domain-specific nodes, by registering a handler. And this handler can be different depending on your context, allowing you to use testing handlers for unit testing your process. When you are unit testing your business process, you can register test handlers that then verify whether specific services are requested correctly, and provide test responses for those services. For example, imagine you have an email node or a human task as part of your process. When unit testing, you don’t want to send out an actual email but rather test whether the email that is requested contains the correct information (for example the right To address, a personalized body, etc.).

A TestWorkItemHandler is provided by default that can be registered to collect all work items (a work item represents one unit of work, like for example sending one specific email or invoking one specific service and contains all the data related to that task) for a given type. This test handler can then be queried during unit testing to check whether specific work was actually requested during the execution of the process and that the data associated with the work was correct.

The following example describes how a process that sends out an email could be tested. This test case in particular will test whether an exception is raised when the email could not be sent (which is simulated by notifying the jBPM engine that sending the email could not be completed). The test case uses a test handler that simply registers when an email was requested (and allows you to test the data related to the email like from, to, etc.). Once the jBPM engine has been notified the email could not be sent (using abortWorkItem(..)), the unit test verifies that the process handles this case successfully by logging this and generating an error, which aborts the process instance in this case.

HelloWorld2
public void testProcess2() {

    // create runtime manager with single process - hello.bpmn
    createRuntimeManager("sample-process.bpmn");
    // get RuntimeEngine from the RuntimeManager to work with the jBPM engine
    RuntimeEngine runtimeEngine = getRuntimeEngine();

    // get access to KieSession instance
    KieSession ksession = runtimeEngine.getKieSession();

    // register a test handler for "Email"
    TestWorkItemHandler testHandler = getTestWorkItemHandler();

    ksession.getWorkItemManager().registerWorkItemHandler("Email", testHandler);

    // start the process
    ProcessInstance processInstance = ksession.startProcess("com.sample.bpmn.hello2");

    assertProcessInstanceActive(processInstance.getId(), ksession);
    assertNodeTriggered(processInstance.getId(), "StartProcess", "Email");

    // check whether the email has been requested
    WorkItem workItem = testHandler.getWorkItem();
    assertNotNull(workItem);
    assertEquals("Email", workItem.getName());
    assertEquals("me@mail.com", workItem.getParameter("From"));
    assertEquals("you@mail.com", workItem.getParameter("To"));

    // notify the jBPM engine the email has been sent
    ksession.getWorkItemManager().abortWorkItem(workItem.getId());
    assertProcessInstanceAborted(processInstance.getId(), ksession);
    assertNodeTriggered(processInstance.getId(), "Gateway", "Failed", "Error");

}

8.8.1.2. Configuring persistence

You can configure whether you want to execute the JUnit tests using persistence or not. By default, the JUnit tests will use persistence, meaning that the state of all process instances will be stored in an (in-memory H2) database (which is started by the JUnit test during setup) and a history log will be used to check assertions related to execution history. When persistence is not used, process instances will only live in memory and an in-memory logger is used for history assertions.

Persistence (and setup of the data source) is controlled by the super constructor and allows the following configurations:

  • default, no-arg constructor - the simplest test case configuration (does NOT initialize a data source and does NOT configure session persistence) - this is usually used for in-memory process management, without human task interaction

  • super(boolean, boolean) - allows you to explicitly configure persistence and the data source. This is the most common way of bootstrapping test cases for jBPM

    • super(true, false) - to execute with in-memory process management and persistent human tasks

    • super(true, true) - to execute with persistent process management and persistent human tasks

  • super(boolean, boolean, string) - same as super(boolean, boolean) but allows you to use a persistence unit name other than the default (org.jbpm.persistence.jpa)

public class ProcessHumanTaskTest extends JbpmJUnitBaseTestCase {

    private static final Logger logger = LoggerFactory.getLogger(ProcessHumanTaskTest.class);

    public ProcessHumanTaskTest() {
        // configure this test to not use persistence for the jBPM engine but still use it for human tasks
        super(true, false);
    }
}

9. Human Tasks

9.1. Introduction

An important aspect of business processes is human task management. While some of the work performed in a process can be executed automatically, some tasks need to be executed by human actors.

jBPM supports a special human task node inside processes for modeling this interaction with human users. This human task node allows process designers to define the properties related to the task that the human actor needs to execute, like for example the type of task, the actor(s), or the data associated with the task.

jBPM also includes a so-called human task service, a back-end service that manages the life cycle of these tasks at runtime. The jBPM implementation is based on the WS-HumanTask specification. Note however that this implementation is fully pluggable, meaning that users can integrate their own human task solution if necessary.

In order to have human actors participate in your processes, you first need to (1) include human task nodes inside your process to model the interaction with human actors, (2) integrate a task management component (like for example the WS-HumanTask based implementation provided by jBPM) and (3) have end users interact with a human task client to request their task list and claim and complete the tasks assigned to them. Each of these three elements will be discussed in more detail in the next sections.

9.2. Using User Tasks in our Processes

jBPM supports the use of human tasks inside processes using a special User Task node defined by the BPMN2 specification (as shown in the figure below). A User Task node represents an atomic task that needs to be executed by a human actor.

user task

[Although jBPM has a special user task node for including human tasks inside a process, human tasks are considered the same as any other kind of external service that needs to be invoked and are therefore simply implemented as a domain-specific service. See the chapter on domain-specific processes to learn more about this.]

A User Task node contains the following core properties:

  • Actors: The actors that are responsible for executing the human task. A list of actor ids can be specified using a comma (',') as separator.

  • Group: The group id that is responsible for executing the human task. A list of group ids can be specified using a comma (',') as separator.

  • Name: The display name of the node.

  • TaskName: The name of the human task. This name is used to link the task to a Form. It also represents the internal name of the Task that can be used for other purposes.

  • DataInputSet: all the input variables that the task will receive to work on. Usually you will be interested in copying variables from the scope of the process to the scope of the task. (Look at the data mappings section for an example)

  • DataOutputSet: all the output variables that will be generated by the execution of the task. Here you specify the names of all the variables in the context of the task that you are interested in copying to the context of the process. (Look at the data mappings section for an example)

  • Assignments: here you specify which process variable will be linked to each Data Input and Data Output mapping. (Look at the data mappings section for an example)

You can edit these variables in the properties view (see below) when selecting the User Task node.

properties panel

A User Task node also contains the following extra properties:

  • Comment: A comment associated with the human task. Here you can use expressions.

  • Content: The data associated with this task.

  • Priority: An integer indicating the priority of the human task.

  • Skippable: Specifies whether the human task can be skipped, i.e., whether the actor may decide not to execute the task.

  • On entry and on exit actions: Action scripts that are executed upon entry and exit of this node, respectively.

properties panel extra

9.2.1. Swimlanes

User tasks can be used in combination with swimlanes to assign multiple human tasks to the same actor. Whenever the first task in a swimlane is created, and that task has an actorId specified, that actorId will be assigned to (all other tasks of) that swimlane as well. Note that this overrides the actorId of subsequent tasks in that swimlane (if specified): only the actorId of the first human task in a swimlane is taken into account, and all other tasks then take the actorId as assigned in the first one.

ActorId assignment only works when a single actor is specified. Since the ActorId field can contain multiple actors (john,mary,peter), auto-assignment for the first task is not performed when multiple values are found.

Whenever a human task that is part of a swimlane is completed, the actorId of that swimlane is set to the actorId that executed that human task. This allows, for example, assigning a human task to a group of users and assigning future tasks of that swimlane to the user that claimed the first task. This will also automatically change the assignment of tasks if at some point one of the tasks is reassigned to another user.

It is also possible to disable the autoclaim functionality of swimlanes. In that case, the swimlane works as a visual element to group tasks in the process diagram, but the tasks which belong to a swimlane won’t be assigned automatically. The Autoclaim functionality is set to true by default. If you require the Autoclaim property to be false by default, set the following runtime environment entry in your deployment descriptor on a global or a project level:

  • Name: Autoclaim

  • Value: "false"

For example, if you want to set the entry in the XML deployment descriptor on the project level, add the following to the kie-deployment-descriptor.xml file:

<environment-entries>
  ..
    <environment-entry>
        <resolver>mvel</resolver>
        <identifier>new String ("false")</identifier>
        <parameters/>
        <name>Autoclaim</name>
    </environment-entry>
  ..
</environment-entries>

If you are setting the runtime environment property using the API, note that the value is a String, not a Boolean type.
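
For example, a minimal sketch of setting the same entry when building the runtime environment programmatically through the RuntimeEnvironmentBuilder (whether this takes effect depends on how your runtime is bootstrapped; the asset name is illustrative):

RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultBuilder()
    .addAsset(ResourceFactory.newClassPathResource("humantask.bpmn"), ResourceType.BPMN2)
    // note: the value is a String, not a Boolean
    .addEnvironmentEntry("Autoclaim", "false")
    .get();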

For more information about deployment descriptors, see Deployment descriptors.

9.3. Task escalations and notifications

There are a number of situations that can raise the need for task escalation. For example, if a user is assigned to a task but is unavailable, the task should be automatically reassigned to another user or group. Escalation can be defined for tasks that are in one of the following states:

  • not started (READY or RESERVED)

  • not completed (IN_PROGRESS)

Whenever an escalation is reached, its associated users/groups will be assigned to the task as potential owners, replacing those that were previously set. If the task had an actual owner assigned, it will be reset and the task will be placed in the READY state.

9.3.1. Designing a task escalation

You must set the following attribute values when designing a task escalation in the BPMN2 editor:

editor reassignment
  • Users: Comma-separated list of user IDs that must be assigned to the task during escalation. Acceptable values are strings and expressions, such as #{user-id}.

  • Groups: Comma-separated list of group IDs that must be assigned to the task during escalation. Acceptable values are strings and expressions such as #{group-id}.

  • Expires At: Time or duration definition stating when the escalation should start. For a detailed description, see the Time and Duration definitions section.

  • Type: Identifies the type of task state for which the escalation should start. For example, not-started or not-completed.

9.3.2. Email notifications

In addition to defining task escalation values, email notifications can be defined and sent for tasks that are in one of the following states:

  • not started (READY or RESERVED)

  • not completed (IN_PROGRESS)

9.3.3. Designing email notifications

The following attributes must be set when designing an email notification in the BPMN2 editor:

editor notification
  • Type: Identifies the type of task state for which the notification should start. For example, not-started or not-completed.

  • Task expiration definition: Defines the time or duration after which the escalation or notification occurs. For more information, see the Time and Duration definitions section.

  • From: (Optional) user or group ID. Acceptable values are strings and expressions.

  • To Users: Comma-separated list of user IDs that are the recipients of the notification.

  • To Groups: Comma-separated list of group IDs that are recipients of the notification.

  • Reply To: (Optional) user or group ID that will receive replies from the notification.

  • Subject: Subject of the notification. Acceptable values are strings and expressions.

  • Body: Body of the notification. Acceptable values are strings and expressions.

A Notification can reference process variables using the #{processVariable} expression and task variables using the ${taskVariable} expression. The process variables are resolved during task creation time and task variables are resolved at notification time. The following additional task variables can be defined for notifications:

  • taskId: Internal ID of a task instance

  • processInstanceId: Internal ID of a process instance that the task belongs to

  • workItemId: Internal ID of a work item that created this task

  • owners: List of users and groups that are potential owners of the task

  • doc: Map that contains regular task variables

The following illustration contains the body of a simple notification message and shows how the different variables can be accessed:

<html>
	<body>
		<b>${owners[0].id} you have been assigned to a task (task-id ${taskId})</b><br>
		You can access it in your task
		<a href="http://localhost:8080/jbpm-console/app.html#errai_ToolSet_Tasks;Group_Tasks.3">inbox</a><br/>
		Important technical information that can be of use when working on it<br/>
		- process instance id - ${processInstanceId}<br/>
		- work item id - ${workItemId}<br/>

		<hr/>

		Here are some task variables available
		<ul>
			<li>ActorId = ${doc['ActorId']}</li>
			<li>GroupId = ${doc['GroupId']}</li>
			<li>Comment = ${doc['Comment']}</li>
		</ul>
		<hr/>
		Here are all potential owners for this task
		<ul>
		$foreach{orgEntity : owners}
			<li>Potential owner = ${orgEntity.id}</li>
		$end{}
		</ul>

		<i>Regards from jBPM team</i>
	</body>
</html>

9.3.4. Time and Duration definitions

With the ISO 8601 format addition, duration definitions such as "2s" or "4h" became single time executions. In order to define repeatable executions you must now use the ISO 8601 repeatable format (see Repeatable execution below).

Use the Task expiration definition attribute for both task escalations and notifications to define when the escalation or notification will occur. The Task expiration definition attribute can be set in several ways: as Time period, as Date/time and as Expression.

9.3.4.1. Time period

In default mode the Time period widget will generate a One time execution (see One time execution below). If needed, the Notification repeat switch allows you to set a Repeatable execution of two types: until the Task state changes (like R/P1Y) and until the Repeat count is reached (like R4/P1Y) (see Repeatable execution below).

9.3.4.2. Date/time

Notify after task expiration can be set by choosing the date and time in the dateTime picker and by choosing the timezone. If needed, the time zone can be switched from a timezone offset to a time zone name and back. It is possible to set Notification repeat by switching on Notification repeat, as for Time period, and it is also possible to set how often the notification will fire.

9.3.4.3. Expression

In other cases it is possible to set Task expiration as a string value or an expression. For example, #{expiresAt}. The following options are available to define your escalation or notification definitions:

9.3.4.4. One time execution

A one-time execution can be defined with either the simple time format, for example:

  • 2m - in two minutes

  • 4h - in four hours

  • 6d - in 6 days

or with ISO 8601 date and time format, for example:

  • PT2M - in two minutes

  • PT4H - in four hours

  • P6D - in six days

9.3.4.5. Repeatable execution

When using the ISO 8601 format, you can define the rescheduling of your task escalation or notification using one of the following options that follow the ISO 8601 repeating intervals specification:

  • R/duration - First triggers at the current time plus duration, and repeats at each duration interval. For example, "R5/PT4H" triggers four hours from now and repeats five times at four-hour intervals. "R/PT2S" is an unbounded interval and triggers every two seconds until the task is no longer in the not-started or not-completed states.

  • R/startDate/duration - First triggers at the startDate and repeats with the set duration as the period. For example, "R2/2019-01-01T13:00:00Z/PT6H" is a trigger that first fires on January 1st 2019 at 1pm and re-fires two times, six and twelve hours after the first fire.

  • R/duration/endDate - First triggers at endDate - duration and repeats with the set duration as the period. For example, "R2/PT6H/2019-01-01T13:00:00Z" is a trigger that first fires on January 1st 2019 at 7am and re-fires two times, six and twelve hours after the first fire.

  • R/startDate/endDate - First triggers at the startDate, with the duration set to endDate - startDate. For example, "R2/2019-01-01T13:00:00Z/2019-01-01T16:00:00Z" is a trigger that first fires on January 1st 2019 at 1pm and re-fires two times, three and six hours after the first fire.

You can use one unbounded or multiple bounded (non-ISO8601) definitions for each escalation or notification type (such as not-completed or not-started). You cannot mix unbounded and bounded notifications and escalations. For example, you cannot use R2/PT1S for a not-completed notification and R/PT2S for a not-completed escalation because both are of the not-completed type. However, you can use R2/PT1S for a not-started escalation and R/PT2S for a not-completed escalation. Whether a definition is an escalation or a notification is irrelevant, but the type distinction is important.

9.4. Data Mappings

Human tasks typically present some data related to the task that needs to be performed to the actor that is executing the task and usually also request the actor to provide some result data related to the execution of the task. Task forms are typically used to present this data to the actor and request results.

The data that will be used by the Task needs to be specified when we define the User Task in our Process. In order to do that we need to define which data will be copied from the process context to the task context. Notice that the data is copied, so it can be modified inside the Task context but it will not affect the process variables unless we decide to copy back the value from the task to the process context.

Most of the time, Forms are used to display data to the end user, allowing them to generate/create new data that will be propagated to the process context to be used by future activities. In order to decide how the information flows from the process to a particular task and from the task to the process, we need to define which pieces of information will be automatically copied by the jBPM engine. The following sections show how to do these mappings by configuring the DataInputSet, DataOutputSet and Assignments properties of a User Task.

Let’s start defining the Task DataInputSet:

data input

Both GroupId and Comment are automatically generated, so you don’t need to worry about those. In this case the only user-defined Data Input is called in_name. This means that the task will be receiving information from the process context and internally this variable will be called in_name. The type is also specified here.

The Data Outputs represent the data that will be generated by the task. In this case two variables of type String called out_name and out_mail, and two Integer variables called out_age and out_score, are defined. This means that inside the task context we will need to set the values of these variables.

data output

Finally, all the connections with the process context need to be done in the Data Assignments. The main idea here is to define how Data Inputs and Data Outputs will be associated with process variables.

data assignments

As shown in the previous screenshot, the assignments between the process variables (in this case name, age, mail and hr_score) and the Data Inputs and Outputs are done in the Data Assignments screen. Notice that the example uses a convention that makes it easy to see which variables are internal Task variables (Data Inputs/Outputs) by using the "in_" and "out_" prefixes in the variable names. Using this convention you can quickly understand the Assignments screen. The first row maps the process variable called name to the data input called in_name. The second row maps the data output called out_mail to the process variable called mail, and so on.

At runtime these mappings will automatically copy the variable contents from one context (process or task) to the other for us.

9.5. Task Lifecycle

From the perspective of a process, when a user task node is encountered during the execution, a human task is created. The process will then only leave the user task node when the associated human task has been completed or aborted.

The human task itself usually has a complete life cycle as well. For details beyond what is described below, please check out the WS-HumanTask specification. The following diagram is from the WS-HumanTask specification and describes the human task life cycle.

WSHT lifecycle

A newly created task starts in the "Created" stage. Usually, it will then automatically become "Ready", after which the task will show up on the task list of all the actors that are allowed to execute the task. The task will stay "Ready" until one of these actors claims the task, indicating that he or she will be executing it.

When a user then eventually claims the task, the status will change to "Reserved". Note that a task that only has one potential (specific) actor will automatically be assigned to that actor upon creation of the task. When the user who has claimed the task starts executing it, the task status will change from "Reserved" to "InProgress".

Lastly, once the user has performed and completed the task, the task status will change to "Completed". In this step, the user can optionally specify the result data related to the task. If the task could not be completed, the user could also indicate this by using a fault response, possibly including fault data, in which case the status would change to "Failed".
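
In terms of the Task Service API described later in this chapter, this normal life cycle roughly corresponds to the following calls (a minimal sketch; taskService, taskId and the results map are assumed to be available, and "john" is a potential owner):

taskService.claim(taskId, "john");              // Ready -> Reserved
taskService.start(taskId, "john");              // Reserved -> InProgress
taskService.complete(taskId, "john", results);  // InProgress -> Completed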

While the life cycle explained above is the normal life cycle, the specification also describes a number of other life cycle methods, including:

  • Delegating or forwarding a task, so that the task is assigned to another actor

  • Revoking a task, so that it is no longer claimed by one specific actor but is (re)available to all actors allowed to take it

  • Temporarily suspending and resuming a task

  • Stopping a task in progress

  • Skipping a task (if the task has been marked as skippable), in which case the task will not be executed

9.6. Task Permissions

Only users associated with a specific task are allowed to modify or retrieve information about the task. This allows users to create a jBPM workflow with multiple tasks and yet still be assured of both the confidentiality and integrity of the task status and information associated with a task.

Some task operations will end up throwing an org.jbpm.services.task.exception.PermissionDeniedException when used with information about an unauthorized user. For example, when a user is trying to directly modify the task (for example, by trying to claim or complete the task), the PermissionDeniedException will be thrown if that user does not have the correct role for that operation. Furthermore, a user will not be able to view or retrieve tasks that the user is not involved with, especially if this is via the Business Central application.
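
For example, a minimal sketch of handling this exception when completing a task on behalf of a user (taskService and taskId are assumed to be available):

try {
    taskService.complete(taskId, "john", null);
} catch (PermissionDeniedException e) {
    // "john" is not the actual owner, or lacks the required role, for this operation
}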

User 'Administrator' and group 'Administrators' are automatically added to each Human Task.

9.6.1. Task Permissions Matrix

The permissions matrix below summarizes the actions that specific user roles are allowed to perform. On the left side, possible operations are listed, while user roles are listed across the top of the matrix.

The cells of the permissions matrix contain one of three possible characters, each of which indicate the user role permissions for that operation:

  • a "+" indicates that the user role CAN do the specified operation

  • a “-” indicates that the user role MAY NOT do the specified operation

  • a “0” indicates that the user role MAY NOT do the specified operation, and that it is also not an operation that matches the user’s role ("not applicable")

Furthermore, the following words or abbreviations in the table header refer to the following roles:

Table 7. Task roles in the permissions table

  • Initiator (Task Initiator): The user who creates the task instance.

  • Stakeholder (Task Stakeholder): The user involved in the task: this user can influence the progress of a task by performing administrative actions on the task instance.

  • Potential (Potential Owner): The user who can claim the task before it has been claimed, or after it has been released or forwarded. Only tasks that have the status "Ready" may be claimed; a potential owner becomes the actual owner of a task by claiming the task.

  • Actual (Actual Owner): The user who has claimed the task and will progress the task to completion or failure.

  • Administrator (Business Administrator): A "super user" who may modify the status or progress of a task at any point in a task’s lifecycle.

User roles are assigned to users by the definition of the task in the jBPM (BPMN2) process definition.

Permissions Matrices

The following matrix describes the authorizations for all operations which modify a task:

Table 8. Main operations permissions matrix

Operation | Initiator | Stakeholder | Potential | Actual | Administrator
activate  |     +     |      +      |     0     |   0    |       +
claim     |     -     |      +      |     +     |   0    |       +
complete  |     -     |      +      |     0     |   +    |       +
delegate  |     +     |      +      |     +     |   +    |       +
fail      |     -     |      +      |     0     |   +    |       +
forward   |     +     |      +      |     +     |   +    |       +
nominate  |     +     |      +      |     +     |   +    |       +
release   |     +     |      +      |     +     |   +    |       +
remove    |     -     |      0      |     0     |   0    |       +
resume    |     +     |      +      |     +     |   +    |       +
skip      |     +     |      +      |     +     |   +    |       +
start     |     -     |      +      |     +     |   +    |       +
stop      |     -     |      +      |     0     |   +    |       +
suspend   |     +     |      +      |     +     |   +    |       +

The matrix below describes the authorizations used when retrieving task information. In short, it says that all users who have any role with regard to the specific task are allowed to see the task. This applies to all operations that are used to retrieve any type of information about the task.

Table 9. Retrieval operations permissions matrix

Operation | Initiator | Stakeholder | Potential | Actual | Administrator
get       |     +     |      +      |     +     |   +    |       +

9.7. Task Service and The jBPM engine

As far as the jBPM engine is concerned, human tasks are similar to any other external service that needs to be invoked and are implemented as a domain-specific service. (For more on domain-specific services, see the chapter on them here.) Because a human task is an example of such a domain-specific service, the process itself only contains a high-level, abstract description of the human task to be executed and a work item handler that is responsible for binding this (abstract) task to a specific implementation.

Users can plug in any human task service implementation, such as the one that’s provided by jBPM, or they may register their own implementation. In the next paragraphs, we will describe the human task service implementation provided by jBPM.

The jBPM project provides a default implementation of a human task service based on the WS-HumanTask specification. If you do not need to integrate jBPM with another existing implementation of a human task service, you can use this service. The jBPM implementation manages the life cycle of the tasks (creation, claiming, completion, etc.) and stores the state of all the tasks, task lists, and other associated information. It also supports features like internationalization, calendar integration, different types of assignments, delegation, escalation and deadlines. The code for the implementation itself can be found in the jbpm-human-task module.

The jBPM task service implementation is based on the WS-HumanTask (WS-HT) specification. This specification defines (in detail) the model of the tasks, the life cycle, and many other features. It is very comprehensive and the first version can be found here.

9.8. Task Service API

The human task service exposes a Java API for managing the life cycle of tasks. This allows clients to integrate (at a low level) with the human task service. Note that end users should probably not interact with this low-level API directly, but use one of the more user-friendly task clients (see below) instead. These clients offer a graphical user interface to request task lists, claim and complete tasks, and manage tasks in general. The task clients listed below use the Java API to internally interact with the human task service. Of course, the low-level API is also available so that developers can use it in their code to interact with the human task service directly.

A task service (interface org.kie.api.task.TaskService) offers the following methods (among others) for managing the life cycle of human tasks:

              ...

              void start( long taskId, String userId );

              void stop( long taskId, String userId );

              void release( long taskId, String userId );

              void suspend( long taskId, String userId );

              void resume( long taskId, String userId );

              void skip( long taskId, String userId );

              void delegate(long taskId, String userId, String targetUserId);

              void complete( long taskId, String userId, Map<String, Object> results );

              ...

If you take a look at the method signatures you will notice that almost all of these methods take the following arguments:

  • taskId: The id of the task that we are working with. This is usually extracted from the currently selected task in the user task list in the user interface.

  • userId: The id of the user that is executing the action. This is usually the id of the user that is logged in into the application.

There is also an internal interface that you should check for more methods to interact with the Task Service; this interface remains internal until it gets tested. Future versions of the external (public) interface may include some of the methods proposed in the InternalTaskService interface. If you want to make use of the methods provided by this interface you need to manually cast to InternalTaskService. One method that can be useful from this interface is getTaskContent():

               Map<String, Object> getTaskContent( long taskId );
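
For example, a minimal usage sketch, assuming a TaskService instance and a task id are already available:

              Map<String, Object> content = ((InternalTaskService) taskService).getTaskContent(taskId);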

This method saves you from doing all the boilerplate of getting the ContentMarshallerContext to unmarshall the serialized version of the task content. If you only want to use the stable/public APIs you can just copy what this method does:

              Task taskById = taskQueryService.getTaskInstanceById(taskId);
              Content contentById = taskContentService.getContentById(taskById.getTaskData().getDocumentContentId());
              ContentMarshallerContext context = getMarshallerContext(taskById);
              Object unmarshalledObject = ContentMarshallerHelper.unmarshall(contentById.getContent(), context.getEnvironment(), context.getClassloader());
              if (!(unmarshalledObject instanceof Map)) {
                  throw new IllegalStateException(" The Task Content Needs to be a Map in order to use this method and it was: "+unmarshalledObject.getClass());

              }
              Map<String, Object> content = (Map<String, Object>) unmarshalledObject;
              return content;

Because the content of the Task can be any Object, the previous method assumes that you are storing a Map of objects. If you are storing something other than a Map you should perform the corresponding checks.

9.8.1. Task event listener

The task service supports task listeners that are invoked upon the various life cycle events happening on a given task instance. In the majority of cases task event listeners are used to intercept a certain operation and perform additional logic - like storing task information in separate tables for business activity monitoring needs.

Task event listeners are pluggable and users can provide their own implementation of the org.kie.api.task.TaskLifeCycleEventListener interface. There are beforeTask* and afterTask* methods that are invoked when the given event occurs on a task instance.

TaskEvent (org.kie.api.task.TaskEvent) is the only argument available to the listener and provides access to:

  • the Task instance that the event corresponds to

  • the TaskContext, which provides access to services for further processing needs, such as the TaskPersistenceContext

In many cases implementors of a task event listener need access to task variables (input, output or both) to perform the required operations. This can be done as described above (using the various services and the content marshaller helper), though in many cases that leads to code duplication in multiple listeners; therefore extended support was added in 6.5 to simply use the TaskContext to obtain that information.

loadTaskVariables(Task task);

The loadTaskVariables method can be used to populate both input and output variables of a given task with a simple, single method call. The method is a "no op" in case the task variables are already set on the task.

To improve performance, task variables are automatically set when they are available - usually given by the caller on the task service:

  • when a task is created it usually has input variables; these variables are then set on the Task instance, so there is no need to use the loadTaskVariables method when only task input variables are needed (only input variables are available when the task is being created) - this applies to the handling of beforeTaskAdded and afterTaskAdded events

  • when a task is completed it usually has output variables; these variables are set on the task, so there is no need to use the loadTaskVariables method if only task output variables are required.

In all other cases, loadTaskVariables should be used to populate task variables.

It is enough to call it once (for example in a beforeTask* method of the listener), as the variables will then be available to both the beforeTask* and afterTask* methods.
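
A minimal sketch of a listener using this support is shown below. It assumes the DefaultTaskEventListener convenience base class (empty implementations of all listener methods) from the jbpm-human-task module is available, and that the TaskEvent exposes the context via getTaskContext() as described above; otherwise implement org.kie.api.task.TaskLifeCycleEventListener directly:

public class MonitoringTaskEventListener extends DefaultTaskEventListener {

    @Override
    public void afterTaskCompletedEvent(TaskEvent event) {
        // populate input and output variables on the task (no-op if already set)
        event.getTaskContext().loadTaskVariables(event.getTask());
        // the task variables can now be read, e.g. to store them for monitoring purposes
    }
}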

9.8.2. Data model of task service

Below is the database model used by the task service, with all tables and their relationships illustrated.

task schema

9.9. Interacting with the Task Service

In order to get access to the Task Service API it is recommended to let the Runtime Manager make sure that everything is set up correctly. Look at the Runtime Manager section for more information. From the API perspective you should be doing something like this:

// ...

RuntimeEngine engine = runtimeManager.getRuntimeEngine(EmptyContext.get());
KieSession kieSession = engine.getKieSession();
// Start a process
kieSession.startProcess("CustomersRelationship.customers", params);
// Do Task Operations
TaskService taskService = engine.getTaskService();
List<TaskSummary> tasksAssignedAsPotentialOwner = taskService.getTasksAssignedAsPotentialOwner("mary", "en-UK");
// Pick the task to work on
TaskSummary taskSummary = tasksAssignedAsPotentialOwner.get(0);

// Claim Task
taskService.claim(taskSummary.getId(), "mary");
// Start Task
taskService.start(taskSummary.getId(), "mary");

// ...

If you use this approach, there is no need to register the Task Service with the jBPM engine. The Runtime Manager will do that for you automatically. If you don’t use the Runtime Manager, you will be responsible for setting the LocalHTWorkItemHandler in the session, so that the Task Service notifies the jBPM engine when a task is completed and the jBPM engine notifies the Task Service when a task has been created.

In jBPM 6.x the Task Service runs locally to the jBPM engine, and for that reason multiple light clients can be created for different jBPM engine instances. All the clients will share the same database (the backend storage for the tasks).

9.10. Experimental features

9.10.1. SubTasks

The "Subtasks" feature is an experimental feature in the task service. This feature allows one task to have sub-tasks in a parent-child relationship. The parent task can auto-complete depending on the state of its children (and the subtask strategy used).

You can use it by setting the parentId of a task, either when creating the task manually via the task service or otherwise by setting the ParentId parameter of the task definition in the BPMN2 process definition.

10. Persistence and Transactions

10.1. Process Instance State

jBPM allows the persistent storage of certain information. This chapter describes these different types of persistence and how to configure them. An example of the information stored is the process runtime state. Storing the process runtime state is necessary in order to be able to continue execution of a process instance at any point, if something goes wrong. The process definitions themselves and the history information (logs of current and previous process states) can also be persisted.

10.1.1. Runtime State

Whenever a process is started, a process instance is created, which represents the execution of the process in that specific context. For example, when executing a process that specifies how to process a sales order, one process instance is created for each sales request. The process instance represents the current execution state in that specific context, and contains all the information related to that process instance. Note that it only contains the (minimal) runtime state that is needed to continue the execution of that process instance at some later time, but it does not include information about the history of that process instance if that information is no longer needed in the process instance.

The runtime state of an executing process can be made persistent, for example, in a database. This allows the state of execution of all running processes to be restored in case of unexpected failure, or running instances to be temporarily removed from memory and restored at some later time. jBPM allows you to plug in different persistence strategies. By default, if you do not configure the jBPM engine otherwise, process instances are not made persistent.

If you configure the jBPM engine to use persistence, it will automatically store the runtime state in the database. You do not have to trigger persistence yourself; the jBPM engine takes care of this when persistence is enabled. Whenever you invoke the jBPM engine, it makes sure that any changes are stored at the end of that invocation, at so-called safe points. When something goes wrong and you restore the jBPM engine from the database, you should not reload the process instances and trigger them manually to resume execution. Process instances automatically resume execution when they are triggered, for example by a timer expiring, the completion of a task that was requested by that process instance, or a signal being sent to the process instance. The jBPM engine automatically reloads process instances on demand.

The runtime persistence data should in general be considered internal, meaning that you should not try to access these database tables directly and especially not try to modify them directly (changing the runtime state of process instances without the jBPM engine knowing might have unexpected side-effects). In most cases where information about the current execution state of process instances is required, the use of a history log is recommended (see below). In some cases it might still be useful to query the internal database tables directly, but you should only do this if you know what you are doing.

10.1.1.1. Binary Persistence

jBPM uses a binary persistence mechanism, otherwise known as marshalling, which converts the state of the process instance into a binary dataset. When you use persistence with jBPM, this mechanism is used to save or retrieve the process instance state from the database. The same mechanism is also applied to the session state and any work item states.

When the process instance state is persisted, two things happen:

  • First, the process instance information is transformed into a binary blob. For performance reasons, a custom serialization mechanism is used and not normal Java serialization.

  • This blob is then stored, alongside other metadata about this process instance. This metadata includes, among other things, the process instance id, process id, and the process start date.

Apart from the process instance state, the session itself can also store some state, such as the state of timer jobs, or the session data that any business rules would be evaluated over. This session state is stored separately as a binary blob, along with the id of the session and some metadata. You can always restore session state by reloading the session with the given id. The session id can be retrieved using ksession.getId().

Note that the process instance binary datasets are usually relatively small, as they only contain the minimal execution state of the process instance. For a simple process instance, this usually contains one or a few node instances, i.e., any node that is currently executing, and any existing variable values.

As a result of jBPM using marshalling, the data model is both simple and small.

jbpm schema doc
Figure 37. jBPM data model

The sessioninfo entity contains the state of the (knowledge) session in which the jBPM process instance is running.

Table 10. SessionInfo
Field | Description | Nullable
id | The primary key | NOT NULL
lastmodificationdate | The last time that the entity was saved to the database |
rulesbytearray | The binary dataset containing the state of the session | NOT NULL
startdate | The start time of the session |
optlock | The version field that serves as its optimistic lock value |

The processinstanceinfo entity contains the state of the jBPM process instance.

Table 11. ProcessInstanceInfo
Field | Description | Nullable
instanceid | The primary key | NOT NULL
lastmodificationdate | The last time that the entity was saved to the database |
lastreaddate | The last time that the entity was retrieved (read) from the database |
processid | The name (id) of the process |
processinstancebytearray | The binary dataset containing the state of the process instance | NOT NULL
startdate | The start time of the process |
state | An integer representing the state of the process instance | NOT NULL
optlock | The version field that serves as its optimistic lock value |

The eventtypes entity contains information about events that a process instance will undergo or has undergone.

Table 12. EventTypes
Field | Description | Nullable
instanceid | References the processinstanceinfo primary key; there is a foreign key constraint on this column | NOT NULL
eventTypes | A text field related to an event that the process has undergone |

The workiteminfo entity contains the state of a work item.

Table 13. WorkItemInfo
Field | Description | Nullable
workitemid | The primary key | NOT NULL
creationDate | The creation date of the work item |
name | The name of the work item |
processinstanceid | The id of the process instance; there is no foreign key constraint on this field | NOT NULL
state | An integer representing the state of the work item | NOT NULL
optlock | The version field that serves as its optimistic lock value |
workitembytearray | The binary dataset containing the state of the work item | NOT NULL

The CorrelationKeyInfo entity contains information about correlation keys assigned to a given process instance. This is a loose relationship: the table is considered optional and is used only when correlation capabilities are required.

Table 14. CorrelationKeyInfo
Field | Description | Nullable
keyid | The primary key | NOT NULL
name | The assigned name of the correlation key |
processinstanceid | The id of the process instance which is assigned to this correlation key | NOT NULL
optlock | The version field that serves as its optimistic lock value |

The CorrelationPropertyInfo entity contains information about the correlation properties of a given correlation key that is assigned to a given process instance.

Table 15. CorrelationPropertyInfo
Field | Description | Nullable
propertyid | The primary key | NOT NULL
name | The name of the property |
value | The value of the property | NOT NULL
optlock | The version field that serves as its optimistic lock value |
correlationKey-keyid | Foreign key to the correlation key | NOT NULL

The ContextMappingInfo entity contains contextual information mapped to a ksession. It is an internal part of the RuntimeManager and can be considered optional when the RuntimeManager is not used.

Table 16. ContextMappingInfo
Field | Description | Nullable
mappingid | The primary key | NOT NULL
context_id | Identifier of the context | NOT NULL
ksession_id | Identifier of the ksession mapped to this context | NOT NULL
optlock | The version field that serves as its optimistic lock value |

10.1.1.2. Safe Points

The state of a process instance is stored at so-called "safe points" during the execution of the jBPM engine. Whenever a process instance is executing (for example, when it is started or continues from a previous wait state), the jBPM engine executes the process instance until no more actions can be performed, meaning that the process instance has either completed (or was aborted), or has reached a wait state in all of its parallel paths. At that point, the jBPM engine has reached the next safe point, and the state of the process instance (and all other process instances that might have been affected) is stored persistently.

10.2. Audit Log

In many cases it will be useful (if not necessary) to store information about the execution of process instances, so that this information can be used afterwards. For example, sometimes we want to verify which actions have been executed for a particular process instance, or in general, we want to be able to monitor and analyze the efficiency of a particular process.

However, storing history information in the runtime database can result in the database rapidly increasing in size, not to mention the fact that monitoring and analysis queries might influence the performance of your runtime engine. This is why process execution history information can be stored separately.

This history log of execution information is created based on events that the jBPM engine generates during execution. This is possible because the jBPM runtime engine provides a generic mechanism to listen to events. The necessary information can easily be extracted from these events and then persisted to a database. Filters can also be used to limit the scope of the logged information.

10.2.1. The jBPM Audit data model

The jbpm-audit module contains an event listener that stores process-related information in a database using JPA. The data model itself contains three entities, one for process instance information, one for node instance information, and one for (process) variable instance information.

The ProcessInstanceLog table contains the basic log information about a process instance.

Table 17. ProcessInstanceLog
Field | Description | Nullable
id | The primary key and id of the log entity | NOT NULL
duration | Actual duration of this process instance since its start date |
end_date | When applicable, the end date of the process instance |
externalId | Optional external identifier used to correlate to some elements, e.g. deployment id |
user_identity | Optional identifier of the user who started the process instance |
outcome | The outcome of the process instance, e.g. the error code if the process instance finished with an error event |
parentProcessInstanceId | The process instance id of the parent process instance, if any |
processid | The id of the process |
processinstanceid | The process instance id | NOT NULL
processname | The name of the process |
processversion | The version of the process |
start_date | The start date of the process instance |
status | The status of the process instance, mapping to the process instance state |

The NodeInstanceLog table contains more information about which nodes were actually executed inside each process instance. Whenever a node instance is entered from one of its incoming connections or is exited through one of its outgoing connections, that information is stored in this table.

Table 18. NodeInstanceLog
Field | Description | Nullable
id | The primary key and id of the log entity | NOT NULL
connection | Actual identifier of the sequence flow that led to this node instance |
log_date | The date of the event |
externalId | Optional external identifier used to correlate to some elements, e.g. deployment id |
nodeid | The node id of the corresponding node in the process definition |
nodeinstanceid | The node instance id |
nodename | The name of the node |
nodetype | The type of the node |
processid | The id of the process that the process instance is executing |
processinstanceid | The process instance id | NOT NULL
type | The type of the event (0 = enter, 1 = exit) | NOT NULL
workItemId | Optional, only for certain node types: the identifier of the work item |

The VariableInstanceLog table contains information about changes in variable instances. The default is to only generate log entries when (after) a variable changes. It’s also possible to log entries before the variable (value) changes.

Table 19. VariableInstanceLog
Field | Description | Nullable
id | The primary key and id of the log entity | NOT NULL
externalId | Optional external identifier used to correlate to some elements, e.g. deployment id |
log_date | The date of the event |
processid | The id of the process that the process instance is executing |
processinstanceid | The process instance id | NOT NULL
oldvalue | The previous value of the variable at the time that the log is made |
value | The value of the variable at the time that the log is made |
variableid | The variable id in the process definition |
variableinstanceid | The id of the variable instance |

The AuditTaskImpl table contains information about tasks that can be used for queries.

Table 20. AuditTaskImpl
Field | Description | Nullable
id | The primary key and id of the task log entity |
activationTime | Time when this task was activated |
actualOwner | Actual owner assigned to this task; only set when the task is claimed |
createdBy | User who created this task |
createdOn | Date when the task was created |
deploymentId | Deployment id this task is part of |
description | Description of the task |
dueDate | Due date set on this task |
name | Name of the task |
parentId | Parent task id |
priority | Priority of the task |
processId | Process definition id that this task belongs to |
processInstanceId | Process instance id that this task is associated with |
processSessionId | KieSession id used to create this task |
status | Current status of the task |
taskId | Identifier of the task |
workItemId | Identifier of the work item assigned on the process side to this task |

The BAMTaskSummary table collects information about tasks that is used by the BAM engine to build charts and dashboards.

Table 21. BAMTaskSummary
Field | Description | Nullable
id | The primary key and id of the log entity | NOT NULL
createdDate | Date when the task was created |
duration | Duration since the task was created |
endDate | Date when the task reached an end state (complete, exit, fail, skip) |
processinstanceid | The process instance id |
startDate | Date when the task was started |
status | Current status of the task |
taskId | Identifier of the task |
taskName | Name of the task |
userId | User id assigned to the task |

The TaskVariableImpl table contains information about task variable instances.

Table 22. TaskVariableImpl
Field | Description | Nullable
id | The primary key and id of the log entity | NOT NULL
modificationDate | Date when the variable was last modified |
name | Name of the variable |
processid | The id of the process that the process instance is executing |
processinstanceid | The process instance id |
taskId | Identifier of the task |
type | Type of the variable: either input or output of the task |
value | Variable value |

The TaskEvent table contains information about changes in task instances. Operations such as claim, start, and stop are stored here to provide a timeline view of the events that happened to a given task.

Table 23. TaskEvent
Field | Description | Nullable
id | The primary key and id of the log entity | NOT NULL
logTime | Date when this event was saved |
message | Log event message |
processinstanceid | The process instance id |
taskId | Identifier of the task |
type | Type of the event; corresponds to the life cycle phases of the task |
userId | User id assigned to the task |

10.2.2. Storing Process Events in a Database

To log process history information in a database, you need to register the logger on your session as follows:

KieSession ksession = ...;
AbstractAuditLogger auditLogger = AuditLoggerFactory.newInstance(Type.JPA, ksession, null);
ksession.addProcessEventListener(auditLogger);

// invoke methods on your session here

To specify the database where the information should be stored, modify the persistence.xml file to include the audit log classes as well (ProcessInstanceLog, NodeInstanceLog and VariableInstanceLog), as shown below.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>

<persistence
  version="2.0"
  xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
  http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
  xmlns="http://java.sun.com/xml/ns/persistence"
  xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/jbpm-ds</jta-data-source>
    <mapping-file>META-INF/JBPMorm.xml</mapping-file>
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
    <class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>

    <class>org.jbpm.process.audit.ProcessInstanceLog</class>
    <class>org.jbpm.process.audit.NodeInstanceLog</class>
    <class>org.jbpm.process.audit.VariableInstanceLog</class>

    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.max_fetch_depth" value="3"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>
      <property name="hibernate.show_sql" value="true"/>
      <property name="hibernate.connection.release_mode" value="after_transaction"/>
      <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform"/>
    </properties>
  </persistence-unit>
</persistence>

All this information can easily be queried and used in a lot of different use cases, ranging from creating a history log for one specific process instance to analyzing the performance of all instances of a specific process.
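
For example, the following is a minimal sketch of such a query using the JPA-based audit service from the jbpm-audit module; the process id is illustrative and the snippet assumes the same persistence unit as above.

// build an audit service on top of the same persistence unit used by the audit logger
EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
AuditLogService auditService = new JPAAuditLogService(emf);

// list the logged instances of one process definition
for (ProcessInstanceLog log : auditService.findProcessInstances("com.sample.myprocess")) {
    System.out.println("instance " + log.getProcessInstanceId()
            + " started " + log.getStart() + ", ended " + log.getEnd());
}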

This audit log should only be considered a default implementation. We don’t know what information you need to store for analysis afterwards, and for performance reasons it is recommended to only store the relevant data. Depending on your use cases, you might define your own data model for storing the information you need, and use the process event listeners to extract that information.

10.2.3. Storing Process Events in a JMS queue for further processing

Process events are stored in the database synchronously and within the same transaction as the actual process instance execution. That takes some time, especially in highly loaded systems, and might have some impact on the database when both the history log and the runtime data are kept in the same database. As an alternative for storing process events, a JMS-based logger is provided. It can be configured to submit messages to a JMS queue instead of persisting them directly in the database. It can also be configured to be transactional, to avoid inconsistent data in case the jBPM engine transaction is rolled back.

ConnectionFactory factory = ...;
Queue queue = ...;
StatefulKnowledgeSession ksession = ...;
Map<String, Object> jmsProps = new HashMap<String, Object>();
jmsProps.put("jbpm.audit.jms.transacted", true);
jmsProps.put("jbpm.audit.jms.connection.factory", factory);
jmsProps.put("jbpm.audit.jms.queue", queue);
AbstractAuditLogger auditLogger = AuditLoggerFactory.newInstance(Type.JMS, ksession, jmsProps);
ksession.addProcessEventListener(auditLogger);

// invoke methods on your session here

This is just one of the possible ways to configure the JMS audit logger; see the javadocs of AuditLoggerFactory for more details.

10.2.4. Variables auditing

Process and task variables are stored in the audit tables by default, although they are stored in the simplest possible way: as a string representation of the variable, i.e. variable.toString(). In many cases this is enough, as even for custom classes used as variables, users can implement a custom toString() method that produces the expected "view" of the variable.

However, this might not cover all needs, especially when efficient queries by variables (both task and process) are required. Let’s take as an example a Person object that has the following structure:

public class Person implements Serializable {

    private static final long serialVersionUID = -5172443495317321032L;
    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public String toString() {
        return "Person [name=" + name + ", age=" + age + "]";
    }
}

While at first glance this seems to be sufficient, as the toString() method provides a human-readable format, it does not make the variable easy to search by: searching through strings like "Person [name=john, age=34]" to find people with age 34 would make the database query very inefficient.

To solve this problem, variable auditing is based on VariableIndexers, which are responsible for extracting the relevant parts of the variable that will be stored in the audit log.

/**
 * Variable indexer that allows to transform variable instance into other representation (usually string)
 * to be able to use it for queries.
 *
 * @param <V> type of the object that will represent indexed variable
 */
public interface VariableIndexer<V> {

    /**
     * Tests if given variable shall be indexed by this indexer
     *
     * NOTE: only one indexer can be used for given variable
     *
     * @param variable variable to be indexed
     * @return true if variable should be indexed with this indexer
     */
    boolean accept(Object variable);

    /**
     * Performs index/transform operation of the variable. Result of this operation can be
     * either single value or list of values to support complex type separation.
     * For example when variable is of type Person that has name, address phone indexer could
     * build three entries out of it to represent individual fields:
     * person = person.name
     * address = person.address.street
     * phone = person.phone
     * that will allow more advanced queries to be used to find relevant entries.
     * @param name name of the variable
     * @param variable actual variable value
     * @return
     */
    List<V> index(String name, Object variable);
}

By default, the indexer that takes the toString() value produces a single audit entry for a single variable, so it is a one-to-one relationship. But that is not the only option: as can be seen in the interface, an indexer returns a list of objects that are the outcome of indexing a single variable.

To make our Person queries more efficient, we could build a custom indexer that takes a Person instance and indexes it into separate audit entries, one representing the name and the other representing the age.

public class PersonTaskVariablesIndexer implements TaskVariableIndexer {

    @Override
    public boolean accept(Object variable) {
        if (variable instanceof Person) {
            return true;
        }
        return false;
    }

    @Override
    public List<TaskVariable> index(String name, Object variable) {

        Person person = (Person) variable;
        List<TaskVariable> indexed = new ArrayList<TaskVariable>();

        TaskVariableImpl personNameVar = new TaskVariableImpl();
        personNameVar.setName("person.name");
        personNameVar.setValue(person.getName());

        indexed.add(personNameVar);

        TaskVariableImpl personAgeVar = new TaskVariableImpl();
        personAgeVar.setName("person.age");
        personAgeVar.setValue(person.getAge()+"");

        indexed.add(personAgeVar);

        return indexed;
    }

}

That indexer will then be used to index variables of the Person class only, and the rest of the variables will be indexed with the default (toString()) indexer. Now, when we want to find process instances or tasks that have a person with age 34, we simply refer to it as

  • variable name: person.age

  • variable value: 34

There is no need to use LIKE-based queries, so the database can optimize the query and make it efficient even with a big data set.
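
For example, a hedged sketch of such a lookup for process variables, assuming the JPA-based audit service shown earlier and its findVariableInstancesByNameAndValue query:

// find variable log entries produced by the custom indexer
List<? extends VariableInstanceLog> matches =
        auditService.findVariableInstancesByNameAndValue("person.age", "34", false);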

Building and registering custom indexers

Indexers are supported for both process and task variables, though through different interfaces, as they produce different types of objects representing the audit view of the variable. The following interfaces must be implemented to build custom indexers:

  • process variables: org.kie.internal.process.ProcessVariableIndexer

  • task variables: org.kie.internal.task.api.TaskVariableIndexer

The implementation is rather simple; only two methods need to be implemented:

  • accept - indicates which types are handled by a given indexer. Note that only one indexer can index a given variable, so the first one that accepts it will perform the work

  • index - does the actual work of indexing variables, depending on custom requirements

Once the implementation is done, it should be packaged as a JAR file, and one of the following files needs to be included:

  • for process variables: META-INF/services/org.kie.internal.process.ProcessVariableIndexer with the list of fully qualified class names of the process variable indexers (one class name per line)

  • for task variables: META-INF/services/org.kie.internal.task.api.TaskVariableIndexer with the list of fully qualified class names of the task variable indexers (one class name per line)

Indexers are discovered via the ServiceLoader mechanism, which is why the META-INF/services files are needed. All found indexers are examined whenever a process or task variable is about to be indexed.
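
For instance, a service file for the task variable indexer above might look like this (the package name is illustrative):

# content of META-INF/services/org.kie.internal.task.api.TaskVariableIndexer
org.acme.audit.PersonTaskVariablesIndexer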

Only the default (toString() based) indexer is not discovered; it is added explicitly as the last indexer so that custom ones take precedence over it.

10.3. Transactions

The jBPM engine supports JTA transactions. It also supports local transactions only when using Spring. It does not support pure local transactions at the moment. For more information about using Spring to set up persistence, please see the Spring chapter in the Drools integration guide.

Whenever you do not provide transaction boundaries inside your application, the jBPM engine will automatically execute each method invocation on the jBPM engine in a separate transaction. If this behavior is acceptable, you don’t need to do anything else. You can, however, also specify the transaction boundaries yourself. This allows you, for example, to combine multiple commands into one transaction.

You need to register a transaction manager at the environment before using user-defined transactions. The following sample code uses the Narayana JTA transaction manager. Use the Java Transaction API (JTA) to specify transaction boundaries:

// create the entity manager factory
EntityManagerFactory emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa");
TransactionManager tm = TransactionManagerServices.getTransactionManager();

// setup the runtime environment
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
    .newDefaultBuilder()
    .addAsset(ResourceFactory.newClassPathResource("MyProcessDefinition.bpmn2"), ResourceType.BPMN2)
    .addEnvironmentEntry(EnvironmentName.TRANSACTION_MANAGER, tm)
    .get();

// get the kie session
RuntimeManager manager = RuntimeManagerFactory.Factory.get().newPerRequestRuntimeManager(environment);
RuntimeEngine runtime = manager.getRuntimeEngine(ProcessInstanceIdContext.get());
KieSession ksession = runtime.getKieSession();

// start the transaction
UserTransaction ut = InitialContext.doLookup("java:comp/UserTransaction");
ut.begin();

// perform multiple commands inside one transaction
ksession.insert( new Person( "John Doe" ) );
ksession.startProcess("MyProcess");

// commit the transaction
ut.commit();

You should also add a simple jndi.properties file in your root classpath to create a JNDI InitialContextFactory, because e.g. UserTransaction, TransactionManager and TransactionSynchronizationRegistry are registered in JNDI. If you are using the jbpm-test module, this is already included by default. If not, create a file named jndi.properties with the following content:

java.naming.factory.initial=org.jbpm.test.util.CloseSafeMemoryContextFactory
org.osjava.sj.root=target/test-classes/config
org.osjava.jndi.delimiter=/
org.osjava.sj.jndi.shared=true

This configuration assumes that simple-jndi:simple-jndi is contained in your project’s classpath, but you can use a different JNDI implementation.

If you would like to use a different JTA transaction manager, you can change the persistence.xml file to use your own transaction manager. For example, when running inside JBoss Application Server v5.x or v7.x, you can use the JBoss transaction manager. You need to change the transaction manager property in persistence.xml to:

<property name="hibernate.transaction.jta.platform" value="org.hibernate.transaction.JBossTransactionManagerLookup" />

Using the (runtime manager) Singleton strategy with JTA transactions (UserTransaction or CMT) is not recommended because of a race condition. This race condition can result in an IllegalStateException with a message similar to "Process instance XXX is disconnected.".

This race condition can be avoided by explicitly synchronizing around the KieSession instance when invoking the transaction in the user application code.

synchronized (ksession) {
    try {
        tx.begin();

        // use ksession
        // application logic

        tx.commit();
    } catch (Exception e) {
        //...
    }
}

10.3.1. Container managed transactions

Special consideration needs to be taken when embedding jBPM inside an application that executes in Container Managed Transaction (CMT) mode, for instance EJB beans. This especially applies to application servers that do not allow accessing the UserTransaction instance from JNDI when being part of a container managed transaction, e.g. WebSphere Application Server. Since the default transaction manager implementation in jBPM relies on UserTransaction to get the transaction status, which is used to decide whether a transaction should be started or not, it will not work in environments that prevent access to UserTransaction. To ensure proper execution in CMT environments, a dedicated transaction manager implementation is provided:

org.jbpm.persistence.jta.ContainerManagedTransactionManager

This transaction manager expects that a transaction is active and thus always returns ACTIVE when the getStatus method is invoked. Operations like begin, commit and rollback are no-ops, as the transaction manager runs under a managed transaction and cannot affect it.

To make sure that the container is aware of any exceptions that happened during process instance execution, users need to ensure that exceptions thrown by the engine are propagated up to the container so that the transaction is properly rolled back.
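
As an illustration only, here is a rough sketch of an EJB running with container-managed transactions that lets engine exceptions reach the container; the bean name, process id and the injected RuntimeManager producer are assumptions:

@Stateless
public class OrderProcessBean {

    @Inject
    private RuntimeManager runtimeManager; // assumed to be produced elsewhere (e.g. by a CDI producer)

    public void startOrderProcess(Map<String, Object> params) {
        RuntimeEngine engine = runtimeManager.getRuntimeEngine(EmptyContext.get());
        try {
            // no catch block here: a runtime exception propagates to the container,
            // which then marks the CMT transaction for rollback
            engine.getKieSession().startProcess("com.sample.order", params);
        } finally {
            runtimeManager.disposeRuntimeEngine(engine);
        }
    }
}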

To configure this transaction manager, the following must be done:

  • Insert the transaction manager and persistence context managers into the environment prior to creating/loading the session

    Environment env = EnvironmentFactory.newEnvironment();
    env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
    env.set(EnvironmentName.TRANSACTION_MANAGER, new ContainerManagedTransactionManager());
    env.set(EnvironmentName.PERSISTENCE_CONTEXT_MANAGER, new JpaProcessPersistenceContextManager(env));
    env.set(EnvironmentName.TASK_PERSISTENCE_CONTEXT_MANAGER, new JPATaskPersistenceContextManager(env));
  • Configure the JPA provider (example for Hibernate and WebSphere)

    <property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.CMTTransactionFactory"/>
    <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.WebSphereJtaPlatform"/>

With this configuration, jBPM should run properly in a CMT environment.

10.3.1.1. CMT dispose ksession command

Usually, when running within a container managed transaction, disposing the ksession directly will cause exceptions on transaction completion, as jBPM registers transaction synchronizations to clean up the state after the invocation is finished.

To overcome this problem, a specialized command is provided: org.jbpm.persistence.jta.ContainerManagedTransactionDisposeCommand. Execute this command instead of the regular ksession.dispose; it ensures that the ksession is disposed on transaction completion.
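
For example, a minimal sketch:

// inside the container managed transaction, instead of calling ksession.dispose() directly
ksession.execute(new ContainerManagedTransactionDisposeCommand());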

10.4. Configuration

By default, the jBPM engine does not save runtime data persistently. This means you can use the jBPM engine completely without persistence (not even requiring an in-memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. Enabling persistence usually requires adding the necessary dependencies, configuring a data source, and creating the jBPM engine with persistence configured.

10.4.1. Adding dependencies

You need to make sure the necessary dependencies are available in the classpath of your application if you want to use persistence. By default, persistence is based on the Java Persistence API (JPA) and can thus work with several persistence mechanisms. We are using Hibernate by default.

If you’re using the Eclipse IDE and the jBPM Eclipse plugin, you should make sure the necessary JARs are added to your jBPM runtime directory. You don’t really need to do anything (as the necessary dependencies should already be there) if you are using the jBPM runtime that is configured by default when using the jBPM installer, or if you downloaded and unzipped the jBPM runtime artifact (from the downloads) and pointed the jBPM plugin to that directory.

If you would like to manually add the necessary dependencies to your project, you need to put the jbpm-persistence-jpa.jar on your project’s classpath as that contains the code for saving the runtime state whenever necessary. Depending on the persistence solution and database you are using, you may need additional dependencies.
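
For example, with Maven this typically means a dependency such as the following (a sketch; ${jbpm.version} is a placeholder for the jBPM version you use):

<dependency>
  <groupId>org.jbpm</groupId>
  <artifactId>jbpm-persistence-jpa</artifactId>
  <version>${jbpm.version}</version>
</dependency>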

For the default combination of:

  • Hibernate as the JPA persistence provider

  • H2 in-memory database

  • Narayana for JTA-based transaction management

  • Tomcat DBCP for connection pooling capabilities

The following additional dependencies are required:

  • jbpm-persistence-jpa (org.jbpm)

  • drools-persistence-jpa (org.drools)

  • persistence-api (javax.persistence)

  • hibernate-entitymanager (org.hibernate)

  • hibernate-annotations (org.hibernate)

  • hibernate-commons-annotations (org.hibernate)

  • hibernate-core (org.hibernate)

  • commons-collections (commons-collections)

  • dom4j (org.dom4j)

  • jta (javax.transaction)

  • narayana-jta (org.jboss.narayana.jta)

  • tomcat-dbcp (org.apache.tomcat)

  • jboss-transaction-api_1.2_spec (org.jboss.spec.javax.transaction)

  • javassist (javassist)

  • slf4j-api (org.slf4j)

  • slf4j-jdk14 (org.slf4j)

  • simple-jndi (simple-jndi)

  • h2 (com.h2database)

  • jbpm-test (org.jbpm) for testing only, do not include it in the actual application

10.4.2. Manually configuring the jBPM engine to use persistence

You can use the JPAKnowledgeService to create your KIE session. This is slightly more complex, but gives you full access to the underlying configurations. You can create a new KIE session using JPAKnowledgeService based on a KIE base, a KIE session configuration (if necessary) and an environment. The environment needs to contain a reference to your Entity Manager Factory. For example:

// create the entity manager factory and register it in the environment
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory( "org.jbpm.persistence.jpa" );
Environment env = KnowledgeBaseFactory.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY, emf );

// create a new KIE session that uses JPA to store the runtime state
StatefulKnowledgeSession ksession = JPAKnowledgeService.newStatefulKnowledgeSession( kbase, null, env );
int sessionId = ksession.getId();

// invoke methods on your session here
ksession.startProcess( "MyProcess" );
ksession.dispose();

You can also use the JPAKnowledgeService to recreate a session based on a specific session id:

// recreate the session from database using the sessionId
ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env );

Note that we only save the minimal state that is needed to continue execution of the process instance at some later point. This means, for example, that it does not contain information about already executed nodes if that information is no longer relevant, or that process instances that have been completed or aborted are removed from the database. If you want to search for history-related information, you should use the history log, as explained later.

You need to add a persistence configuration to your classpath to configure JPA to use Hibernate and the H2 database (or your own preference), called persistence.xml in the META-INF directory, as shown below. For more details on how to change this for your own configuration, refer to the JPA and Hibernate documentation.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persistence
      version="2.0"
      xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd
      http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
      xmlns="http://java.sun.com/xml/ns/persistence"
      xmlns:orm="http://java.sun.com/xml/ns/persistence/orm"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/jbpm-ds</jta-data-source>
    <mapping-file>META-INF/JBPMorm.xml</mapping-file>
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationKeyInfo</class>
    <class>org.jbpm.persistence.correlation.CorrelationPropertyInfo</class>
    <class>org.jbpm.runtime.manager.impl.jpa.ContextMappingInfo</class>

    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
      <property name="hibernate.max_fetch_depth" value="3"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>
      <property name="hibernate.show_sql" value="true"/>
      <property name="hibernate.connection.release_mode" value="after_transaction"/>
            <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform"/>
    </properties>
  </persistence-unit>
</persistence>

This configuration file refers to a data source called "jdbc/jbpm-ds". If you run your application in an application server (such as JBoss AS), these containers typically allow you to easily set up data sources using some configuration (such as adding a data source configuration file in the deploy directory). Please refer to your application server documentation to know how to do this.

For example, if you’re deploying to WildFly, you can create a data source by dropping a configuration file in the deploy directory, such as:

<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <local-tx-datasource>
    <jndi-name>jdbc/jbpm-ds</jndi-name>
    <connection-url>jdbc:h2:tcp://localhost/~/test</connection-url>
    <driver-class>org.h2.jdbcx.JdbcDataSource</driver-class>
    <user-name>sa</user-name>
    <password></password>
  </local-tx-datasource>
</datasources>

If you are executing in a simple Java environment, you can use Narayana and Tomcat DBCP by using the DataSourceFactory class from the kie-test-util module of drools. See the following code fragment. This example uses the H2 in-memory database in combination with Narayana and Tomcat DBCP.

Properties driverProperties = new Properties();
driverProperties.put("user", "sa");
driverProperties.put("password", "sa");
driverProperties.put("url", "jdbc:h2:mem:jbpm-db;MVCC=true");
driverProperties.put("driverClassName", "org.h2.Driver");
driverProperties.put("className", "org.h2.jdbcx.JdbcDataSource");
PoolingDataSourceWrapper pdsw = DataSourceFactory.setupPoolingDataSource("jdbc/jbpm-ds", driverProperties);

10.4.3. Configuring the jBPM engine to use persistence

You need to configure the jBPM engine to use persistence. This is most effectively done through RuntimeEnvironmentBuilder.

It is easy to use RuntimeEnvironmentBuilder to create a session to run or test jBPM engine flows. By default, RuntimeEnvironmentBuilder searches for the jdbc/jbpm-ds data source, so this simple code segment creates a KieSession with an empty context.

RuntimeEnvironmentBuilder builder = RuntimeEnvironmentBuilder.Factory.get()
        .newDefaultBuilder()
        .knowledgeBase(kbase);
RuntimeManager manager = RuntimeManagerFactory.Factory.get()
        .newSingletonRuntimeManager(builder.get(), "com.sample:example:1.0");
RuntimeEngine engine = manager.getRuntimeEngine(EmptyContext.get());
KieSession ksession = engine.getKieSession();

The above code also needs a kbase parameter. One simple approach is to use a kmodule.xml kJAR descriptor found on the classpath, as shown in this example.

KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieBase kbase = kContainer.getKieBase("kbase");

A kmodule.xml descriptor can include an attribute for resource packages to scan to find and deploy jBPM engine workflows.

<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
  <kbase name="kbase" packages="com.sample"/>
</kmodule>

Control over the persistence can be accomplished through the RuntimeEnvironmentBuilder::entityManagerFactory method as shown below.

EntityManagerFactory emf = Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

RuntimeEnvironment runtimeEnv = RuntimeEnvironmentBuilder.Factory
        .get()
        .newDefaultBuilder()
        .entityManagerFactory(emf)
        .knowledgeBase(kbase)
        .get();

StatefulKnowledgeSession ksession = (StatefulKnowledgeSession) RuntimeManagerFactory.Factory.get()
        .newSingletonRuntimeManager(runtimeEnv)
        .getRuntimeEngine(EmptyContext.get())
        .getKieSession();

Once you have done that, you can just call methods on this ksession (like startProcess) and the jBPM engine will persist all runtime state in the created data source.
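
For example (the process id is illustrative):

// runtime state is persisted automatically at the next safe point
ksession.startProcess("com.sample.myprocess");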

You can re-create your session by using the session id (which you can retrieve using ksession.getId()) to restore the session state from the database:

// recreate the session from database using the sessionId
StatefulKnowledgeSession ksession = JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env );

10.5. Persisting process variables in a separate database schema in jBPM

When you create process variables in jBPM to use within the processes that you define, jBPM stores those process variables as binary data in a default database schema. You can persist process variables in a separate database schema for greater flexibility in maintaining and implementing your process data.

For example, persisting your process variables in a separate database schema can help you perform the following tasks:

  • Maintain process variables in human-readable format

  • Make the variables available to services outside of jBPM

  • Clear the log of the default database tables in jBPM without losing process variable data

This procedure applies to process variables only. This procedure does not apply to case variables.
Prerequisites
  • You have defined processes in jBPM for which you want to implement variables.

  • If you want to persist variables in a database schema outside of jBPM, you have created a data source and the separate database schema that you want to use. For information about creating data sources, see Data Source Management.

Procedure
  1. In the data object file that you use as a process variable, add the following elements to configure variable persistence:

    Example Person.java object configured for variable persistence
    @javax.persistence.Entity  (1)
    @javax.persistence.Table(name = "Person")  (2)
    public class Person extends org.drools.persistence.jpa.marshaller.VariableEntity  (3)
    implements java.io.Serializable {  (4)
    
    	static final long serialVersionUID = 1L;
    
    	@javax.persistence.GeneratedValue(strategy = javax.persistence.GenerationType.AUTO, generator = "PERSON_ID_GENERATOR")
    	@javax.persistence.Id  (5)
    	@javax.persistence.SequenceGenerator(name = "PERSON_ID_GENERATOR", sequenceName = "PERSON_ID_SEQ")
    	private java.lang.Long id;
    
    	private java.lang.String name;
    
    	private java.lang.Integer age;
    
    	public Person() {
    	}
    
    	public java.lang.Long getId() {
    		return this.id;
    	}
    
    	public void setId(java.lang.Long id) {
    		this.id = id;
    	}
    
    	public java.lang.String getName() {
    		return this.name;
    	}
    
    	public void setName(java.lang.String name) {
    		this.name = name;
    	}
    
    	public java.lang.Integer getAge() {
    		return this.age;
    	}
    
    	public void setAge(java.lang.Integer age) {
    		this.age = age;
    	}
    
    	public Person(java.lang.Long id, java.lang.String name,
    			java.lang.Integer age) {
    		this.id = id;
    		this.name = name;
    		this.age = age;
    	}
    
    }
    1 Configures the data object as a persistence entity.
    2 Defines the database table name used for the data object.
    3 Creates a separate MappedVariable mapping table that maintains the relationship between this data object and the associated process instance. If you do not need this relationship maintained, you do not need to extend the VariableEntity class. Without this extension, the data object is still persisted, but contains no additional data.
    4 Configures the data object as a serializable object.
    5 Sets a persistence ID for the object.

    To make the data object persistable using Business Central, navigate to the data object file in your project, click the Persistence icon in the upper-right corner of the window, and configure the persistence behavior:

    persistence in central
    Figure 38. Persistence configuration in Business Central
  2. In the pom.xml file of your project, add the following dependency for persistence support. This dependency contains the VariableEntity class that you configured in your data object.

    Project dependency for persistence
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-persistence-jpa</artifactId>
      <version>${jbpm.version}</version>
      <scope>provided</scope>
    </dependency>
  3. In the ~/META-INF/kie-deployment-descriptor.xml file of your project, configure the JPA marshalling strategy and a persistence unit to be used with the marshaller. The JPA marshalling strategy and persistence unit are required for objects defined as entities.

    JPA marshaller and persistence unit configured in the kie-deployment-descriptor.xml file
    <marshalling-strategy>
      <resolver>mvel</resolver>
      <identifier>new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy("myPersistenceUnit", classLoader)</identifier>
      <parameters/>
    </marshalling-strategy>
  4. In the ~/META-INF directory of your project, create a persistence.xml file that specifies in which data source you want to persist the process variable:

    Example persistence.xml file with data source configuration
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:orm="http://java.sun.com/xml/ns/persistence/orm" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd">
        <persistence-unit name="myPersistenceUnit" transaction-type="JTA">
            <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
            <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>  (1)
            <class>org.space.example.Person</class>
            <exclude-unlisted-classes>true</exclude-unlisted-classes>
            <properties>
                <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
                <property name="hibernate.max_fetch_depth" value="3"/>
                <property name="hibernate.hbm2ddl.auto" value="update"/>
                <property name="hibernate.show_sql" value="true"/>
                <property name="hibernate.id.new_generator_mappings" value="false"/>
                <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
            </properties>
        </persistence-unit>
    </persistence>
    1 Sets the data source in which the process variable is persisted

    To configure the marshalling strategy, persistence unit, and data source using Business Central, navigate to project Settings → Deployments → Marshalling Strategies and to project Settings → Persistence:

    jpa marhsalling strategy
    Figure 39. JPA marshaller configuration in Business Central
    persistence unit
    Figure 40. Persistence unit and data source configuration in Business Central

Business Central

How to use the web-based Business Central application

11. Business Central (General)

11.1. Installation

11.1.1. War installation

Use the war from the Business Central distribution zip that corresponds to your application server. The differences between these war files are mainly superficial. For example, some JARs might be excluded if the application server already supplies them.

  • eap7: tailored for Red Hat JBoss Enterprise Application Platform 7

  • wildfly14: tailored for Wildfly 14

11.1.2. Business Central data

Business Central stores its data by default in the directory $WORKING_DIRECTORY/.niogit, for example wildfly-14.0.1.Final/bin/.niogit, but this can be overridden with the system property -Dorg.uberfire.nio.git.dir.
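
For example, on WildFly the property could be passed on the command line when starting the server (the path is illustrative):

./standalone.sh -Dorg.uberfire.nio.git.dir=/data/business-central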

In production, make sure to back up the Business Central data directory.

11.1.3. System properties

Here’s a list of all system properties:

  • org.kie.workbench.profile: Selects the Business Central profile. Possible values are FULL or PLANNER_AND_RULES. A prefix FULL_ will set the profile and hide the profile preferences from the admin preferences. Default: FULL.

  • kie.maven.offline.force: Forces Maven to behave as offline. If true, disable online dependency resolution. Default: false.

    Use this property for Business Central only. If you share a runtime environment with any other component, isolate the configuration and apply it only to Business Central.

  • org.appformer.m2repo.url: Location of the default Maven repository Business Central uses when looking for dependencies. Usually this points to the Maven repository inside the Workbench for example http://localhost:8080/business-central/maven2. Please set this before starting up the Workbench. Default: File path to the inner m2 repository.

  • org.uberfire.nio.git.dir: Location of the directory .niogit. Default: working directory

  • org.uberfire.nio.git.dirname: Name of the git directory. Default: .niogit

  • org.uberfire.nio.git.proxy.ssh.over.http: Defines that SSH should use an HTTP Proxy. Default: false

  • http.proxyHost: Defines the host name of the HTTP Proxy. Default: null

  • http.proxyPort: Defines the host port (integer value) of the HTTP Proxy. Default: null

  • org.uberfire.nio.git.proxy.ssh.over.https: Defines that SSH should use an HTTPS Proxy. Default: false

  • https.proxyHost: Defines the host name of the HTTPS Proxy. Default: null

  • https.proxyPort: Defines the host port (integer value) of the HTTPS Proxy. Default: null

  • org.uberfire.nio.git.http.enabled: Enables or disables the HTTP daemon. Default: true

  • org.uberfire.nio.git.http.host: If the HTTP daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default: localhost

  • org.uberfire.nio.git.http.hostname: If the HTTP daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default: localhost

  • org.uberfire.nio.git.http.port: If the HTTP daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default: 8080

  • org.uberfire.nio.git.https.enabled: Enables or disables the HTTPS daemon. Default: false

  • org.uberfire.nio.git.https.host: If the HTTPS daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default: localhost

  • org.uberfire.nio.git.https.hostname: If the HTTPS daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default: localhost

  • org.uberfire.nio.git.https.port: If the HTTPS daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default: 8080

  • org.uberfire.nio.git.daemon.enabled: Enables/disables git daemon. Default: true

  • org.uberfire.nio.git.daemon.host: If git daemon enabled, uses this property as local host identifier. Default: localhost

  • org.uberfire.nio.git.daemon.hostname: If the git daemon is enabled, uses this property as the local host name identifier. Default: localhost

  • org.uberfire.nio.git.daemon.port: If git daemon enabled, uses this property as port number. Default: 9418

  • org.uberfire.nio.git.http.sslVerify: Enables or disables SSL certificate checking for Git repositories. Default: true

    If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information.

  • org.uberfire.nio.git.ssh.enabled: Enables/disables ssh daemon. Default: true

  • org.uberfire.nio.git.ssh.host: If ssh daemon enabled, uses this property as local host identifier. Default: localhost

  • org.uberfire.nio.git.ssh.hostname: If the SSH daemon is enabled, uses this property as local host name identifier. Default: localhost

  • org.uberfire.nio.git.ssh.port: If ssh daemon enabled, uses this property as port number. Default: 8001

  • org.uberfire.nio.git.ssh.ciphers: A comma-separated string of ciphers. The available ciphers are aes128-ctr, aes192-ctr, aes256-ctr, arcfour128, arcfour256, aes192-cbc, aes256-cbc. If the property is not used, all available ciphers are loaded.

  • org.uberfire.nio.git.ssh.macs: A comma-separated string of message authentication codes (MACs). The available MACs are hmac-md5, hmac-md5-96, hmac-sha1, hmac-sha1-96, hmac-sha2-256, hmac-sha2-512. If the property is not used, all available MACs are loaded.

    If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information.

  • org.uberfire.nio.git.ssh.cert.dir: Location of the directory .security where local certificates will be stored. Default: working directory

  • org.uberfire.nio.git.ssh.passphrase: Passphrase to access your Operating System's public keystore when cloning git repositories with scp style URLs; e.g. git@github.com:user/repository.git.

  • org.uberfire.nio.git.ssh.algorithm: Algorithm used by SSH. Default: DSA

    If you plan to use RSA or any algorithm other than DSA, make sure you properly set up your Application Server to use the Bouncy Castle JCE library.

  • appformer.ssh.keystore: Defines the custom SSH keystore to be used with Business Central by specifying a class name. If the property is not available the default SSH keystore is used.

  • appformer.ssh.keys.storage.folder: When using the default SSH keystore, this parameter defines the storage folder for the user’s SSH public keys. If the property is not available the keys are stored in the Workbench .security folder.

  • org.uberfire.metadata.index.dir: Place where Lucene .index folder will be stored. Default: working directory

  • org.uberfire.ldap.regex.role_mapper: Regex pattern used to map LDAP principal names to application role names. Note that the variable role must be part of the pattern, as it is substituted by the application role name when matching a principal value to a role name. Default: Not used.

  • org.uberfire.sys.repo.monitor.disabled: Disable configuration monitor (do not disable unless you know what you’re doing). Default: false

  • org.uberfire.secure.key: Secret password used by password encryption. Default: org.uberfire.admin

  • org.uberfire.secure.alg: Crypto algorithm used by password encryption. Default: PBEWithMD5AndDES

  • org.uberfire.domain: security-domain name used by uberfire. Default: ApplicationRealm

  • appformer.experimental.features: enables the Experimental Features Framework

  • org.guvnor.m2repo.dir: Place where Maven repository folder will be stored. Default: working-directory/repositories/kie

  • org.guvnor.project.gav.check.disabled: Disable GAV checks. Default: false

  • org.kie.demo: Enables external clone of a demo application from GitHub.

  • org.kie.build.disable-project-explorer: Disable automatic build of selected Project in Project Explorer. Default: false

  • org.kie.verification.disable-dtable-realtime-verification: Disables the realtime validation and verification of decision tables. Default: false

  • org.kie.workbench.controller: URL for connecting with a jBPM controller, for example: ws://localhost:8080/kie-server-controller/websocket/controller.

  • org.uberfire.gzip.enable: Enables or disables Gzip compression on GzipFilter. Default: true

Only Web Socket protocol is supported for connecting with a headless jBPM controller. When specifying this property, Business Central will automatically disable all the features related to running the embedded jBPM controller.

  • org.kie.workbench.controller.user: User name for connecting with a jBPM controller. Default: kieserver

  • org.kie.workbench.controller.pwd: Password for connecting with a jBPM controller. Default: kieserver1!

  • org.kie.workbench.controller.token: Token string for connecting with a jBPM controller.

Please refer to Using token based authentication for more details about how to use token based authentication.

  • kie.keystore.keyStoreURL: URL to a keystore which should be used for connecting with a headless jBPM controller.

  • kie.keystore.keyStorePwd: Password to a keystore.

  • kie.keystore.key.ctrl.alias: Alias of the key where password is stored.

  • kie.keystore.key.ctrl.pwd: Password of an alias with stored password.

Please refer to Securing password using key store for more details about how to use a key store for securing your passwords.

  • org.jbpm.wb.forms.renderer.ext: Switch form rendering between Business Central and Kie Server rendered forms. By default, form rendering is done by Business Central. Default: false.

  • org.jbpm.wb.forms.renderer.name: Allows you to switch between Business Central and KIE Server rendered forms. KIE Server includes two renderers, bootstrap and patternfly, in addition to the default renderer, workbench. Default: workbench.

To change one of these system properties in a WildFly or JBoss EAP cluster:

  1. Edit the file $JBOSS_HOME/domain/configuration/host.xml.

  2. Locate the server XML elements that belong to the main-server-group and add a system property, for example:

    <system-properties>
      <property name="org.uberfire.nio.git.dir" value="..." boot-time="false"/>
      ...
    </system-properties>

11.1.4. Troubleshooting

11.1.4.1. Loading.. does not disappear and Business Central fails to show

There have been reports that Firewalls in between the server and the browser can interfere with Server Sent Events (SSE) used by Business Central.

The issue results in the "Loading…​" spinner remaining visible and Business Central failing to materialize.

The workaround is to disable Business Central’s use of Server Sent Events by adding the file /WEB-INF/classes/ErraiService.properties, containing the value errai.bus.enable_sse_support=false, to the exploded WAR. Re-package the WAR and re-deploy.
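
The added properties file contains a single entry, for example:

    # WEB-INF/classes/ErraiService.properties
    errai.bus.enable_sse_support=false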

Some users have also reported that disabling Server Sent Events does not resolve the issue. The solution found to work is to configure the JVM to use a different Entropy Gathering Device on Linux for SecureRandom. This can be configured by setting the System Property java.security.egd to file:/dev/./urandom. See this Stack Overflow post for details.

Please note however this affects the JVM’s random number generation and may present other challenges where strong cryptography is required. Configure with caution.
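
For reference, a sketch of passing the property as a JVM argument on WildFly or JBoss EAP (the exact startup script depends on your installation; the property can also be appended to JAVA_OPTS in standalone.conf):

    ./standalone.sh -Djava.security.egd=file:/dev/./urandom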

11.1.4.2. Not able to clone Business Central Git repository using ssh protocol.

Git clients using ssh to interact with the Git server that is bundled with Business Central are authenticated and authorized to perform git commands by the security API that is part of the Uberfire backend server. When using an LDAP security realm, some git clients were not being authorized as expected. This was due to the fact that for non-web clients such as Git via ssh, the principal (i.e., user or group) name assigned to a user by the application server’s user registry is the more complex DN associated to that principal by LDAP. The logic of the Uberfire backend server looked for an exact match of allowed roles with the principal name returned and therefore failed.

It is now possible to control the role-principal matching via the system property

org.uberfire.ldap.regex.role_mapper

which takes as its value a Regex pattern to be applied when matching LDAP principals to role names. The pattern must contain the literal word 'role' as a variable. During authorization the variable is replaced by each of the allowed application roles. If the pattern is matched the role is added to the user.

For instance, if the DN for the admin group in LDAP is

DN: cn=admin,ou=groups,dc=example,dc=com

and its intended role is admin, then setting org.uberfire.ldap.regex.role_mapper with value

cn[\\ ]*=[\\ ]*role

will find a match on role 'admin'.
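
For example, the property can be added to the <system-properties> block shown earlier in this chapter:

    <property name="org.uberfire.ldap.regex.role_mapper" value="cn[\\ ]*=[\\ ]*role" boot-time="false"/>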

11.2. Quick Start

These steps help you get started with a minimum of effort.

They should not be a substitute for reading the documentation in full.

11.2.1. Importing examples

Import Examples - Quick install examples

If Business Central is empty you are shown an empty Space page. Clicking the "Try Samples" button below will show the examples that are available.

QuickStart example1

Once "Try Samples" page opens, you can select one or more examples and click "Ok".

QuickStart example2

If Business Central already contains Projects, the examples can be imported with the "Try Samples" button found in the menu.

QuickStart import with pre existing projects

11.2.2. Add Project

As an alternative to importing an example, a new empty project can be created from the Space page with "Add Project".

QuickStart example1
Figure 41. New Project button

Give the Project a name and optional description.

QuickStart new project wizard
Figure 42. Giving Project a name

11.2.3. Define Data Model

After a Project has been created you need to define Types to be used by your rules.

Select "Data Object" from the "Add Asset" menu.

You can also use types contained in existing JARs.

Please consult the full documentation for details.

QuickStart create a data model
Figure 43. Creating "Data Object"

Set the name and select a package for the new type.

QuickStart create data object popup
Figure 44. Creating a new type

Click "+ add field" button and set a field name and type and click "Create" to create a field for the type.

QuickStart create field
Figure 45. Click "Create" and add the field

Click "Save" to update the model.

QuickStart confirm save
Figure 46. Clicking "Save"

11.2.4. Define Rule

Select "DRL file" (for example) from the "Add Asset" menu.

QuickStart create drl file
Figure 47. Selecting "DRL file" from the "Add Asset" menu

Enter a file name for the new rule.

Make sure you select the same package as the data model. It is possible to have rules and data models in different packages, but let’s keep things simple for demo purposes.

QuickStart new rule popup
Figure 48. Entering a file name for rule

Enter a definition for the rule.

The definition process differs from asset type to asset type.

The full documentation has details about the different editors.

QuickStart writing a rule
Figure 49. Defining a rule
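
As an illustration only, a minimal DRL sketch (the Person data object, its name field and the package are hypothetical; use the data object and package you created earlier):

    package com.myspace.demo;

    // assumes a Person data object with a String field 'name' in the same package
    rule "Greet person"
    when
        $p : Person( name != null )
    then
        System.out.println( "Hello " + $p.getName() );
    end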

Once the rule has been defined it will need to be saved in the same way we saved the model.

11.2.5. Build and Deploy

Once rules have been defined within a project, the project can be built and deployed to the Business Central’s Maven Artifact Repository.

To build a project, select "Build & Deploy" from Project Authoring.

QuickStart build and deploy
Figure 50. Building a project

Click "Build & Deploy" to build the project and deploy it to the Business Central’s Maven Artifact Repository.

When you select Build & Deploy, Business Central will deploy to any repositories defined in the Dependency Management section of the pom in your Business Central project. You can edit the pom.xml file associated with your Business Central project under the Repository View of the project explorer. Details on dependency management in Maven can be found here: http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html

If there are errors during the build process they will be reported in the "Messages" panel.

Now that the project has been built and deployed, it can be referenced from your own projects like any other Maven artifact.
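
For example, a sketch of referencing the deployed artifact from another Maven project (the GAV below is hypothetical; use the values from your project’s General Settings):

    <dependency>
      <groupId>com.myspace</groupId>
      <artifactId>my-project</artifactId>
      <version>1.0.0</version>
    </dependency>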

The full documentation contains details about integrating projects with your own applications.

11.3. Configuration

11.3.1. Basic user management

Business Central authenticates its users against the application server’s authentication and authorization (JAAS).

On JBoss EAP and WildFly, add a user with the script $JBOSS_HOME/bin/add-user.sh (or .bat):

$ ./add-user.sh
// Type: Application User
// Realm: empty (defaults to ApplicationRealm)
// Role: admin

There is no need to restart the application server.

11.3.2. Roles

Business Central uses the following roles:

  • admin

  • analyst

  • developer

  • manager

  • user

11.3.2.1. Admin

Administers the BPM system.

  • Manages users

  • Manages VFS Repositories

  • Has full access to make any changes necessary

11.3.2.2. Developer

Developer can do almost everything admin can do, except clone repositories.

  • Manages rules, models, process flows, forms and dashboards

  • Manages the asset repository

  • Can create, build and deploy projects

  • Can use the JBDS connection to view processes

11.3.2.3. Analyst

Analyst is a weaker version of developer and does not have access to the asset repository or the ability to deploy projects.

11.3.2.4. Business user

Daily user of the system to take actions on business tasks that are required for the processes to continue forward. Works primarily with the task lists.

  • Does process management

  • Handles tasks and dashboards

11.3.2.5. Manager/Viewer-only User

Viewer of the system that is interested in statistics around the business processes and their performance, business indicators, and other reporting of the system and people who interact with the system.

  • Only has access to dashboards

11.4. Introduction

11.4.1. Log in and log out

Create a user with the role admin and log in with those credentials.

After successfully logging in, the account username is displayed at the top right. Click it to review the roles of the current account.

11.4.2. Home screen

After logging in, the home screen shows. The actual content of the home screen depends on the Business Central variant (Drools, jBPM, …​).

home

11.4.3. Business Central overview

Business Central is structured with Spaces and Projects:

workbenchStructureOverview
11.4.3.1. Space

Spaces are useful to model departments and divisions.

A Space can hold multiple Projects.

Space
11.4.3.2. Project

Projects are the place where assets are stored and each project belongs to a single Space.

Projects are in fact a Virtual File System based storage that by default uses Git as its backend. Such a setup allows Business Central to work with multiple backends and, at the same time, take full advantage of backend-specific features; in the case of Git these include versioning, branching and even external access.

A new Project can be created from scratch or cloned from an existing repository.

One of the biggest advantages of using Git as the backend is the ability to clone a repository externally and use your preferred tools to edit and build your assets.

Never clone your repositories directly from the .niogit directory.

11.4.4. Business Central user interface concepts

Business Central consists of different logical entities:

  • Part

    A Part is a screen or editor with which the user can interact to perform operations.

    Example Parts are "Project Explorer", "Project Editor", "Guided Rule Editor" etc.

  • Page

    A perspective is a logical grouping of related Panels and Parts. A perspective is usually referred to as a page, since that term is far more familiar to end users, whereas perspective is more developer oriented. Note, however, that Business Central supports both developer-created pages and those created by end users from the page builder (aka Content Management) tooling; generally speaking, page is used to refer to both.

    The user can switch between pages by clicking on one of the top-level menu items; such as "Home", "Authoring", "Deploy" etc.

11.5. Changing the layout

11.5.1. Resizing

Move the mouse pointer over the panel splitter (a grey horizontal or vertical line in between panels).

The cursor will change to indicate that it is positioned correctly over the splitter. Press and hold the left mouse button and drag the splitter to the required position; then release the left mouse button.

11.6. Authoring (General)

11.6.1. Artifact Repository

Projects often need external artifacts in their classpath in order to build, for example domain model JARs. The artifact repository holds those artifacts.

The Artifact Repository is a full blown Maven repository. It follows the semantics of a Maven remote repository: all snapshots are timestamped. But it is often stored on the local hard drive.

By default the artifact repository is stored under $WORKING_DIRECTORY/repositories/kie, but it can be overridden with the system property -Dorg.guvnor.m2repo.dir. There is only one Maven repository per installation.
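
For example (a WildFly/EAP startup sketch; the path is hypothetical):

    ./standalone.sh -Dorg.guvnor.m2repo.dir=/opt/custom/kie-repository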

The Artifact Repository screen shows a list of the artifacts in the Maven repository:

mavenRepositoryExplorer

To add a new artifact to that Maven repository, either:

  • Use the upload button and select a JAR. If the JAR contains a POM file under META-INF/maven (which every JAR built by Maven has), no further information is needed. Otherwise, a groupId, artifactId and version need to be given too.

mavenRepositoryUpload
  • Using Maven, run mvn deploy to that Maven repository. Refresh the list to make it show up.
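
A hedged sketch using the Maven deploy plugin (the repository URL, repository id and GAV below are placeholders; check your installation for the actual repository endpoint and credentials):

    mvn deploy:deploy-file -Dfile=my-model.jar \
        -DgroupId=com.myspace -DartifactId=my-model -Dversion=1.0.0 -Dpackaging=jar \
        -DrepositoryId=business-central \
        -Durl=http://localhost:8080/business-central/maven2/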

This remote Maven repository is relatively simple. It does not support proxying, mirroring, …​ like Nexus or Archiva.

11.6.2. Asset Editor

The Asset Editor is the principal component of the Business Central user interface. It consists of two main views, Editor and Overview.

  • The views

    AssetEditor edit
    Figure 51. The Asset Editor - Editor tab
    • A : The editing area - exactly what form the editor takes depends on the Asset type. An asset can only be edited by one user at a time to avoid conflicts. When a user begins to edit an asset, a lock will automatically be acquired. This is indicated by a lock symbol appearing on the asset title bar as well as in the project explorer view (see Project Explorer for details). If a user starts editing an already locked asset a pop-up notification will appear to inform the user that the asset can’t currently be edited, as it is being worked on by another user. Changes will be prevented until the editing user saves or closes the asset, or logs out of Business Central. Session timeouts will also cause locks to be released. Every user further has the option to force a lock release, if required (see the Metadata section below).

    • B : This menu bar contains various actions for the Asset; such as Save, Rename, Copy etc. Note that saving, renaming and deleting are deactivated if the asset is locked by a different user.

    • C : Different views for asset content or asset information.

      • Editor shows the main editor for the asset

      • Overview contains the metadata and conversation views for this editor. Explained in more detail below.

      • Source shows the asset in plain DRL. Note: This tab is only visible if the asset content can be generated into DRL.

      • Data Objects contains the model available for authoring. By default only Data Objects that reside within the same package as the asset are available for authoring. Data Objects outside of this package can be imported to become available for authoring the asset.

    AssetEditor dataobjects
    Figure 52. The Asset Editor - Data Objects tab
  • Overview

    • A : General information about the asset and the asset’s description.

      "Type:" The format name of the type of Asset.

      "Description:" Description for the asset.

      "Used in projects:" Names the projects where this rule is used.

      "Last Modified:" Who made the last change and when.

      "Created on:" Who created the asset and when.

    • B : Version history for the asset. Selecting a version loads the selected version into this editor.

    • C : Meta data (from the "Dublin Core" standard)

    • D : Comments regarding the development of the Asset can be recorded here.

Overview
Figure 53. The Asset Editor - Overview tab
  • Metadata

    • A : Meta data:-

      "Tags:" A tagging system for grouping the assets.

      "Note:" A comment made when the Asset was last updated (i.e. why a change was made)

      "URI:" URI to the asset inside the Git repository.

      "Subject/Type/External link/Source" : Other miscellaneous meta data for the Asset.

      "Lock status" : Shows the lock status of the asset and, if locked, allows to force unlocking the asset.

Metadata
Figure 54. The Metadata tab
  • Locking

    Business Central supports pessimistic locking of assets. When one user starts editing an asset it is locked against changes by other users. The lock is held until a period of inactivity lapses, the Editor is closed or the application is stopped and restarted. Locks can also be forcibly removed on the Metadata section of the Overview tab.

    A "padlock" icon is shown in the Editor’s title bar and beside the asset in the Project Explorer when an asset is locked.

    AssetEditor locked
    Figure 55. The Asset Editor - Locked assets cannot be edited by other users

11.6.3. Tags Editor

Tags allow assets to be labelled with any number of tags that you define. These tags can be used to filter assets on the Project Explorer enabling "Tag filtering".

11.6.3.1. Creating Tags

To create tags you simply have to write them in the Tags input and press the "Add new Tag/s" button. The Tag Editor allows creating tags one by one or writing several separated by white space.

CreatingTags
Figure 56. Creating Tags

Once you have created new Tags they will appear above the Editor, allowing you to remove them by clicking on them if you want.

ExistingTags
Figure 57. Existing Tags

11.6.4. Project Explorer

The Project Explorer provides the ability to browse files inside the current Project. The Project Explorer can be accessed from the left side when an Asset Editor is open.

11.6.4.1. Initial view

If a file is currently being edited by another user, a lock symbol will be displayed in front of the file name. The symbol is blue in case the lock is owned by the currently authenticated user, otherwise black. Moving the mouse pointer over the lock symbol will display a tooltip providing the name of the user who is currently editing the file (and therefore owning the lock). To learn more about locking see Asset Editor for details.

ProjectExplorer Project Expanded
Figure 58. Expanded asset group
11.6.4.2. Different views

Project Explorer supports multiple views.

  • Project View

    A simplified view of the underlying project structure. Certain system files are hidden from view.

  • Repository View

    A complete view of the underlying project structure including all files; either user-defined or system generated.

Views can be selected by clicking on the icon within the Project Explorer, as shown below.

Both Project and Repository Views can be further refined by selecting either "Show as Folders" or "Show as Links".

ProjectExplorer Switching View
Figure 59. Switching view
Repository View examples
ProjectExplorer Repository Folders
Figure 60. Repository View - Folders
ProjectExplorer Repository Links
Figure 61. Repository View - Links
11.6.4.3. Download Project or Repository

Download Project and Download Repository make it possible to download the project or repository as a zip file.

ProjectExplorer Downloads
Figure 62. Repository and Project Downloads
11.6.4.4. Filtering by Tag

To make it easier to view the elements of packages that contain a lot of assets, it is possible to enable the Tag filter, which allows you to filter the assets by their tags.

To see how to add tags to an asset look at: Tags Editor

ProjectExplorer Tag Filter Enable
Figure 63. Enabling Filter by Tag
ProjectExplorer Tag Filter Show
Figure 64. Filter by Tag
ProjectExplorer Tag Filter Working
Figure 65. Filtering by Tag
11.6.4.5. Copy, Rename, Delete and Download Actions

Copy, rename and delete actions are available in Links mode, for packages in the Project View and for files and directories in the Repository View. The Download action is available for directories and downloads the selected directory as a zip file.

  • A : Copy

  • B : Rename

  • C : Delete

  • D : Download

ProjectExplorer Project Links Copy Rename Delete
Figure 66. Project View - Package actions

The Business Central roadmap includes refactoring and impact analysis tools, but it currently doesn’t have them. Until both tools are provided, make sure that your changes (copy/rename/delete) to packages, files or directories don’t have a major impact on your project.

In case your change has an unexpected impact, Business Central enables you to restore your repository using the Repository editor.

Files locked by other users as well as directories that contain such files cannot be renamed or deleted until the corresponding locks are released. If that is the case the rename and delete symbols will be deactivated. To learn more about locking see Asset Editor for details.

ProjectExplorer Delete NotAllowed

11.6.5. Project Editor

The Project Editor screen can be accessed from Project Explorer. Project Editor shows the settings for the currently active project.

Unlike most of the Business Central editors, the Project Editor edits more than one file, showing everything that is needed for configuring the KIE project in one place.

project editor menu
Figure 67. Project Screen and the different views
11.6.5.1. Build & Deploy

Build & Deploy builds the current project and deploys the KJAR into the Business Central internal Maven repository.

11.6.5.2. Project Settings

Project Settings edits the pom.xml file used by Maven.

Project General Settings

General settings provide tools for project name and GAV-data (Group, Artifact, Version). GAV values are used as identifiers to differentiate projects and versions of the same project.

general settings
Figure 68. Project Settings
Dependencies

The project may have any number of either internal or external dependencies. Dependency is a project that has been built and deployed to a Maven repository. Internal dependencies are projects built and deployed in the same Business Central as the project. External dependencies are retrieved from repositories outside of the current Business Central. Each dependency uses the GAV-values to specify the project name and version that is used by the project.

dependencies
Figure 69. Dependencies
Package Name White List

Classes and declared types in white listed packages show up as Data Objects that can be imported in assets. The full list is stored in the package-name-white-list file in each project root.

Package white list has three modes:

  • All packages included: Every package defined in this jar is white listed.

  • Packages not included: None of the packages listed in this jar are white listed.

  • Some packages included: Only part of the packages in the jar are white listed.

Metadata

Metadata for the pom.xml file.

11.6.5.3. KIE base Settings

KIE base Settings edits the kmodule.xml file used by Drools.

kmodule
Figure 70. KIE base Settings

For more information about the KIE base properties, check the Drools Expert documentation for kmodule.xml.

KIE bases and sessions

KIE bases and sessions lists the KIE bases and the KIE sessions specified for the project.

KIE base list

Lists all the KIE bases by name. Only one KIE base can be set as default.

KIE base properties

KIE base can include other KIE bases. The models, rules and any other content in the included KIE base will be visible and usable by the currently selected KIE base.

Rules and models are stored in packages. The packages property specifies what packages are included into this KIE base.

Equals behavior is explained in the Drools Expert part of the documentation.

Event processing mode is explained in the Drools Fusion part of the documentation.

KIE sessions

The table lists all the KIE sessions in the selected KIE base. There can be only one default of each type. The types are stateless and stateful. Clicking the pen-icon opens a popup that shows more properties for the KIE session.

Metadata

Metadata for the kmodule.xml

11.6.5.4. Imports

Settings edits the project.imports file used by the Business Central editors.

ExternalDataObjects
Figure 71. Imports
External Data Objects

Data Objects provided by the Java Runtime environment may need to be registered to be available to rule authoring where such Data Objects are not implicitly available as part of an existing Data Object defined within Business Central or a Project dependency. For example an Author may want to define a rule that checks for java.util.ArrayList in Working Memory. If a domain Data Object has a field of type java.util.ArrayList there is no need to create a registration.
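
A minimal sketch of such a rule, assuming java.util.ArrayList has been registered as an External Data Object (the package name is hypothetical):

    package com.myspace.demo;

    import java.util.ArrayList;

    rule "Non-empty list in Working Memory"
    when
        $list : ArrayList( size > 0 )
    then
        System.out.println( "Found a list with " + $list.size() + " elements" );
    end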

Metadata

Metadata for the project.imports file.

11.6.5.5. Duplicate GAV detection

When performing any of the following operations a check is made against all Maven Repositories, resolved for the Project, for whether the Project’s GroupId, ArtifactId and Version pre-exist. If a clash is found the operation is prevented, although this can be overridden by Users with the admin role.

The feature can be disabled by setting the System Property org.guvnor.project.gav.check.disabled to true.

Resolved repositories are those discovered in:-

  • The Project’s POM <repositories> section (or any parent POM).

  • The Project’s POM <distributionManagement> section.

  • Maven’s global settings.xml configuration file.

Affected operations:-

  • Creation of new Managed Repositories.

  • Saving a Project definition with the Project Editor.

  • Adding new Modules to a Managed Multi-Module Repository.

  • Saving the pom.xml file.

  • Build & installing a Project with the Project Editor.

  • Build & deploying a Project with the Project Editor.

  • Asset Management operations building, installing or deploying Projects.

  • REST operations creating, installing or deploying Projects.

Users with the Admin role can override the list of Repositories checked using the "Repositories" settings in the Project Editor.

validation menu item
Figure 72. Project Editor - Viewing resolved Repositories
MavenRepositories2
Figure 73. Project Editor - The list of resolved Repositories
MavenRepositories3
Figure 74. Duplicate GAV detected

11.6.6. Validation

Business Central provides a common and consistent service for users to understand whether files authored within the environment are valid.

11.6.6.1. Problem Panel

The Problems Panel shows real-time validation results of assets within a Project.

When a Project is selected from the Project Explorer the Problems Panel will refresh with validation results of the chosen Project.

When files are created, saved or deleted the Problems Panel content will update to show either new validation errors, or remove existing if a file was deleted.

workbench problems panel
Figure 75. The Problems Panel
11.6.6.2. On demand validation

It is not always desirable to save a file in order to determine whether it is in a valid state.

All of the file editors provide the ability to validate the content before it is saved.

Clicking on the 'Validate' button shows validation errors, if any.

workbench validation

11.6.7. Data Modeller

11.6.7.1. First steps to create a data model

By default, a data model is always constrained to the context of a project. For the purpose of this tutorial, we will assume that a correctly configured project already exists and the authoring page is open.

To start the creation of a data model inside a project, take the following steps:

  1. From the home panel, select the Design page and select the given project.

    authoring
    Figure 76. Go to authoring page and select a project
  2. Open the Data Modeller tool by clicking on a Data Object file, or using the "Add Asset → Data Object" menu option. Set Data Object name to "PurchaseOrder" and click Ok.

    open data model
    Figure 77. Click a Data Object

This will start up the Data Modeller tool, which has the following general aspect:

overview
Figure 78. Data modeller overview

The "Editor" tab is divided into the following sections:

  • The new field section is dedicated to the creation of new fields, and is opened when the "add field" button is pressed.

    create new field
    Figure 79. New field creation
  • The Data Object’s "field browser" section displays a list with the data object fields.

    data object field browser
    Figure 80. The Data Object’s field browser
  • The "Data Object / Field general properties" section. This is the rightmost section of the Data Modeller editor and visualizes the "Data Object" or "Field" general properties, depending on user selection.

    Data Object general properties can be selected by clicking on the Data Object Selector.

    data object selector
    Figure 81. Data Object selector
    data object general properties
    Figure 82. Data Object general properties

    Field general properties can be selected by clicking on a field.

field selector
Figure 83. Field selector
field general properties
Figure 84. Field general properties
  • On the right side of Business Central a new "Tool Bar" is provided that enables the selection of different context sensitive tool windows that will let the user do domain specific configurations. Currently four tool windows are provided for the following domains "Drools & jBPM", "OptaPlanner", "Persistence" and "Advanced" configurations.

    tool window selector
    Figure 85. Data modeller Tool Bar
    data object drools tool window
    Figure 86. Drools & jBPM tool window
    data object optaplanner tool window
    Figure 87. OptaPlanner tool window

    To see and use the OptaPlanner tool window, the user needs to have the role plannermgmt.

    data object persistence tool window
    Figure 88. Persistence tool window
    data object or field advanced tool window
    Figure 89. Advanced tool window

The "Source" tab shows an editor that allows the visualization and modification of the generated java code.

  • Round trip between the "Editor" and "Source" tabs is possible, and also source code preservation is provided. It means that not matter where the Java code was generated (e.g. Eclipse, Data modeller), the data modeller will only update the necessary code blocks to maintain the model updated.

    source editor tab
    Figure 90. Source editor

The "Overview" tab shows the standard metadata and version information as the other workbench editors.

11.6.7.2. Data Objects

A data model consists of data objects which are a logical representation of some real-world data. Such data objects have a fixed set of modeller (or application-owned) properties, such as its internal identifier, a label, description, package etc. Besides those, a data object also has a variable set of user-defined fields, which are an abstraction of a real-world property of the type of data that this logical data object represents.

Creating a data object can be achieved using the Business Central "New Item - Data Object" menu option.

create new data object
Figure 91. New Data Object menu option

Both resource name and location are mandatory parameters. When the "Ok" button is pressed a new Java file will be created and a new editor instance will be opened for editing the file. The optional "Persistable" attribute will add default configurations to the data object in order to make it a JPA entity. Use this option if your jBPM project needs to store the data object’s information in a database.
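
As a rough illustration only (simplified; the code actually generated by the data modeller may differ, and the generator names below are hypothetical), a persistable data object ends up with JPA annotations along these lines:

    @javax.persistence.Entity
    public class PurchaseOrder implements java.io.Serializable {

        static final long serialVersionUID = 1L;

        // identifier field created by default for a persistable Data Object;
        // generator and sequence names are illustrative only
        @javax.persistence.Id
        @javax.persistence.GeneratedValue(generator = "PURCHASEORDER_ID_GENERATOR",
                strategy = javax.persistence.GenerationType.AUTO)
        @javax.persistence.SequenceGenerator(name = "PURCHASEORDER_ID_GENERATOR",
                sequenceName = "PURCHASEORDER_ID_SEQ")
        private Long id;

        // user-defined fields, getters and setters omitted
    }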

11.6.7.3. Properties & relationships

Once the data object has been created, it now has to be completed by adding user-defined properties to its definition. This can be achieved by pressing the "add field" button. The "New Field" dialog will be opened and the new field can be created by pressing the "Create" button. The "Create and continue" button will also add the new field to the Data Object, but won’t close the dialog. In this way multiple fields can be created avoiding the popup opening multiple times. The following fields can (or must) be filled out:

  • The field’s internal identifier (mandatory). The value of this field must be unique per data object, i.e. if the proposed identifier already exists within current data object, an error message will be displayed.

  • A label (optional): as with the data object definition, the user can define a user-friendly label for the data object field which is about to be created. This has no further implications on how fields from objects of this data object will be treated. If a label is defined, then this is how the field will be displayed throughout the data modeller tool.

  • A field type (mandatory): each data object field needs to be assigned with a type.

    This type can be either of the following:

    1. A 'primitive java object' type: these include most of the object equivalents of the standard Java primitive types, such as Boolean, Short, Float, etc, as well as String, Date, BigDecimal and BigInteger.

      create field with primitive type
      Figure 92. Primitive object field types
    2. A 'data object' type: any user defined data object automatically becomes a candidate to be defined as a field type of another data object, thus enabling the creation of relationships between them. A data object field can be created either in 'single' or in 'multiple' form, the latter implying that the field will be defined as a collection of this type, which will be indicated by selecting "List" checkbox.

types entity
Figure 93. Data object field types
    3. A 'primitive java' type: these include java primitive types byte, short, int, long, float, double, char and boolean.

types primitive
Figure 94. Primitive field types

When finished introducing the initial information for a new field, clicking the 'Create' button will add the newly created field to the end of the data object’s fields table below:

new field was created
Figure 95. New field has been created

The new field will also automatically be selected in the data object’s field list, and its properties will be shown in the Field general properties editor. Additionally the field properties will be loaded in the different tool windows; in this way the field will be ready for editing in whichever tool window is selected.

At any time, any field (without restrictions) can be deleted from a data object definition by clicking on the corresponding 'x' icon in the data object’s fields table.

11.6.7.4. Additional options

As stated before, both Data Objects as well as Fields require some of their initial properties to be set upon creation. Additionally there are three domains of properties that can be configured for a given Data Object. A domain is basically a set of properties related to a given business area. Currently available domains are "Drools & jBPM", "Persistence" and the "Advanced" domain. To work on a given domain the user should select the corresponding "Tool window" (see below) on the right side toolbar. Every tool window usually provides two editors, the "Data Object" level editor and the "Field" level editor, that will be shown depending on the last selected item, the Data Object or the Field.

Drools & jBPM domain

The Drools & jBPM domain editors manage the set of Data Object or Field properties related to Drools applications.

Drools & jBPM object editor

The Drools & jBPM object editor manages the object level drools properties

data object drools tool window
Figure 96. The data object’s properties
  • TypeSafe: this property allows to enable/disable the type safe behaviour for current type. By default all type declarations are compiled with type safety enabled. (See Drools for more information on this matter).

  • ClassReactive: this property allows to mark this type to be treated as "Class Reactive" by the Drools engine. (See Drools for more information on this matter).

  • PropertyReactive: this property allows to mark this type to be treated as "Property Reactive" by the Drools engine. (See Drools for more information on this matter).

  • Role: this property allows to configure how the Drools engine should handle instances of this type: either as regular facts or as events. By default all types are handled as a regular fact, so for the time being the only value that can be set is "Event" to declare that this type should be handled as an event. (See Drools Fusion for more information on this matter).

  • Timestamp: this property allows to configure the "timestamp" for an event, by selecting one of its attributes. If set the Drools engine will use the timestamp from the given attribute instead of reading it from the Session Clock. If not, the Drools engine will automatically assign a timestamp to the event. (See Drools Fusion for more information on this matter).

  • Duration: this property allows to configure the "duration" for an event, by selecting one of its attributes. If set the Drools engine will use the duration from the given attribute instead of using the default event duration = 0. (See Drools Fusion for more information on this matter).

  • Expires: this property allows to configure the "time offset" for an event expiration. If set, this value must be a temporal interval in the form: [#d][#h][#m][#s][[ms]] where [ ] means an optional parameter and # means a numeric value. e.g.: 1d2h means one day and two hours. (See Drools Fusion for more information on this matter).

  • Remotable: If checked this property makes the Data Object available to be used with jBPM remote services as REST, JMS and WS. (See jBPM for more information on this matter).

Drools & jBPM field editor

The Drools & jBPM field editor manages the field level Drools properties

field drools tool window
Figure 97. The data object’s field properties
  • Equals: checking this property for a Data Object field implies that it will be taken into account, at the code generation level, for the creation of both the equals() and hashCode() methods in the generated Java class. We will explain this in more detail in the following section.

  • Position: this field requires a zero or positive integer. When set, this field will be interpreted by the Drools engine as a positional argument (see the section below and also the Drools documentation for more information on this subject).

Persistence domain

The Persistence domain editors manage the set of Data Object or Field properties related to persistence.

Persistence domain object editor

Persistence domain object editor manages the object level persistence properties

data object persistence tool window
Figure 98. The data object’s properties
  • Persistable: this property allows to configure current Data Object as persistable.

  • Table name: this property allows to set a user defined database table name for current Data Object.

Persistence domain field editor

The persistence domain field editor manages the field level persistence properties and is divided in three sections.

field persistence tool window sections
Figure 99. Persistence domain field editor sections
Identifier:

A persistable Data Object should have one and only one field defined as the Data Object identifier. The identifier is typically a unique number that distinguishes a given Data Object instance from all other instances of the same class.

  • Is Identifier: marks the current field as the Data Object identifier. A persistable Data Object should have one and only one field marked as identifier, and it should be a base java type, like String, Integer, Long, etc. A field that references a Data Object, or is a multiple field, can not be marked as identifier. Composite identifiers are not supported in this version. When a persistable Data Object is created an identifier field is created by default with the proper initializations; it’s strongly recommended to use this identifier.

  • Generation Strategy: the generation strategy establishes how the identifier values will be automatically generated when the Data Object instances are created and stored in a database (e.g. by the forms associated with jBPM process human tasks). When the default Identifier field is created, the generation strategy will also be automatically set and it’s strongly recommended to use this configuration.

  • Sequence Generator: the generator represents the seed for the values that will be used by the Generation Strategy. When the default Identifier field is created the Sequence Generator will also be automatically generated and properly configured to be used by the Generation Strategy.

Column Properties:

The column properties section enables the customization of some properties of the database column that will store the field value.

  • Column name: optional value that sets the database column name for the given field.

  • Unique: When checked the unique property establishes that current field value should be a unique key when stored in the database. (if not set the default value is false)

  • Nullable: When checked establishes that current field value can be null when stored in a database. (if not set the default value is true)

  • Insertable: When checked establishes that column will be included in SQL INSERT statements generated by the persistence provider. (if not set the default value is true)

  • Updatable: When checked establishes that the column will be included in SQL UPDATE statements generated by the persistence provider. (if not set the default value is true)

Relationship Properties:

When the field’s type is a Data Object type, or a list of a Data Object type, a relationship type should be set in order to let the persistence provider manage the relation. Fortunately this relation type is automatically set when such fields are added to a Data Object already marked as persistable. The relationship type is set by the following popup.

field persistence tool window sections relationship dialog
Figure 100. Relationship configuration popup
  • Relationship type: sets the type of relation from one of the following options:

    One to one: typically used for 1:1 relations where "A is related to one instance of B", and B exists only when A exists. e.g. PurchaseOrder → PurchaseOrderHeader (a PurchaseOrderHeader exists only if the PurchaseOrder exists)

    One to many: typically used for 1:N relations where "A is related to N instances of B", and the related instances of B exists only when A exists. e.g. PurchaseOrder → PurchaseOrderLine (a PurchaseOrderLine exists only if the PurchaseOrder exists)

    Many to one: typically used for 1:1 relations where "A is related to one instance of B", and B can exist even without A. e.g. PurchaseOrder → Client (a Client can exist in the database even without an associated PurchaseOrder)

    Many to many: typically used for N:N relations where "A can be related to N instances of B, and B can be related to M instances of A at the same time", and both B and A instances can exist in the database independently of the related instances. e.g. Course → Student. (A Course can be related to N Students, and a given Student can attend M courses)

    When a field of type "Data Object" is added to a given persistable Data Object, the "Many to One" relationship type is generated by default.

    And when a field of type "list of Data Object" is added to a given persistable Data Object , the "One to Many" relationship is generated by default.

  • Cascade mode: Defines the set of cascadable operations that are propagated to the associated entity. The value cascade=ALL is equivalent to cascade={PERSIST, MERGE, REMOVE, REFRESH}. e.g. when A → B, and cascade "PERSIST or ALL" is set, if A is saved, then B will be also saved.

    The default cascade mode created by the data modeller is "ALL" and it’s strongly recommended to use this mode when Data Objects are being used by jBPM processes and forms.

  • Fetch mode: Defines how related data will be fetched from database at reading time.

    EAGER: related data will be read at the same time. e.g. If A → B, when A is read from database B will be read at the same time.

    LAZY: reading of related data will be delayed usually to the moment they are required. e.g. If PurchaseOrder → PurchaseOrderLine the lines reading will be postponed until a method "getLines()" is invoked on a PurchaseOrder instance.

    The default fetch mode created by the data modeller is "EAGER" and it’s strongly recommended to use this mode when Data Objects are being used by jBPM processes and forms.

  • Optional: establishes if the right side member of a relationship can be null.

  • Mapped by: used for reverse relations.

Advanced domain

The advanced domain enables the configuration of any parameter set by the other domains as well as the addition of arbitrary parameters. As will be shown in the code generation section, every "Data Object / Field" parameter is represented by a Java annotation. The advanced mode enables the configuration of these annotations.

Advanced domain Data Object / Field editor.

The advanced domain editor has the same shape for both Data Object and Field.

data object or field advanced tool window
Figure 101. Advanced domain editor.

The following operations are available

  • delete: enables the deletion of a given Data Object or Field annotation.

  • clear: clears a given annotation parameter value.

  • edit: enables editing of a given annotation parameter value.

  • add annotation: The add annotation button will start a wizard that allows the addition of any Java annotation available in the project dependencies.

    Add annotation wizard step #1: the first step of the wizard requires the entering of a fully qualified class name of an annotation, and by pressing the "search" button the annotation definition will be loaded into the wizard. Additionally when the annotation definition is loaded, different wizard steps will be created in order to enable the completion of the different annotation parameters. Required parameters will be marked with "*".

    add annotation wizard step1 annotation loaded
    Figure 102. Annotation definition loaded into the wizard.

    Whenever it’s possible the wizard will provide a suitable editor for the given parameters.

    add annotation wizard step2 enum param editor
    Figure 103. Automatically generated enum values editor for an Enumeration annotation parameter.

    A generic parameter editor will be provided when it’s not possible to calculate a customized editor

    add annotation wizard step2 generic param editor
    Figure 104. Generic annotation parameter editor

    When all required parameters have been entered and validated, the finish button will be enabled and the wizard can be completed by adding the annotation to the given Data Object or Field.

11.6.7.5. Generate data model code.

The data model in itself is merely a visual tool that allows the user to define high-level data structures, for them to interact with the Drools engine on the one hand, and the jBPM platform on the other. In order for this to become possible, these high-level visual structures have to be transformed into low-level artifacts that can effectively be consumed by these platforms. These artifacts are Java POJOs (Plain Old Java Objects), and they are generated every time the data model is saved, by pressing the "Save" button in the top Data Modeller Menu. Additionally, when the user round-trips between the "Editor" and "Source" tabs, the code is auto-generated to maintain consistency with the Editor view and vice versa.

save top
Figure 105. Save the data model from the top menu

The resulting code is generated according to the following transformation rules:

  • The data object’s identifier property will become the Java class’s name. It therefore needs to be a valid Java identifier.

  • The data object’s package property becomes the Java class’s package declaration.

  • The data object’s superclass property (if present) becomes the Java class’s extension declaration.

  • The data object’s label and description properties will translate into the Java annotations "@org.kie.api.definition.type.Label" and "@org.kie.api.definition.type.Description", respectively. These annotations are merely a way of preserving the associated information, and as yet are not processed any further.

  • The data object’s role property (if present) will be translated into the "@org.kie.api.definition.type.Role" Java annotation, that IS interpreted by the application platform, in the sense that it marks this Java class as a Drools Event Fact-Type.

  • The data object’s type safe property (if present) will be translated into the "@org.kie.api.definition.type.TypeSafe" Java annotation. (see Drools)

  • The data object’s class reactive property (if present) will be translated into the "@org.kie.api.definition.type.ClassReactive" Java annotation. (see Drools)

  • The data object’s property reactive property (if present) will be translated into the "@org.kie.api.definition.type.PropertyReactive" Java annotation. (see Drools)

  • The data object’s timestamp property (if present) will be translated into the "@org.kie.api.definition.type.Timestamp" Java annotation. (see Drools)

  • The data object’s duration property (if present) will be translated into the "@org.kie.api.definition.type.Duration" Java annotation. (see Drools)

  • The data object’s expires property (if present) will be translated into the "@org.kie.api.definition.type.Expires" Java annotation. (see Drools)

  • The data object’s remotable property (if present) will be translated into the "@org.kie.api.remote.Remotable" Java annotation. (see jBPM)

A standard Java default (or no parameter) constructor is generated, as well as a full parameter constructor, i.e. a constructor that accepts as parameters a value for each of the data object’s user-defined fields.

The data object’s user-defined fields are translated into Java class fields, each one of them with its own getter and setter method, according to the following transformation rules:

  • The data object field’s identifier will become the Java field identifier. It therefore needs to be a valid Java identifier.

  • The data object field’s type is directly translated into the Java class’s field type. In case the field was declared to be multiple (i.e. 'List'), then the generated field is of the "java.util.List" type.

  • The equals property: when it is set for a specific field, then this class property will be annotated with the "@org.kie.api.definition.type.Key" annotation, which is interpreted by the Drools engine, and it will 'participate' in the generated equals() method, which overrides the equals() method of the Object class. The latter implies that if the field is a 'primitive' type, the equals method will simply compare its value with the value of the corresponding field in another instance of the class. If the field is a sub-entity or a collection type, then the equals method will make a method-call to the equals method of the corresponding data object’s Java class, or of the java.util.List standard Java class, respectively.

    If the equals property is checked for ANY of the data object’s user defined fields, then this also implies that in addition to the default generated constructors another constructor is generated, accepting as parameters all of the fields that were marked with Equals. Furthermore, generation of the equals() method also implies that the Object class’s hashCode() method is overridden, in such a manner that it will call the hashCode() methods of the corresponding Java class types (be it 'primitive' or user-defined types) for all the fields that were marked with Equals in the Data Model.

  • The position property: this field property is automatically set for all user-defined fields, starting from 0, and incrementing by 1 for each subsequent new field. However the user can freely change the position among the fields. At code generation time this property is translated into the "@org.kie.api.definition.type.Position" annotation, which can be interpreted by the Drools engine. Also, the established property order determines the order of the constructor parameters in the generated Java class.

As an example, the generated Java class code for the Purchase Order data object, corresponding to the definition shown in the following figure, is listed at the bottom of this chapter. Note that two of the data object’s fields, namely 'header' and 'lines', were marked with Equals and have been assigned positions 2 and 1, respectively.

generate purchase example
Figure 106. Purchase Order configuration
    package org.jbpm.examples.purchases;

    /**
    * This class was automatically generated by the data modeler tool.
    */
    @org.kie.api.definition.type.Label("Purchase Order")
    @org.kie.api.definition.type.TypeSafe(true)
    @org.kie.api.definition.type.Role(org.kie.api.definition.type.Role.Type.EVENT)
    @org.kie.api.definition.type.Expires("2d")
    @org.kie.api.remote.Remotable
    public class PurchaseOrder implements java.io.Serializable
    {

    static final long serialVersionUID = 1L;

    @org.kie.api.definition.type.Label("Total")
    @org.kie.api.definition.type.Position(3)
    private java.lang.Double total;

    @org.kie.api.definition.type.Label("Description")
    @org.kie.api.definition.type.Position(0)
    private java.lang.String description;

    @org.kie.api.definition.type.Label("Lines")
    @org.kie.api.definition.type.Position(2)
    @org.kie.api.definition.type.Key
    private java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines;

    @org.kie.api.definition.type.Label("Header")
    @org.kie.api.definition.type.Position(1)
    @org.kie.api.definition.type.Key
    private org.jbpm.examples.purchases.PurchaseOrderHeader header;

    @org.kie.api.definition.type.Position(4)
    private java.lang.Boolean requiresCFOApproval;

    public PurchaseOrder()
    {
    }

    public java.lang.Double getTotal()
    {
    return this.total;
    }

    public void setTotal(java.lang.Double total)
    {
    this.total = total;
    }

    public java.lang.String getDescription()
    {
    return this.description;
    }

    public void setDescription(java.lang.String description)
    {
    this.description = description;
    }

    public java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> getLines()
    {
    return this.lines;
    }

    public void setLines(java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines)
    {
    this.lines = lines;
    }

    public org.jbpm.examples.purchases.PurchaseOrderHeader getHeader()
    {
    return this.header;
    }

    public void setHeader(org.jbpm.examples.purchases.PurchaseOrderHeader header)
    {
    this.header = header;
    }

    public java.lang.Boolean getRequiresCFOApproval()
    {
    return this.requiresCFOApproval;
    }

    public void setRequiresCFOApproval(java.lang.Boolean requiresCFOApproval)
    {
    this.requiresCFOApproval = requiresCFOApproval;
    }

    public PurchaseOrder(java.lang.Double total, java.lang.String description,
    java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines,
    org.jbpm.examples.purchases.PurchaseOrderHeader header,
    java.lang.Boolean requiresCFOApproval)
    {
    this.total = total;
    this.description = description;
    this.lines = lines;
    this.header = header;
    this.requiresCFOApproval = requiresCFOApproval;
    }

    public PurchaseOrder(java.lang.String description,
    org.jbpm.examples.purchases.PurchaseOrderHeader header,
    java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines,
    java.lang.Double total, java.lang.Boolean requiresCFOApproval)
    {
    this.description = description;
    this.header = header;
    this.lines = lines;
    this.total = total;
    this.requiresCFOApproval = requiresCFOApproval;
    }

    public PurchaseOrder(
    java.util.List<org.jbpm.examples.purchases.PurchaseOrderLine> lines,
    org.jbpm.examples.purchases.PurchaseOrderHeader header)
    {
    this.lines = lines;
    this.header = header;
    }

    @Override
    public boolean equals(Object o)
    {
    if (this == o)
    return true;
    if (o == null || getClass() != o.getClass())
    return false;
    org.jbpm.examples.purchases.PurchaseOrder that = (org.jbpm.examples.purchases.PurchaseOrder) o;
    if (lines != null ? !lines.equals(that.lines) : that.lines != null)
    return false;
    if (header != null ? !header.equals(that.header) : that.header != null)
    return false;
    return true;
    }

    @Override
    public int hashCode()
    {
    int result = 17;
    result = 31 * result + (lines != null ? lines.hashCode() : 0);
    result = 31 * result + (header != null ? header.hashCode() : 0);
    return result;
    }

    }
11.6.7.6. Using external models

Using an external model means being able to use a set of already defined POJOs in the current project context. To make those POJOs available, a dependency on the JAR that contains them must be added. Once the dependency has been added, the external POJOs can be referenced from the current project’s data model.

There are two ways to add a dependency to an external JAR file:

  • A dependency on a JAR file already installed in the local Maven (M2) repository (typically located under the user’s home directory).

  • A dependency on a JAR file installed in the Business Central "Guvnor M2 repository" (internal to the application).

Dependency on a JAR file in the local M2 repository

To add a dependency on a JAR file in the local M2 repository, follow these steps.

Click the "Add" button to add a new dependency line.
add dependency 2
Figure 108. New dependency line.
Save the project to update its dependencies.

When the project is saved, the POJOs defined in the external JAR will be available.

add dependency 4
Figure 110. Save project.
Dependency on a JAR file in the Business Central "Guvnor M2 repository"

To add a dependency on a JAR file in the Business Central "Guvnor M2 repository", follow these steps.

Open the Maven Artifact Repository editor.
add dependency guvnor m2 1
Figure 111. Guvnor M2 Repository editor.
Upload the file using the Upload button.
add dependency guvnor m2 3
Figure 113. File upload success.
Guvnor M2 repository files.

Once the file has been uploaded, it will be displayed in the repository files list.

add dependency guvnor m2 4
Figure 114. Files list.
Provide a GAV for the uploaded file (optional).

If the uploaded file is not a valid Maven JAR (i.e. it does not contain a pom.xml file), the system will prompt the user to provide a GAV so the file can be installed.

add dependency guvnor m2 not gav 1
Figure 115. Not valid POM.
add dependency guvnor m2 not gav 2
Figure 116. Enter GAV manually.
Add dependency from repository.

Open the Project Editor (see the figure below) and click the "Add from repository" button to open the JAR selector listing all the JAR files installed in the "Guvnor M2 repository". Once the desired file is selected, the project should be saved in order to make the new dependency available.

add dependency guvnor m2 5
Figure 117. Select JAR from "Maven Artifact Repository".
Using the external objects

Once a dependency on an external JAR has been set, the external POJOs can be used in the current project’s data model in the following ways:

  • External POJOs can be extended by current model data objects.

  • External POJOs can be used as field types for current model data objects.

The following screenshot shows how external objects are prefixed with the string "-ext-" so they can be quickly identified; a small usage sketch follows the figure.

add dependency select external pojo
Figure 118. Identifying external objects.
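
For illustration only, the following sketch shows both usages in plain Java; the package and class names of the external POJOs (com.acme.external.Customer and com.acme.external.Address) are hypothetical placeholders for classes coming from the JAR dependency added above:

package org.jbpm.examples.purchases;

// Hypothetical external POJOs provided by the JAR dependency added above.
import com.acme.external.Address;
import com.acme.external.Customer;

// A data object of the current project extending an external POJO ...
public class PreferredCustomer extends Customer implements java.io.Serializable {

    static final long serialVersionUID = 1L;

    // ... and using another external POJO as a field type.
    private Address billingAddress;

    public Address getBillingAddress() {
        return billingAddress;
    }

    public void setBillingAddress(Address billingAddress) {
        this.billingAddress = billingAddress;
    }
}
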
11.6.7.7. Roundtrip and concurrency

The current version implements round-trip editing and code preservation between the data modeller and the Java source code. No matter where the Java code was generated (e.g. Eclipse or the data modeller), the data modeller only creates, deletes or updates the code elements needed to keep the model up to date, i.e. fields, getters/setters, constructors, and the equals and hashCode methods. Any type or field annotation not managed by the data modeller is preserved when the Java sources are updated by the data modeller.

Aside from code preservation, as in the other Business Central editors, concurrent modification scenarios are still possible. A common case is two different users updating the model for the same project, e.g. one using the data modeller while the other executes a 'git push' command that modifies the project sources.

From the application’s perspective, two main scenarios can be identified:

No changes have been undertaken through the application

In this scenario the application user has basically just been navigating through the data model, without making any changes to it. Meanwhile, another user modifies the data model externally.

In this case, no immediate warning is issued to the application user. However, as soon as the user tries to make any kind of change, such as adding or removing data objects or properties, or changing any of the existing ones, the following pop-up will be shown:

extchanges reopen ignore
Figure 119. External changes warning

The user can choose to either:

  • Re-open the data model, thus loading any external changes, and then perform the modification he was about to undertake, or

  • Ignore any external changes, and go ahead with the modification to the model. In this case, when trying to persist these changes, another pop-up warning will be shown:

    extchanges forcesave reopen
    Figure 120. Force save / re-open

    The "Force Save" option will effectively overwrite any external changes, while "Re-open" will discard any local changes and reload the model.

    "Force Save" overwrites any external changes!

Changes have been undertaken through the application

The application user has made changes to the data model. Meanwhile, another user simultaneously modifies the data model from outside the application context.

In this alternative scenario, immediately after the external user commits his changes to the asset repository (or e.g. saves the model with the data modeller in a different session), a warning is issued to the application user:

extchanges reopen ignore
Figure 121. External changes warning

As with the previous scenario, the user can choose to either:

  • Re-open the data model, thus losing any modifications that were made through the application, or

  • Ignore any external changes, and continue working on the model.

    One of the following possibilities can now occur:

    • The user tries to persist the changes made to the model by clicking the "Save" button in the data modeller top level menu. This leads to the following warning message:

    extchanges forcesave reopen
    Figure 122. Force save / re-open

    The "Force Save" option will effectively overwrite any external changes, while "Re-open" will discard any local changes and reload the model.

11.6.8. Data Sets

A data set is basically a set of columns populated with some rows, a matrix of data composed of timestamps, texts and numbers. A data set can be stored in different systems: a database, an Excel file, in memory or in many other systems. On the other hand, a data set definition tells Business Central modules how such data can be accessed, read and parsed.

Note that it is important to clearly distinguish between a data set and its definition: Business Central does not store any data itself, it just provides a standard way to define access to data sets regardless of where the data is stored.

Take, for instance, data stored in a remote database. A valid data set could be, for example, an entire database table or the result of an SQL query. In both cases, the database returns a set of columns and rows. Now, imagine we want to access such data to feed some charts in a new Business Central page. The first thing to do is to create and register a data set definition indicating the following:

  • where the data set is stored,

  • how it can be accessed, read and parsed, and

  • what columns it contains and of which types.

This chapter introduces the available Business Central tools for registering and handling data set definitions and how these definitions can be consumed in other Business Central modules like, for instance, the Page Editor.

For simplicity’s sake, we will use the term data set to refer to the actual data set definitions, as data set and data set definition can be considered synonyms in the data set authoring context.
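
As a purely illustrative sketch (the class below is hypothetical and not part of the Business Central API), a data set definition conceptually captures the three pieces of information listed above:

import java.util.Map;

// Hypothetical model of a data set definition: where the data lives,
// how to reach it, and which columns (and types) it exposes.
public class DataSetDefinitionSketch {

    enum ColumnType { LABEL, TEXT, NUMBER, DATE }

    String uuid;                        // unique identifier used by consumers (pages, displayers)
    String providerType;                // e.g. "SQL", "CSV", "BEAN", "ElasticSearch"
    Map<String, String> accessConfig;   // e.g. data source, table or query, file URL...
    Map<String, ColumnType> columns;    // column names and their data set column types

    // Consumers (e.g. charts) never talk to the remote system directly;
    // they only issue lookup calls against the definition's uuid.
}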

11.6.8.1. Data Set Authoring Page

Everything related to the authoring of data sets can be found under the Data Set Authoring page which is accessible from the following top level menu entry: Extensions>Data Sets, as shown in the following screenshot.

DataSetAuthoringPerspective
Figure 123. Data Set Authoring Page

The center panel shows a welcome screen, whilst the left panel contains the Data Set Explorer, listing all the available data sets.

This page is only intended for administrator users, since defining data sets can be considered a low-level task.

11.6.8.2. Data Set Explorer

The Data Set Explorer lists the data sets present in the system. Every time the user clicks on a data set, it shows a brief summary along with the following information:

DataSetExplorer
Figure 124. Data Set Explorer
  • (1) A button for creating a new Data set

  • (2) The list of currently available Data sets

  • (3) An icon that represents the Data set’s provider type (Bean, SQL, CSV, etc)

  • (4) Details of current cache and refresh policy status

  • (5) Details of the current size on the backend (in rows) and the current size on the client side (in bytes)

  • (6) The button for editing the Data set. Once clicked the Data set editor screen is opened on the center panel

The next sections explain how to create, edit and fine-tune data set definitions.

11.6.8.3. Data Set Creation

Clicking on the New Data Set button opens a new screen from which the user is able to create a new data set definition in three steps:

  • Provider type selection

    Specify the kind of the remote storage system (BEAN, SQL, CSV, ElasticSearch)

  • Provider configuration

    Specify the attributes needed to look up data from the remote system. The configuration varies depending on the data provider type selected.

  • Data set columns & filter

    Live data preview, column types and initial filter configuration.

Step 1: Provider type selection

Allows the user to specify the type of data provider for the data set being created.

This screen lists all the currently available data provider types, with helper popovers providing descriptions. Each data provider is represented with a descriptive image:

DataSetDefTypeSelection
Figure 125. Provider type selection

Four types are currently supported:

  • Bean (Java class) - To generate a data set directly from Java

  • SQL - For getting data from any ANSI-SQL compliant database

  • CSV - To upload the contents of a remote or local CSV file

  • Elastic Search - To query and get documents stored on Elastic Search nodes as data sets

Once a type is selected, click Next to continue with the next workflow step.

Step 2: Configuration
DataSetDefConfigScreen
Figure 126. CSV Configuration

The provider type selected in the previous step will determine which configuration settings the system asks for.

DataSetDefConfigTypes
Figure 127. Configuration screen per data set type

The UUID attribute is a read only field as it’s generated by the system. It’s only intended for usage in API calls or specific operations.

Step 3: Data set columns and preview

After clicking on the Test button (see previous step), the system executes a data set lookup test call in order to check if the remote system is up and the data is available. If everything goes ok the user will see the following screen:

DataSetDefLivePreview
Figure 128. Data set preview

This screen shows a live data preview along with the columns the user wants to be part of the resulting data set. The user can also navigate through the data and apply some changes to the data set structure. Once finished, we can click the Save button in order to register the new data set definition.

We can also change the configuration settings at any time just by going back to the configuration tab. We can repeat the Configuration>Test>Preview cycle as many times as needed until we consider the definition ready to be saved.

Columns

In the Columns tab area the user can select what columns are part of the resulting data set definition.

DataSetDefColumns
Figure 129. Data set columns
  • (1) To add or remove columns. Select only those columns you want to be part of the resulting data set

  • (2) Use the drop down image selector to change the column type

A data set may only contain columns of any of the following 4 types:

  • Label - For text values supporting group operations (similar to the SQL "group by" operator) which means you can perform data lookup calls and get one row per distinct value.

  • Text - For text values NOT supporting group operations. Typically for modeling large text columns such as abstracts, descriptions and the like.

  • Number - For numeric values. It does support aggregation functions on data lookup calls: sum, min, max, average, count, distinct.

  • Date - For date or timestamp values. It does support time based group operations by different time intervals: minute, hour, day, month, year, …​

No matter which remote system you want to retrieve data from, the resulting data set will always return a set of columns of one of the four types above. There exists, by default, a mapping between the remote system column types and the data set types. The user is able to modify the type for some columns, depending on the data provider and the column type of the remote system. The system supports the following changes to column types:

  • Label <> Text - Useful when we want to enable/disable the categorization (grouping) for the target column. For instance, imagine a database table called "document" containing a large text column called "abstract". As we do not want the system to treat that column as a "label", we might change its column type to "text". Doing so, we optimize the way the system handles the data set.

  • Number <> Label - Useful when we want to treat numeric columns as labels. This can be used, for instance, to indicate that a given numeric column is not a numeric value that should be used in aggregation functions. Although its values are stored as numbers, we want to handle the column as a "label". Examples of such columns are an item’s code, an appraisal id, etc.

BEAN data sets do not support changing column types as it’s up to the developer to decide which are the concrete types for each column.

Filter

A data set definition may define a filter. The goal of the filter is to leave out rows the user does not consider necessary. The filter feature works on any data provider type and lets the user apply filter operations on any of the available data set columns.

DataSetDefFilter
Figure 130. Data set filter

While adding or removing filter conditions and operations, the preview table in the central area is updated with live data that reflects the current filter status.

There exist two strategies for filtering data sets, and it is important to note that choosing between them has important implications. Imagine a dashboard with some charts feeding from an expense reports data set built on top of an SQL table, and imagine we only want to retrieve the expense reports from the "London" office. You may define a data set containing the filter "office=London" and then have several charts feed from that data set. This is the recommended approach. Another option is to define a data set with no initial filter and let the individual charts specify their own filters. It is up to the user to decide on the best approach.

Depending on the case, it might be better to define the filter at the data set level so it can be reused across other modules. The decision may also impact performance, since a filtered, cached data set performs far better than many individual non-cached data set lookup requests (see the next section for more information about caching data sets).

Note that, for SQL data sets, the user can either use the filter feature introduced above or, alternatively, add custom filter criteria to the SQL statement. The first approach is more appropriate for non-technical users, who might not have the required SQL skills.

11.6.8.4. Data set editor

To edit an existing data set definition, go to the data set explorer, expand the desired data set definition and click the Edit button. This will cause a new editor panel to be opened and placed in the center of the screen, as shown in the next screenshot:

DataSetDefEditor
Figure 131. Data set definition editor
DataSetDefEditorSelector
Figure 132. Editor selector
  • Save - To validate the current changes and store the data set definition.

  • Delete - To permanently remove the data set definition from storage. Any client module referencing the data set may be affected.

  • Validate - To check that all the required parameters exist and are correct, as well as to validate that the data set can be retrieved without issues.

  • Copy - To create a brand new definition as a copy of the current one.

Data set definitions are stored in the underlying GIT repository as JSON files. Any action performed is registered in the repository logs so it is possible to audit the change log later on.

11.6.8.5. Advanced settings

In the Advanced settings tab area the user can specify caching and refresh settings. These are very important for making the most of the system’s capabilities, improving performance and application responsiveness.

DataSetDefAdvanced
Figure 133. Advanced settings
  • (1) To enable or disable the client cache and specify the maximum size (bytes).

  • (2) To enable or disable the backend cache and specify the maximum cache size (number of rows).

  • (3) To enable or disable automatic refresh for the data set and to specify the refresh period.

  • (4) To enable or disable the refresh on stale data setting.

Let’s dig into more details about the meaning of these settings.

11.6.8.6. Caching

The system provides out-of-the-box caching mechanisms for holding data sets and performing data operations using in-memory strategies. The use of these features brings a lot of advantages, like reduced network traffic, remote system load and processing times. On the other hand, it is up to the user to properly fine-tune the caching settings to avoid performance issues.

Two cache levels are supported:

  • Client level

  • Backend level

The following diagram shows how caching is involved in any data set operation:

DataSetCacheArchitecture
Figure 134. Data set caching

Any data look up call produces a resulting data set, so the use of the caching techniques determines where the data lookup calls are executed and where the resulting data set is located.

Client cache

If enabled, the data set involved in a lookup operation is pushed into the web browser so that all the components that feed from this data set do not need to perform any requests to the backend, since data set operations are resolved on the client side:

  • The data set is stored in the web browser’s memory

  • The client components feed from the data set stored in the browser

  • Data set operations (grouping, aggregations, filters and sort) are processed within the web browser, by means of a Javascript data set operation engine.

If you know beforehand that your data set will remain small, you can enable the client cache. It will reduce the number of backend requests, including the requests to the storage system. On the other hand, if you expect your data set to be quite big, disable the client cache to avoid browser issues such as slow performance or intermittent hangs.

Backend cache

Its goal is to provide a caching mechanism for data sets on the backend side.

This feature reduces the number of requests to the remote storage system by holding the data set in memory and performing group, filter and sort operations using the in-memory Drools engine.

It is useful for data sets that do not change very often and whose size is acceptable to be held and processed in memory. It can also be helpful when there are latency or connectivity issues with the remote storage. On the other hand, if your data set is going to be updated frequently, it is better to disable the backend cache and perform the requests to the remote storage on each lookup request, so the storage system is in charge of resolving the data set lookup request.

BEAN and CSV data providers rely on the backend cache by default, as in both cases the data set must always be loaded into memory in order to resolve any data lookup operation using the in-memory Drools engine. This is the reason why the backend settings are not visible in the Advanced settings tab.

11.6.8.7. Refresh

The refresh feature allows for the invalidation of any cached data when certain conditions are met.

DataSetDefRefreshSettings
Figure 135. Refresh settings
  • (1) To enable or disable the refresh feature.

  • (2) To specify the refresh interval.

  • (3) To enable or disable data set invalidation when the data is outdated.

The data set refresh policy is tightly related to data set caching, detailed in the previous section. This invalidation mechanism determines the cache life-cycle.

Depending on the nature of the data there exist three main use cases:

  • Source data changes are predictable - Imagine a database being updated every night. In that case, the suggested configuration is to use a "refresh interval = 1 day" and disable "refresh on stale data". That way, the system will invalidate the cached data set once a day. This is the right configuration when we know in advance that the data is going to change.

  • Source data changes are unpredictable - On the other hand, if we do not know whether the database is updated every day, the suggested configuration is to use a "refresh interval = 1 day" and enable "refresh on stale data". With this configuration, before invalidating any data, the system will check for modifications. If the data has been modified, the system invalidates the current stale data set so that the cache is populated with fresh data on the next data set lookup call.

  • Real-time scenarios - In real-time scenarios caching makes no sense, as the data is updated constantly. In this kind of scenario the data sent to the client has to be constantly updated, so rather than enabling the refresh settings (remember these settings affect caching, and caching is not enabled), it is up to the clients consuming the data set to decide when to refresh. When the client is a dashboard, it is just a matter of modifying the refresh settings in the Displayer Editor configuration screen and setting a proper refresh period, e.g. "refresh interval = 1 second".

11.6.9. Data Source Management

The data source management system provides the ability to define data sources for accessing external databases. These data sources can later be used by other Business Central components, such as the data sets.

11.6.9.1. Database Drivers

To communicate with the target database, a data source needs a database driver to access it. This is why the system additionally provides the ability to define database drivers for the data sources to operate. A database driver is basically a JDBC-compliant driver. We will see them in the next topics.

11.6.9.2. Data Source Authoring Page

Everything related to the authoring of data sources and drivers can be found under the Data Source Authoring page accessible from the following top level menu entry: Extensions>Data Sources, as shown in the following screenshot.

DataSourceManagementPerspective
Figure 136. Data Source Authoring Page

This page is only intended for Administrator users, since defining data sources can be considered a low level task.

11.6.9.3. Data Source Explorer

The Data Source Explorer lists the data sources and drivers currently defined in the system; at the same time, it provides the required actions for managing them.

DataSourceExplorer
Figure 137. Data Source Explorer
  • (1) Action link for creating a new data source

  • (2) List of currently available data sources

  • (3) Action link for creating a new driver

  • (4) List of currently available drivers

11.6.9.4. New Data Source Wizard

Clicking on the New Data Source action link opens the New Data Source Wizard:

NewDataSourceWizard
Figure 138. New Data Source Wizard

The following required parameters define a data source:

  • Name: A unique name for the data source definition.

  • Connection URL: A JDBC database connection url compliant with the selected driver type. This is an example of a connection url for a PostgreSQL database: jdbc:postgresql://localhost:5432/appformer.

  • User: A user name in the target database.

  • Password: The corresponding user password.

  • Driver: Selects the JDBC driver to be used for connecting to the target database. Note that the connection URL format may vary depending on the driver, and different database vendors typically provide different drivers.

  • Test connection: Once clicked, the system will show a dialog similar to the one below showing the connection test status.

TestConnectionSuccessful
Figure 139. Test Connection Status

While not required, it’s recommended to use the test connection button to check the correctness of the data source parameters prior to finishing the data source creation.
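
Conceptually, such a connection test is little more than a plain JDBC connection attempt. The following minimal sketch uses the example PostgreSQL URL shown above together with hypothetical credentials; it is not the exact code Business Central executes and assumes the driver JAR is on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;

public class TestConnectionSketch {

    public static void main(String[] args) {
        String url = "jdbc:postgresql://localhost:5432/appformer"; // example URL from above
        String user = "appformer";                                 // hypothetical credentials
        String password = "secret";

        // try-with-resources closes the connection automatically
        try (Connection connection = DriverManager.getConnection(url, user, password)) {
            // isValid(5) asks the driver to verify the connection within 5 seconds
            System.out.println("Connection test " + (connection.isValid(5) ? "successful" : "failed"));
        } catch (Exception e) {
            System.out.println("Connection test failed: " + e.getMessage());
        }
    }
}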

11.6.9.5. Data Source Editor

The Data Source Editor is opened by clicking on a data source item in the Data Source Explorer.

The following screenshot shows the Data Source Editor opened for the data source of the example above.

DataSourceEditor
Figure 140. Data Source Editor
  • Main Panel: The main panel basically lets you modify the data source configuration parameters.

  • Test connection: Tests the connection.

It is a recommended practice to test the connection prior to saving a modified data source.

11.6.9.6. Data Source Content Browser

The data source content browser is opened by clicking on the Browse Content button, and enables navigation through the database structure pointed to by the data source. Navigation is performed at three levels: Schemas level, Current schema level and Current table level.

  • Schemas level: lists all the database schemas accessible by the current data source. Which schemas are listed depends on the database access rights granted to the user configured in the connection; the same applies to the following levels.

  • Current schema level: shows all the database tables for the selected schema.

  • Current table level: shows the table content for the selected table.

The following screenshots show the information displayed at each level, for a user who performed the following navigation steps: select the "public" schema → select the "country" table.

Schema Selection:

Clicking on the Open button opens the Current schema level for the selected schema.

DataSourceContentBrowser1
Figure 141. Database schemas

Table Selection:

Clicking on the Open button opens the Current table level for the selected table.

DataSourceContentBrowser2
Figure 142. Schema tables

Table information:

The rows for the selected table are shown at this level.

DataSourceContentBrowser3
Figure 143. Table rows
11.6.9.7. External Data Sources

External data sources are those not defined in Business Central; instead, they exist in the current container. For some containers, like the Wildfly 11 or JBoss EAP 7 servers, they can still be listed in read-only mode. In these cases only the Data Source Content Browser is enabled.

ExternalDataSources
Figure 144. External Data Sources navigation
11.6.9.8. New Driver Wizard

Clicking on the New Driver action link opens the New Driver Wizard:

NewDriverWizard
Figure 145. New Driver Wizard

The following required parameters define a Driver:

  • Name: A unique name for the driver definition.

  • Driver Class Name: The fully qualified Java name of the class that implements the JDBC driver contract.

  • Group Id: The maven group id for the artifact that contains the JDBC driver implementation.

  • Artifact Id: The maven artifact id for the artifact that contains the JDBC driver implementation.

  • Version: The maven version for the artifact that contains the JDBC driver implementation.

Some commercial database drivers (like Oracle’s) are not available in the Maven central repository. You can use them by first uploading them via the Artifact Repository page and then continuing with the driver configuration in the same way as for drivers available in the Maven central repository.

11.6.9.9. Driver Editor

The Driver Editor is opened by clicking on a driver item in the Data Source Explorer.

The following screenshot shows the Driver Editor opened for the driver of the example above.

DriverEditor
Figure 146. Driver Editor
  • Main Panel: The main panel basically lets you modify the driver configuration parameters. See New Driver Wizard.

11.6.9.10. By Default Drivers

The system ships with a set of default pre-configured drivers for the most commonly used open source databases. They are aligned with the latest database versions supported by the Wildfly 11 and JBoss EAP 7 servers.

DefaultDrivers
Figure 147. By Default Drivers

The default drivers initialization can be enabled by setting the datasource.management.disableDefaultDrivers configuration property to false. It can be set by configuring the proper value in the datasource-management.properties file, or by passing the system property -Ddatasource.management.disableDefaultDrivers=false to the JVM. For more information see Advanced Settings.

11.6.9.11. Advanced Settings

The data source management system advanced settings can be found in the datasource-management.properties file in the WEB-INF/classes directory of the given Business Central distribution file.

The data source management system can work with two different internal implementations for data sources and drivers: an implementation based on the Wildfly/EAP native data sources and drivers, and a container-independent implementation. Wildfly/EAP Business Central distributions are configured by default to use the native Wildfly/EAP container implementation, and Tomcat 8 distributions are configured to use the container-independent implementation. The latter implementation can also be used for Wildfly/EAP containers.

The valid combinations are:

WildflyDataSourceProvider + WildflyDriverProvider
or
DBCPDataSourceProvider + DBCPDriverProvider

The datasource.management.wildfly.xxxxx properties are only suited for the WildflyXXXProviders.

11.6.9.12. Advanced Settings for Business Central Wildfly/EAP distributions

  • datasource.management.DataSourceProvider (default value: WildflyDataSourceProvider) - See Advanced Settings.

  • datasource.management.DriverProvider (default value: WildflyDriverProvider) - See Advanced Settings.

  • datasource.management.disableDefaultDrivers (default value: true) - Set to false to enable the default database drivers initialization.

  • datasource.management.wildfly.host (default value: localhost) - Name or IP address used for the Wildfly server management interface binding.

  • datasource.management.wildfly.port (default value: 9990) - Port used for the Wildfly server management interface binding.

  • datasource.management.wildfly.admin (no default value) - Administration user for connecting to the Wildfly server running the current Business Central. In general it is not necessary to set this value, but it might be needed when the Wildfly management interface is bound to an address other than localhost.

  • datasource.management.wildfly.password (no default value) - Administration user password for connecting to the Wildfly server running the current Business Central. In general it is not necessary to set this value, but it might be needed when the Wildfly management interface is bound to an address other than localhost.

  • datasource.management.wildfly.realm (default value: ManagementRealm) - Realm for the administration user authentication.

  • datasource.management.wildfly.profile (no default value) - The profile name used for starting the Wildfly domain, e.g. default, full, full-ha, etc. This value must only be set when Business Central is running in clustering mode and the hosting Wildfly servers are configured using domains. Do not set it if the Wildfly servers are running as standalone servers.

  • datasource.management.wildfly.serverGroup (no default value) - The server group to which the current Wildfly server instance belongs, e.g. primary-server-group. This value must only be set when Business Central is running in clustering mode and the hosting Wildfly servers are configured using domains. Do not set it if the Wildfly servers are running as standalone servers.

  • datasource.management.DefChangeHandler (no default value) - This value must only be set when Business Central is running in clustering mode. If the hosting Wildfly servers are configured using domains, the value DomainModeChangeHandler must be used; if they are running as standalone servers, the value StandaloneModeChangeHandler must be used. Clustering installations that use the DBCPXXXProviders must be configured to use the StandaloneModeChangeHandler.

The properties above can also be set by passing system properties to the JVM using the Java standard mechanism. e.g. -Ddatasource.management.wildfly.port=1234. Values configured by using this mechanism will override the values configured in the datasource-management.properties file.

11.6.9.13. Advanced Settings for Tomcat distributions
  • datasource.management.DataSourceProvider (default value: DBCPDataSourceProvider) - This is the only option available for Tomcat 8 distributions, see Advanced Settings.

  • datasource.management.DriverProvider (default value: DBCPDriverProvider) - This is the only option available for Tomcat 8 distributions, see Advanced Settings.

  • datasource.management.disableDefaultDrivers (default value: true) - Set to false to enable the default database drivers initialization.

  • datasource.management.DefChangeHandler (no default value) - This value must only be set when Business Central is running in clustering mode. Tomcat distributions only support the StandaloneModeChangeHandler value.

The properties above can also be set by passing system properties to the JVM using the Java standard mechanism. e.g. -Ddatasource.management.wildfly.port=1234. Values configured by using this mechanism will override the values configured in the datasource-management.properties file.

11.7. Security management

This section describes how administrator users can manage the application’s users, groups and permissions using an intuitive and friendly user interface in order to configure who can access the different resources and features available.

11.7.1. Basic concepts

11.7.1.1. Introduction to Business Central users, groups and roles

The Business Central security domain defines three kinds of entities: user, group and role.

Security entities are registered in the domain by consuming a realm. The realm can be either the application server’s own realm (Wildfly, EAP, Tomcat) or any other of the supported types, for example a remote Keycloak server that handles the target realm.

On the other hand, it is important to note that each realm provides, or potentially provides, its own capabilities, semantics and structure for the security domain. These kinds of differences result in inconsistencies between different environments when moving into the Business Central security domain, so there are some conventions that are important to understand: how security entities are declared and how the platform behaves behind that complexity.

Business Central integrates the security entities from an external realm as follows:

  • User

Apart from attributes and any other kind of metadata, which can differ across domains, a user represents the same kind of entity in any of the supported security environments (Wildfly, EAP, Tomcat, Keycloak, etc.), so the entity results in a user in Business Central as well.

  • Role / Group

Both roles and groups are security entities but, unlike users, their semantics, behavior and structure are not usually common across environments. As an example, consider that there are domains which do not support both, or domains where the semantics of group or role differ. Since this is domain specific, the way the application figures out whether an entity should be considered a group or a role is by checking the application’s Role Registry: an entity is considered a role if its identifier is present in the application’s Role Registry; otherwise, the entity is considered a group.

The Role Registry is an application component that provides the set of roles in the Business Central security domain. It is populated by consuming the entities (role-name) declared in the security-constraints section of the application’s deployment descriptor (web.xml). See the source file org.uberfire.ext.security.server.RolesRegistry.

This means that, depending on the concrete environment’s configuration, an entity can be a role in the security environment consumed by Business Central but end up as a group in the Business Central security domain, or vice versa. It depends solely on whether the entity’s identifier is present in the Role Registry.

A user can be assigned multiple roles and groups, but at least a single role assignment is mandatory for the user to be considered valid in the Business Central security domain. This does not mean, for instance, that the user is able to log in or to consume remote services, because that depends on the concrete role(s) assigned and how the roles and permissions are defined in the application.

11.7.1.2. Permissions

A permission is basically something the user can do within the application. Usually, an action related to a specific resource. For instance:

  • View a page

  • Save a project

  • View a repository

  • Delete a dashboard

A permission can be granted or denied and it can be global or resource specific. For instance:

  • Global: “Create new pages”

  • Specific: “View the home page”

As you can see, a permission is a resource + action pair. In the concrete case of a page we have: read, update, delete and create as the available actions. That means that there are four possible permissions that could be granted for pages.

Permissions do not necessarily need to be tied to a resource. Sometimes it is also necessary to protect access to specific features, like for instance "generate a sales report". That means permissions can be used not only to protect access to resources but also to protect custom features within the application.
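
To make the resource + action idea concrete, the following minimal sketch models a permission as a plain value object; the types and names are purely illustrative and are not part of the Business Central API:

// A permission is just a (resource, action) pair with a granted/denied outcome.
public class PermissionSketch {

    enum Action { READ, CREATE, UPDATE, DELETE }

    record Permission(String resource, Action action, boolean granted) { }

    public static void main(String[] args) {
        // Global permission: applies to every page
        Permission createPages = new Permission("perspective", Action.CREATE, true);
        // Resource-specific permission: applies to a single page
        Permission readHome = new Permission("perspective.Home", Action.READ, true);
        // A feature permission not tied to any resource instance
        Permission salesReport = new Permission("feature.generate-sales-report", Action.READ, true);

        System.out.println(createPages + " / " + readHome + " / " + salesReport);
    }
}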

11.7.1.3. Authorization policy

The set of permissions assigned to every role and/or group is called the authorization (or security) policy. Every application contains a single security policy which is used every time the system checks a permission.

The authorization policy file is stored in a file called WEB-INF/classes/security-policy.properties under the application’s WAR structure.

If no policy is defined then the authorization management features are disabled and the application behaves as if all the resources & features were granted by default.

Here is an example of a security policy file:

# Role "admin"
role.admin.permission.perspective.read=true
role.admin.permission.perspective.read.Dashboard=false

# Role "user"
role.user.permission.perspective.read=false
role.user.permission.perspective.read.Home=true
role.user.permission.perspective.read.Dashboard=true

Every entry defines a single permission which is assigned to a role/group. On application start up, the policy file is loaded and stored into memory.
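
Since the policy is a plain Java properties file, the following minimal sketch shows how such an entry could be read; it is illustrative only and not the actual Business Central loading code:

import java.io.InputStream;
import java.util.Properties;

public class PolicyLoadSketch {

    public static void main(String[] args) throws Exception {
        Properties policy = new Properties();
        // The policy file lives on the webapp classpath (WEB-INF/classes)
        try (InputStream in = PolicyLoadSketch.class
                .getResourceAsStream("/security-policy.properties")) {
            if (in != null) {
                policy.load(in);
            }
        }
        // Each key encodes role/group, permission and (optionally) a resource id
        String value = policy.getProperty("role.admin.permission.perspective.read", "false");
        System.out.println("admin can read perspectives: " + value);
    }
}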

11.7.1.4. Security provider

A security environment is usually provided by the use of a realm. Realms are used to restrict access to the application’s different resources, so realms contain information about users, groups, roles, permissions and any other related data.

In most typical scenarios the application’s security is delegated to the container’s security mechanism, which in turn consumes a given realm. It is important to consider that several realm implementations exist; for example, Wildfly provides a realm based on the application-users.properties/application-roles.properties files, and Tomcat provides a realm based on the tomcat-users.xml file. So there is no single security realm to rely on; it can be different in each installation.

Due to the potential different security environments that have to be supported, the security module provides a well defined API with some default built-in security providers. A security provider is the formal name given to a concrete user and group management service implementation for a given realm.

The user & group management features available will depend on the security provider configured. If the built-in providers do not fit with the application’s security realm, it is easy to build and register your own provider.

11.7.2. Installation and setup

At the time of this writing, the application provides two pre-installed security providers:

  • Wildfly 11 / EAP 7 distribution - Both distributions use the Wildfly security provider configured for the use of the default realm files application-users.properties and application-roles.properties

  • Tomcat distribution - It uses the Tomcat security provider configured for the use of the default realm file tomcat-users.xml

Please read each provider’s documentation in order to apply the concrete settings for the target deployment environment.

On the other hand, whether you use a custom security provider or one of the available ones, consider the following installation options:

  • Enable the security management feature on an existing WAR distribution

  • Setup and installation in an existing or new project

NOTE: If no security provider is installed, there will be no available user interface for managing the security realm. Once a security provider is installed and setup, the user and group management features are automatically enabled in the security management UI (see the Usage section below).

11.7.2.1. Enabling user & group management

Given an existing WAR distribution, follow these steps in order to install and enable the user & group management features:

  • Ensure the following libraries are present on WEB-INF/lib:

    • WEB-INF/lib/uberfire-security-management-api-?.jar

    • WEB-INF/lib/uberfire-security-management-backend-?.jar

  • Copy the security provider library to WEB-INF/lib:

    • Eg: WEB-INF/lib/uberfire-security-management-wildfly-?.jar

    • If the provider requires additional libraries, copy them as well (read each provider’s documentation for more information).

  • Replace the whole content of the WEB-INF/classes/security-management.properties file, or if not present, create it. The settings present on this file depend on the concrete implementation used. Please read each provider’s documentation for more information.

  • If deploying on Wildfly or EAP, check if the WEB-INF/jboss-deployment-structure.xml requires any update (read each provider’s documentation for more information).

11.7.2.2. Disabling user & group management

The user & groups management features can be disabled, and thus no services or user interface will be available, by means of either:

  • Uninstalling the security provider from the application

    When no concrete security provider is installed, the user and group management features will be disabled and no services or user interface will be displayed to the user. This is the case, for instance, in WebLogic and WebSphere installations, as there is no security provider implementation available at the time of this writing.

  • Removing or commenting the security management configuration file

    Removing or commenting all the lines in the configuration file located at WEB-INF/classes/security-management.properties is another way to disable the user and group management features.

11.7.2.3. Upgrading an existing installation

In versions prior to 7, the only way to grant access to resources like Organizational Units, Repositories or Projects was to indicate which roles were able to access a given instance. Those roles were stored in GIT as part of the instance persistent status. The CLI was the tool used to add/remove roles:

  • remove-role-repo: remove role(s) from repository

  • add-role-org-unit: add role(s) to organizational unit

  • remove-role-org-unit: remove role(s) from organizational unit

  • add-role-project: add role(s) to project

  • remove-role-project: remove role(s) from project

As of version 7, the authorization policy is based on permissions. That means it is no longer required to keep a list of roles per resource instance. What is required is to define proper permission entries in the active authorization policy using the security management UI (see the Usage section below).

The commands above are no longer required, so they have been removed. Basically, what those commands did was set which roles were able to read a specific item.

In order to guarantee backward compatibility with versions prior to 7, an automatic migration tool is bundled within the application, which converts the list of roles assigned to any organizational unit, repository or project into read permission entries of the security policy.

This tool is executed when the application starts up for the first time, during the security policy deployment. So existing customers do not have to worry about it, as they will keep their security settings.

11.7.3. Usage

The Security Management page is available under the Home section in the top menu bar.

SecurityManagementMenuEntry
Figure 148. Link to the Security Management page

The next screenshot shows how this new page looks:

SecurityManagementHome
Figure 149. Security Management Home

This page supports:

  • List all the roles, groups and users available

  • Create & delete users and groups

  • Edit users, assign roles or groups, and change user properties

  • Edit both roles & groups security settings, which include:

    • The home page a user will be directed to after login

    • The permissions granted or denied to the different Business Central resources and features available

All of the above together provides a complete users and groups management subsystem as well as a permission configuration UI for protecting access to specific resources or features.

The next sections provide a deep insight into all these features.

The user and group management related features can be entirely disabled. See the previous section Disabling user & group management. If that’s the case then both the Groups and Users tabs will remain hidden from the user.
11.7.3.1. User management

By selecting the Users tab in the left sidebar, the application shows all the users present by default on the application’s security realm:

SecurityManagementUsersExplorer
  • Searching for users

In addition to listing all the users, search is also allowed:

When specifying the search pattern in the search box, the users listed will be reduced to only those that match the search pattern.

SecurityManagementUsersSearch

Search patterns depend on the concrete security provider being used by the application. Please read each provider’s documentation for more information.

  • Creating new users

    By clicking on the "New user +" anchor, a form is displayed on the screen’s right.

    SecurityManagementNewUserForm

This is a wizard-like interface where the application asks for the new user’s name and password, as well as which roles/groups to assign.

  • Editing a user

After clicking on a user in the left sidebar, the user editor is opened on the screen’s right.

For instance, the details screen for the admin user when using the Wildfly security provider looks like the following screenshot:

SecurityManagementViewUser

The same screen, when using the Keycloak security provider, looks as follows:

SecurityManagementViewUserKC

Note that when using the Keycloak provider, a new user attributes section is displayed, but it’s not present when using the Wildfly provider. This is due to the fact that the information and actions available always depend on each provider’s capabilities as explained in the Security provider capabilities section below.

The following information is handled in the user’s details screen:

  • The user name

  • The user’s attributes

  • The assigned groups

  • The assigned roles

  • The permissions granted or denied

In order to update or delete an existing user, click the Edit button present near to the username in the user editor screen:

SecurityManagementEditUser

Once the editor is in edit mode, different operations can be performed (provided the security provider supports them), for instance, modifying the set of roles and groups assigned to the user or changing the user’s password.

  • Permissions summary

The Permissions tab shows a summary of all the permissions assigned to this particular user. This is a very helpful view as it allows administrator users to verify if a target user has the right permission levels according to the security settings of its roles and groups.

SecurityManagementUserPermissions

Further details about how to assign permissions to roles and groups are in the Security Settings Editor section below.

  • Updating the user’s attributes

    User attributes can be added or deleted using the actions available in the attributes table:

    SecurityManagementUserAttributes
  • Updating assigned groups

    From the Groups tab, a group selection popup is presented when clicking on the Add to groups button:

    SecurityManagementGroupsSelection

    This popup screen allows the user to search and select or deselect the groups assigned to the user.

  • Updating assigned roles

    From the Roles tab, a role selection popup is presented when clicking on Add to roles button:

    SecurityManagementRolesSelection

    This popup screen allows the user to search and select or deselect the roles assigned to the user.

  • Changing the user’s password

    A change password popup screen is presented when clicking on the Change password button:

    SecurityManagementChangePassword
  • Deleting users

    The user currently being edited can be deleted from the realm by clicking on the Delete button.

SecurityManagementDeleteUser
Security provider capabilities

Each security realm can provide support for different operations. For example, consider the use of a Wildfly realm based on properties files. The contents of the application-users.properties file look like this:

admin=207b6e0cc556d7084b5e2db7d822555c
salaboy=d4af256e7007fea2e581d539e05edd1b
maciej=3c8609f5e0c908a8c361ca633ed23844
kris=0bfd0f47d4817f2557c91cbab38bb92d
katy=fd37b5d0b82ce027bfad677a54fbccee
john=afda4373c6021f3f5841cd6c0a027244
jack=984ba30e11dda7b9ed86ba7b73d01481
director=6b7f87a92b62bedd0a5a94c98bd83e21
user=c5568adea472163dfc00c19c6348a665
guest=b5d048a237bfd2874b6928e1f37ee15e
kiewb=78541b7b451d8012223f29ba5141bcc2
kieserver=16c6511893651c9b4b57e0c027a96075

Note that it is based on key-value pairs where the key is the username and the value is a hash derived from the user’s password. So a user is represented just by its key, the username; it does not have a first name, an address or any other meta information.
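
The hash itself follows the scheme used by the Wildfly add-user utility for properties-file-based realms, i.e. HEX(MD5(username ':' realm ':' password)). The following minimal sketch reproduces that computation; the realm name (ApplicationRealm) and the credentials are example assumptions, so verify them against your own server configuration:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PropertiesRealmHashSketch {

    public static void main(String[] args) throws Exception {
        String username = "admin";
        String realm = "ApplicationRealm";   // assumption: the realm the properties file belongs to
        String password = "admin";           // example password

        // MD5 over "username:realm:password", as documented in the properties file header
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest((username + ":" + realm + ":" + password).getBytes(StandardCharsets.UTF_8));

        // Hex-encode the digest; this is the value stored next to the username
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(username + "=" + hex);
    }
}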

On the other hand, consider the use of a realm provided by a Keycloak server. The user information is composed of more metadata, such as the surname, address, etc., as in the following image:

SecurityManagementViewUserKC

So the different services and client side components from the User and Group Management API are based on capabilities. Capabilities are used to expose or restrict the available functionality provided by the different services and client side components. Examples of capabilities are:

  • Create a user

  • Update a user

  • Delete a user

  • Update user’s attributes

  • Create a group

  • Update a group

  • Assign groups to a user

  • Assign roles to a user

Each security provider must specify the set of capabilities it supports. From the previous examples, note that the Wildfly security provider does not support the attributes management capability - the user is composed only of the user name. On the other hand, the Keycloak provider does support this capability.

The different views and user interface components rely on the capabilities supported by each provider, so if a capability is not supported by the provider in use, the UI does not provide the views for the management of that capability. As an example, if a concrete provider does not support deleting users, the delete user button will not be available in the user interface.

Please take a look at the concrete service provider documentation to check all the supported capabilities for each one, the default ones can be found here.

11.7.3.2. Group management

By selecting the Groups tab in the left sidebar, the application shows all the groups present by default on the application’s security realm:

SecurityManagementGroupsExplorer
  • Searching for groups

In addition to listing all the groups, search is also allowed:

When specifying the search pattern in the search box, the groups listed will be reduced to only those that match the search pattern.

SecurityManagementGroupsSearch

Search patterns depend on the concrete security provider being used by the application. Please read each provider’s documentation for more information.

  • Creating new groups

    By clicking on the "New group +" anchor, a new screen will be presented on the center panel to perform a new group creation.

SecurityManagementNewGroup

After typing a name and clicking Save, the next step is to assign users to it:

SecurityManagementNewGroupUserSelection

Clicking on the "Add selected users" button finishes the group creation.

  • Modifying a group

After clicking on a group in the left sidebar, the security settings editor for the selected group instance is opened on the screen’s right. Further details at the Security Settings Editor section.

  • Deleting groups

To delete an existing group just click the Delete button.

11.7.3.3. Role management

By selecting the Roles tab in the left sidebar, the application shows all the application roles:

SecurityManagementRolesExplorer

Unlike users and groups, roles cannot be created or deleted, as they come from the application’s web.xml descriptor. After clicking on a role in the left sidebar, the role editor is opened on the screen’s right, which is exactly the same security settings editor used for groups. Further details at the Security Settings Editor section.

SecurityManagementEditRole

That means both role-based and group-based permissions can be defined. The main differences between roles and groups are:

  • Roles are an application defined resource. They are defined as <security-role> entries in the application’s web.xml descriptor.

  • Groups are dynamic and can be defined at runtime. The installed security provider determines where groups instances are stored.

They can be used together without any trouble. Groups are recommended though, as they are more flexible than roles.

  • Searching for roles

In addition to listing all the roles, search is also allowed:

When specifying the search pattern in the search box, the roles listed will be reduced to only those that match the search pattern.

SecurityManagementRolesSearch

Search patterns depend on the concrete security provider being used by the application. Please read each provider’s documentation for more information.

11.7.4. Security Settings Editor

This editor is used to set several security settings for both roles and groups.

SecurityManagementSecuritySettsEditor


11.7.4.1. Home page

This is the page where the user is directed after login. This makes it possible to have different home pages for different users, since users can be assigned to different roles or groups.

11.7.4.2. Priority

It is used to determine which settings (home page, permissions, …) take precedence for users with more than one role or group assigned.

Without this setting, it would not be possible to determine which role or group should take precedence. For instance, an administrative role has a higher priority than a non-administrative one. For users granted both administrative and non-administrative roles, administrative privileges always win, provided the administrative role’s priority is greater than the other’s.

11.7.4.3. Permissions

Currently, Business Central supports the following permission categories.

  • Business Central: General Business Central permissions, not tied to any specific resource type.

  • Pages: If access to a page is denied, it will not be shown in any of the application menus. The Update, Delete and Create permissions change the behaviour of the page management plugin editor.

  • Organizational Units: Sets who can Create, Update or Delete organizational units from the Organizational Unit section on the Administration page. Also sets which organizational units are visible in the Project Explorer on the Project Authoring page.

  • Repositories: Sets who can Create, Update or Delete repositories from the Repositories section on the Administration page. Also sets which repositories are visible in the Project Explorer on the Project Authoring page.

  • Projects: In the Project Authoring page, sets who can Create, Update, Delete or Build projects from the Project Editor screen as well as what projects are visible in the Project Explorer.

For pages, organizational units, repositories and projects it is possible to define global permissions and then add single-instance exceptions. For instance, Read access can be granted to all pages while access is denied to just one individual page. This is called the grant all deny a few strategy.

SecurityManagementPerspectiveDenied

The opposite, deny all grant a few strategy is also supported:

SecurityManagementPerspectiveGranted
In the example above, the Update and Delete permissions are disabled, as it does not make sense to define such permissions if the user is not even able to read pages.

11.7.5. Security Policy Storage

The security policy is stored in the Business Central VFS, more specifically in a GIT repository called “security”. The ACL table is stored in a file called “security-policy.properties” under the “authz” directory. The following is an example of the entries this file contains:

role.admin.home=HomePage
role.admin.priority=0
role.admin.permission.perspective.read=true
role.admin.permission.perspective.create=true
role.admin.permission.perspective.delete=true
role.admin.permission.perspective.update=true

Every time the ACL is modified from the security settings UI the changes are stored into the GIT repo.

Initially, when the application is deployed for the first time, there is no security policy stored in GIT. However, the application might need to set up a default policy with the different access profiles for each of the application roles.

In order to support default policies the system allows for declaring a security policy as part of the webapp’s content. This can be done just by placing a security-policy.properties file under the webapp’s resource classpath (the WEB-INF/classes directory inside the WAR archive is a valid one). On app start-up the following steps are executed:

  • Check if an active policy is already stored in GIT

  • If not, then check if a policy has been defined under the webapp’s classpath

  • If found, such policy is stored under GIT

The above is an auto-deploy mechanism used in Business Central to set up its default security policy.

One slight variation of the deployment process is the ability to split the “security-policy.properties” file into smaller pieces so that it is possible, for example, to define one file per role. The split files must start with the “security-module-” prefix, for instance: “security-module-admin.properties”. The deployment mechanism will read and deploy both the "security-policy.properties" file and all the optional “security-module-?.properties” files found on the classpath.

Notice that, even when using the split approach, the “security-policy.properties” file must always be present, as it is used as a marker file by the security subsystem to locate the other policy files. This split mechanism allows for a better organization of the whole security policy.
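For illustration only, a split default policy placed under the webapp’s classpath might look like the following sketch. The file names and entries below are hypothetical examples that reuse the property format shown earlier; only security-policy.properties is strictly required, since it acts as the marker file.

# WEB-INF/classes/security-policy.properties  (marker file; may be empty or hold shared entries)

# WEB-INF/classes/security-module-admin.properties
role.admin.home=HomePage
role.admin.priority=0
role.admin.permission.perspective.read=true
role.admin.permission.perspective.create=true
role.admin.permission.perspective.delete=true
role.admin.permission.perspective.update=true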

11.8. SSH keystore

This section provides an overview of the Business Central SSH keystore and includes a guide for platform users. It explains how platform users can register and use their SSH public keys through the Business Central SSH keystore.

11.8.1. Introduction

Business Central includes an SSH keystore service to provide proper SSH authentication for users.

It provides a configurable default SSH keystore, extensible APIs to allow custom implementations, support for multiple SSH public keys formats, and a new UI available on the Admin page to enable users to register their SSH public keys.

11.8.1.1. The default SSH keystore

The default SSH keystore included with Business Central provides a file-based storage mechanism to store users' SSH public keys.

By default, it uses the Business Central .security folder as its root path. It is possible to use a custom storage path by setting the appformer.ssh.keys.storage.folder system property to point to a different folder.
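For example, on a WildFly/EAP installation the property can be passed as a JVM system property when starting the server (the folder path below is just an illustrative value):

./standalone.sh -Dappformer.ssh.keys.storage.folder=/opt/business-central/ssh-keys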

The SSH public keys are stored in the {securityFolderPath}/pkeys/{userName}/ folder structure.

Each SSH public key consists of a pair of files in the storage folder:

  • {keyId}.pub: a file containing the SSH public key content. The file name determines the logical key ID on the system, so do not modify the file name during runtime. For example:

    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDmak4Wu23RZ6XmN94bOsqecZxuTa4RRhhQmHmTZjMB7HM57/90u/B/gB/GhsPEu1nAXL0npY56tT/MPQ8vRm2C2W9A7CzN5+z5yyL3W01YZy3kzslk77CjULjfhrcfQSL3b2sPG5jv5E5/nyC/swSytucwT/PE7aXTS9H6cHIKUdYPzIt94SHoBxWRIK7PJi9d+eLB+hmDzvbVa1ezu5a8yu2kcHi6NxxfI5iRj2rsceDTp0imC1jMoC6ZDfBvZSxL9FXTMwFdNnmTlJveBtv9nAbnAvIWlilS0VOkdj1s3GxBxeZYAcKbcsK9sJzusptk5dxGsG2Z8vInaglN6OaOQ7b7tcomzCYYwviGQ9gRX8sGsVrw39gsDIGYP2tA4bRr7ecHnlNg1b0HCchA5+QCDk4Hbz1UrnHmPA2Lg9c3WGm2qedvQdVJXuS3mlwYOqL40aXPs6890PvFJUlpiVSznF50djPnwsMxJZEf1HdTXgZD1Bh54ogZf7czyUNfkNkE69yJDbTHjpQd0cKUQnu9tVxqmBzhX31yF4VcsMeADcf2Z8wlA3n4LZnC/GwonYlq5+G93zJpFOkPhme8c2XuPuCXF795lsxyJ8SB/AlwPJAhEtm0y0s0l1l4eWqxsDxkBOgN+ivU0czrVMssHJEJb4o0FLf7iHhOW56/iMdD9w== userName
  • .{keyId}.pub.meta: a file containing the key metadata in JSON format. If a key has no metadata, a new metadata file is dynamically generated. For example:

    {
        "name":"Key",
        "creationDate":"Oct 10, 2018 10:10:50 PM",
        "lastTimeUsed":"Oct 11, 2018 12:11:23 PM"
    }
11.8.1.2. Using a custom SSH keystore

It is possible to extend and customize the platform default SSH keystore to meet more specific requirements.

Use the system property appformer.ssh.keystore to specify the Java class name of the service to use. If the property does not exist or it contains a wrong value, the default SSH keystore is loaded.

To create a custom implementation of the SSH keystore, your Java class must implement org.uberfire.ssh.service.backend.keystore.SSHKeyStore, defined in the uberfire-ssh-api module.
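As a sketch, assuming a custom implementation packaged with the application, the keystore could then be selected at start-up like this (the class name below is a hypothetical example):

./standalone.sh -Dappformer.ssh.keystore=com.example.security.MyCustomSSHKeyStore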

11.8.2. Using the SSH keystore

This section describes how to use the SSH keystore to register your own keys and how to use them.

11.8.2.1. The SSH keystore UI

The SSH keystore provides an intuitive UI to enable users to manage their SSH public keys on the system. It is accessible from the Admin page by using the SSH Keys menu option.

ssh keystore menu
Figure 150. SSH Keys Menu Option on Admin Page

After you click the SSH Keys menu option, the SSH Keys Editor opens. The editor displays a table showing the user’s SSH public keys and provides access to the main action buttons.

  • Add SSH Key: Used to add an SSH public key for the user.

    ssh keystore editor new
    Figure 151. Adding new SSH public key
  • Delete SSH Key: Used to remove an existing SSH public key.

    ssh keystore editor delete
    Figure 152. Deleting an SSH public key
ssh keystore editor
Figure 153. SSH keystore UI
11.8.2.2. Adding SSH keys

This section explains step by step how to add an SSH public key to the SSH keystore.

Creating the SSH key on your computer
  1. Open a terminal on your computer

  2. Run the ssh-keygen command to create the key:

    ssh-keygen -t rsa -b 4096 -C "<your_user_login_here>"

    The SSH key formats supported by the keystore are 'ssh-rsa', 'ssh-dss', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384' and 'ecdsa-sha2-nistp521'.

  3. When prompted, press Enter and accept the default key file location.

    Enter a file in which to save the key (/home/<your_login_here>/.ssh/id_rsa): [Press enter]
  4. When prompted, enter the pass phrase that you want to use.

    Enter passphrase (empty for no passphrase): [Type a passphrase]
    Enter same passphrase again: [Type passphrase again]
  5. Start the ssh-agent:

    eval "$(ssh-agent -s)"
    Agent pid <any-number-here>
  6. Add the new SSH private key to the ssh-agent. If you used a different key name, replace id_rsa with your key name

    ssh-add ~/.ssh/id_rsa
Registering your SSH public key with the SSH keystore
  1. In Business Central, go to the gear icon next to your login to open the Admin page.

    ssh keystore editor gear
    Figure 154. Accessing the Admin Page
  2. Open the SSH keystore UI by clicking the SSH Keys menu option.

    ssh keystore menu
    Figure 155. SSH Keys Menu Option on Admin Page
    ssh keystore editor empty
    Figure 156. SSH Keystore UI without keys
  3. Copy the contents of your SSH public key to the clipboard. Use the cat command to display your key content. If you used a different key name, replace id_rsa with your key name.

    cat ~/.ssh/id_rsa.pub
  4. In the SSH keystore UI press the Add SSH Key button to open the New SSH public key form. Specify a name, copy the key content into the key field and click Add SSH Key to register the key.

    ssh keystore editor new
    Figure 157. Adding new SSH public key
    • The Name field cannot be empty; it defines a meaningful name for the user to identify the key in the SSH public keys table.

    • The Key field must contain a valid SSH public key: it cannot be empty and the key format must be supported by the platform.

11.9. Embedding Business Central in Your Application

Apart from the individual perspectives (such as the Library or Content Management), Business Central provides a number of editors used for designing and managing assets in different formats. Within Business Central, each asset type has a corresponding editor.

Business Central provides the possibility to embed the perspectives and editors in the user’s application using the standalone mode. Without actually switching to Business Central, it is possible to display perspectives and edit various assets, such as rules, processes, or decision tables, in separate applications.

To embed a part of Business Central in an application, Business Central must be deployed and running on a web server or an application server. Then, in your application, include an HTML inline frame with the proper HTTP query parameters as described in the following table.

Table 24. HTTP query parameters for the standalone mode
Parameter Values Description

standalone

none

This parameter must be included in each URL of a perspective or an editor that will be used in the standalone mode.

perspective

LibraryPerspective, ContentManagerPerspective, or any custom-created page

Used for specifying the perspective to be displayed.

header

UberfireBreadcrumbsContainer

Displays the breadcrumbs at the top of the page that can be used for navigating to the lists of spaces and projects within the Library. This parameter can be used only if perspective=LibraryPerspective is specified.

path

default://master@MySpace/Shop/src/main/java/com/Product.java

Specifies the path to the asset to be opened in a corresponding editor. The path must be specified in the format default://BRANCH@SPACE/PROJECT/PATH_TO_ASSET/ASSET_NAME.FILE_EXTENSION.

Table 25. Usage examples
URL Description

http://localhost:8080/business-central/kie-wb.jsp?standalone&perspective=LibraryPerspective

Opens the Library where it is possible to select a project to be managed.

http://localhost:8080/business-central/kie-wb.jsp?standalone&perspective=LibraryPerspective&header=UberfireBreadcrumbsContainer

Opens the Library with the list of projects. The header parameter displays the breadcrumbs at the top of the page, which allow the user to switch between the spaces as well as the projects.

http://localhost:8080/business-central/kie-wb.jsp?standalone&path=default://master@MySpace/Shop/src/main/java/com/Product.java

Opens the editor of the specified asset.

http://localhost:8080/business-central/kie-wb.jsp?standalone&perspective=ContentManagerPerspective

Opens the Content Management perspective, where it is possible to create and manage custom pages.

http://localhost:8080/business-central/kie-wb.jsp?standalone&perspective=MyCustomPage

Opens the specified custom page that has been created before using the Content Management perspective. The value of the perspective parameter must correspond to the actual name of the page.
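As a minimal illustration of the embedding itself, a host application could include one of the standalone URLs above in an HTML inline frame like the following sketch (the iframe size and any surrounding markup are arbitrary choices):

<iframe src="http://localhost:8080/business-central/kie-wb.jsp?standalone&perspective=LibraryPerspective"
        width="100%" height="800" style="border:none"></iframe>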

11.10. Execution Server Management UI

The Execution Server Management UI allows users to create and modify Server Templates and Containers; it also allows users to manage Remote Servers. This screen is available via the Deploy → Rule Deployments menu.

NewExecServerUI
Figure 158. Execution Server Management

The management UI is only available for KIE Managed Servers.

11.10.1. Server Templates

Server templates are used to define a common configuration that can be used for multiple servers, hence the name: Template.

Server Templates can be created directly from the management UI, or one is created automatically when a server connects to the jBPM controller and there is no template definition for that remote server. Server templates may have one or more capabilities; such capabilities cannot be modified, so if you need to modify the capabilities you will have to create a new template. Here is the list of current capabilities:

  • Rule (Drools)

  • Process (jBPM)

  • Planning (Optaplanner)

For the Planner capability it is mandatory to also enable the Rule capability.

In order to create a new Server Template, click the New Server Template button and follow the wizard. It is also possible to create a container during the wizard, but for now let’s limit ourselves to just the template.

NewServerTemplateWizard
Figure 159. New Server Template Wizard

Once created, the new Template is listed on the left-hand side, with the new Server Template highlighted. On the right-hand side you get the second-level navigation that lists the Containers and Remote Servers related to the selected Server Template.

ServerTemplates
Figure 160. Server Templates

At the top of the navigation it is also possible to delete the current Server Template or create a copy of it.

ServerTemplateActions
Figure 161. Server Template Actions

11.10.2. Container

A Container is a KIE Container configuration of the Server Template. Click the Add Container button to create a new container for the current Server Template.

The search area can help users find the specific KJAR they are looking for.

NewContainerWizard
Figure 162. New Container Wizard

For Server Templates that have the Process capability enabled, the wizard has a second, optional step where users can configure some process-related behaviors.

ProcessConfigNewContainerWizard
Figure 163. Process Configuration

Kie Base Name determines which Kie Base of the deployed artifact will be used.

Kie Session Name determines which Kie Session of the selected Kie Base will be used.

Please note that the configuration on this tab takes effect only if the deployed project contains business processes. It is not enough for the server template to have the process extension enabled.

Once created, the new Container is displayed in the containers list just above the list of remote servers. A newly created container is Stopped by default, which is the only state that allows users to remove it.

NewContainer
Figure 164. Container

A Container has the following tabs available for management and/or configuration:

  • Status

  • Version Configuration

  • Process Configuration

The Status tab lists all the Remote Servers that are running the active Container. Each Remote Server is rendered as a Card, which displays its status and endpoint.

Only started Containers are deployed to remote servers.

ContainerStatus
Figure 165. Status Container

For containers that do not have the process capability, the Version Configuration tab allows users to change the current version of the Container. Users can upgrade manually to a specific version using the "Upgrade" button, or enable/disable the Scanner. It is also possible to execute a Scan Now operation that scans for new versions only once.

To redeploy SNAPSHOT kjars with your latest changes, all existing containers with that version must first be removed. Executing 'build and deploy' will then create a container with the latest SNAPSHOT kjar. However, this is not possible for release versions. Following Maven release conventions, if the GAV of a kjar is anything but SNAPSHOT, the GAV will need to be updated to the newer release version and deployed to its own container. The new release version can also be used to upgrade an existing container as described previously, provided the container does not have the process capability.

ContainerVersionConfiguration
Figure 166. Version Configuration

Process Configuration is the same form that is displayed during the New Container Wizard for Server Templates that have the Process capability. If the Server Template does not have this capability, the action buttons are disabled.

ContainerProcessConfiguration
Figure 167. Process Configuration

11.10.3. Remote Server

A Remote Server is a running Managed KIE Server instance that has a jBPM controller configured.

By default, Business Central comes with a jBPM controller embedded.

The list of Remote Servers is displayed just under the list of Containers. Once a server is selected, the screen reveals the Remote Server details and a list of cards, each representing a running Container.

RemoteServers
Figure 168. Remote Servers

11.11. Experimental Features Framework

This section describes the Experimental Features Framework functionality and how to use it.

11.11.1. Introduction

The Experimental Features Framework is a platform service that allows developers to deliver features which are not yet part of Business Central (for example, ongoing developments, tech previews, POCs…​) and expose these features to users to let them have a preview of what is coming in the future.

The Experimental Features Framework provides the following features:

  • New Editor UI, accessible on the Admin page, where users can enable and disable Experimental Features.

  • Support for user-level features (stored as system preferences for each user) and global features (only available to admin users, in the editor)

  • Ability to dynamically handle the visibility for different Experimental Resources on Business Central.

    • Business Central Perspectives

    • Business Central Screens

    • Business Central Editors

    • Library Asset Types

    • Page Builder Layout Components

11.11.2. Types of Experimental Features

There are two types of Experimental Features, each with different scopes:

  • User: This type of feature can be enabled or disabled for any platform user, making the feature available for a single user without affecting other users, storing the feature state as a user preference.

  • Global: This type of feature is global for all users. Only users with administrator permissions can enable them.

11.11.3. Experimental Features Editor

The Experimental Features Framework provides an editor where users can configure the features that they want to use. To open the editor, navigate to the Admin page and click the Experimental menu option.

admin page experimental menu option
Figure 169. Experimental Features Menu Option

The Experimental menu option only appears if the Experimental Features Framework is enabled and there are Experimental Features installed on Business Central.

admin page experimental editor screen
Figure 170. Experimental Features Editor

The features and groups displayed on this documentation are examples.

The Experimental Features Editor displays all the Experimental Features installed on Business Central. For a better user experience these features are organized in collapsible groups. Click a label to expand or collapse a group.

admin page experimental editor feature group
Figure 171. Experimental Features Group

Each row inside of the group corresponds to an experimental feature. Click the toggle button to enable or disable the feature.

You can also enable or disable all of a group’s features by clicking the group’s "Enable all" / "Disable all" button.

admin page experimental editor feature group enable all
Figure 172. Enable all group features

11.11.4. Enabling the Experimental Features Framework

By default, the Experimental Features Framework is disabled. You can enable it by starting Business Central and setting the system property appformer.experimental.features=true.

Any Experimental Feature present on Business Central will not be accessible to users while the Experimental Features Framework is disabled.
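For example, on a WildFly/EAP installation the framework can be switched on by passing the property when starting Business Central:

./standalone.sh -Dappformer.experimental.features=true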

11.12. Business Central profiles

Starting with 7.15.0.Final, KIE Workbench is renamed to Business Central. Business Central contains all KIE Workbench features. To select between the sets of available features, the concept of profiles is introduced. This chapter describes profiles and shows how you can configure them in Business Central.

11.12.1. Introduction

When you start the Business Central application, all the features are available to you by default. To configure the set of features, you can select from a list of profiles.

A profile is a set of features which contains:

  • Menus

  • Resources that it can handle

  • Specific home page

Currently, we have two profiles:

  • Full: All workbench features will be enabled (default).

  • Planner and Rules: Only Optaplanner and Drools features will be available.

11.12.2. Selecting a profile

Profiles can be selected on the Administration page by selecting the Profiles preference.

Only admin users have access to the Profiles preference.

profiles menu option
Figure 173. Profile Menu Option

It is also possible to select a profile using the system property org.kie.workbench.profile, which can have the values FULL (for the Full profile) and PLANNER_AND_RULES (for the Planner and Rules profile).
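For instance, to start Business Central with only the Planner and Rules features enabled, the property can be passed at start-up (a sketch assuming a WildFly/EAP start script):

./standalone.sh -Dorg.kie.workbench.profile=PLANNER_AND_RULES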

12. Business Central integration

12.1. Knowledge Store REST API for Business Central spaces and projects

jBPM provides a Knowledge Store REST API that you can use to interact with your projects and spaces in jBPM without using the Business Central user interface. The Knowledge Store is the artifact repository for assets in jBPM. This API support enables you to facilitate and automate maintenance of Business Central projects and spaces.

With the Knowledge Store REST API, you can perform the following actions:

  • Retrieve information about all projects and spaces

  • Create, update, or delete projects and spaces

  • Build, deploy, and test projects

  • Retrieve information about previous Knowledge Store REST API requests, or jobs

Knowledge Store REST API requests require the following components:

Authentication

The Knowledge Store REST API requires HTTP Basic authentication or token-based authentication for the user role rest-all. To view configured user roles for your jBPM distribution, navigate to ~/$SERVER_HOME/standalone/configuration/application-roles.properties and ~/application-users.properties.

To add a user with the rest-all role, navigate to ~/$SERVER_HOME/bin and run the following command:

$ ./add-user.sh -a --user <USERNAME> --password <PASSWORD> --role rest-all

For more information about user roles and jBPM installation options, see Installing the KIE Server.

HTTP headers

The Knowledge Store REST API requires the following HTTP headers for API requests:

  • Accept: Data format accepted by your requesting client:

    • application/json (JSON)

  • Content-Type: Data format of your POST or PUT API request data:

    • application/json (JSON)

HTTP methods

The Knowledge Store REST API supports the following HTTP methods for API requests:

  • GET: Retrieves specified information from a specified resource endpoint

  • POST: Creates or updates a resource

  • DELETE: Deletes a resource

Base URL

The base URL for Knowledge Store REST API requests is http://SERVER:PORT/business-central/rest/, such as http://localhost:8080/business-central/rest/.

The REST API base URL for the Knowledge Store and for the jBPM controller built into Business Central is the same because both are considered part of the Business Central REST services.
Endpoints

Knowledge Store REST API endpoints, such as /spaces/{spaceName} for a specified space, are the URIs that you append to the Knowledge Store REST API base URL to access the corresponding resource or type of resource in jBPM.

Example request URL for /spaces/{spaceName} endpoint

http://localhost:8080/business-central/rest/spaces/MySpace

Request data

HTTP POST requests in the Knowledge Store REST API may require a JSON request body with data to accompany the request.

Example POST request URL and JSON request body data

http://localhost:8080/business-central/rest/spaces/MySpace/projects

{
  "name": "Employee_Rostering",
  "groupId": "employeerostering",
  "version": "1.0.0-SNAPSHOT",
  "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill."
}

12.1.1. Sending requests with the Knowledge Store REST API using a REST client or curl utility

The Knowledge Store REST API enables you to interact with your projects and spaces in jBPM without using the Business Central user interface. You can send Knowledge Store REST API requests using any REST client or curl utility.

Prerequisites
  • Business Central is installed and running.

  • You have rest-all user role access to Business Central.

Procedure
  1. Identify the relevant API endpoint to which you want to send a request, such as [GET] /spaces to retrieve spaces in Business Central.

  2. In a REST client or curl utility, enter the following components for a GET request to /spaces. Adjust any request details according to your use case.

    For REST client:

    • Authentication: Enter the user name and password of the Business Central user with the rest-all role.

    • HTTP Headers: Set the following header:

      • Accept: application/json

    • HTTP method: Set to GET.

    • URL: Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces.

    For curl utility:

    • -u: Enter the user name and password of the Business Central user with the rest-all role.

    • -H: Set the following header:

      • accept: application/json

    • -X: Set to GET.

    • URL: Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces.

    curl -u 'baAdmin:password@1' -H "accept: application/json" -X GET "http://localhost:8080/business-central/rest/spaces"
  3. Execute the request and review the KIE Server response.

    Example server response (JSON):

    [
      {
        "name": "MySpace",
        "description": null,
        "projects": [
          {
            "name": "Employee_Rostering",
            "spaceName": "MySpace",
            "groupId": "employeerostering",
            "version": "1.0.0-SNAPSHOT",
            "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.",
            "publicURIs": [
              {
                "protocol": "git",
                "uri": "git://localhost:9418/MySpace/example-Employee_Rostering"
              },
              {
                "protocol": "ssh",
                "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering"
              }
            ]
          },
          {
            "name": "Mortgage_Process",
            "spaceName": "MySpace",
            "groupId": "mortgage-process",
            "version": "1.0.0-SNAPSHOT",
            "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.",
            "publicURIs": [
              {
                "protocol": "git",
                "uri": "git://localhost:9418/MySpace/example-Mortgage_Process"
              },
              {
                "protocol": "ssh",
                "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process"
              }
            ]
          }
        ],
        "owner": "admin",
        "defaultGroupId": "com.myspace"
      },
      {
        "name": "MySpace2",
        "description": null,
        "projects": [
          {
            "name": "IT_Orders",
            "spaceName": "MySpace",
            "groupId": "itorders",
            "version": "1.0.0-SNAPSHOT",
            "description": "Case Management IT Orders project",
            "publicURIs": [
              {
                "protocol": "git",
                "uri": "git://localhost:9418/MySpace/example-IT_Orders-1"
              },
              {
                "protocol": "ssh",
                "uri": "ssh://localhost:8001/MySpace/example-IT_Orders-1"
              }
            ]
          }
        ],
        "owner": "admin",
        "defaultGroupId": "com.myspace"
      }
    ]
  4. In your REST client or curl utility, send another API request with the following components for a POST request to /spaces/{spaceName}/projects to create a project within a space. Adjust any request details according to your use case.

    For REST client:

    • Authentication: Enter the user name and password of the Business Central user with the rest-all role.

    • HTTP Headers: Set the following header:

      • Accept: application/json

      • Content-Type: application/json

    • HTTP method: Set to POST.

    • URL: Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces/MySpace/projects.

    • Request body: Add a JSON request body with the identification data for the new project:

    {
      "name": "Employee_Rostering",
      "groupId": "employeerostering",
      "version": "1.0.0-SNAPSHOT",
      "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill."
    }

    For curl utility:

    • -u: Enter the user name and password of the Business Central user with the rest-all role.

    • -H: Set the following headers:

      • accept: application/json

      • content-type: application/json

    • -X: Set to POST.

    • URL: Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces/MySpace/projects.

    • -d: Add a JSON request body or file (@file.json) with the identification data for the new project:

    curl -u 'baAdmin:password@1' -H "accept: application/json" -H "content-type: application/json" -X POST "http://localhost:8080/business-central/rest/spaces/MySpace/projects" -d "{ \"name\": \"Employee_Rostering\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\"}"
    curl -u 'baAdmin:password@1' -H "accept: application/json" -H "content-type: application/json" -X POST "http://localhost:8080/business-central/rest/spaces/MySpace/projects" -d @my-project.json
  5. Execute the request and review the KIE Server response.

    Example server response (JSON):

    {
      "jobId": "1541017411591-6",
      "status": "APPROVED",
      "spaceName": "MySpace",
      "projectName": "Employee_Rostering",
      "projectGroupId": "employeerostering",
      "projectVersion": "1.0.0-SNAPSHOT",
      "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill."
    }

    If you encounter request errors, review the returned error code messages and adjust your request accordingly.

12.1.2. Supported Knowledge Store REST API endpoints

The Knowledge Store REST API provides endpoints for managing spaces and projects in jBPM and for retrieving information about previous Knowledge Store REST API requests, or jobs.

12.1.2.1. Spaces

The Knowledge Store REST API supports the following endpoints for managing spaces in Business Central. The Knowledge Store REST API base URL is http://SERVER:PORT/business-central/rest/. All requests require HTTP Basic authentication or token-based authentication for the rest-all user role.

[GET] /spaces

Returns all spaces in Business Central.

Example server response (JSON)
[
  {
    "name": "MySpace",
    "description": null,
    "projects": [
      {
        "name": "Employee_Rostering",
        "spaceName": "MySpace",
        "groupId": "employeerostering",
        "version": "1.0.0-SNAPSHOT",
        "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.",
        "publicURIs": [
          {
            "protocol": "git",
            "uri": "git://localhost:9418/MySpace/example-Employee_Rostering"
          },
          {
            "protocol": "ssh",
            "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering"
          }
        ]
      },
      {
        "name": "Mortgage_Process",
        "spaceName": "MySpace",
        "groupId": "mortgage-process",
        "version": "1.0.0-SNAPSHOT",
        "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.",
        "publicURIs": [
          {
            "protocol": "git",
            "uri": "git://localhost:9418/MySpace/example-Mortgage_Process"
          },
          {
            "protocol": "ssh",
            "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process"
          }
        ]
      }
    ],
    "owner": "admin",
    "defaultGroupId": "com.myspace"
  },
  {
    "name": "MySpace2",
    "description": null,
    "projects": [
      {
        "name": "IT_Orders",
        "spaceName": "MySpace",
        "groupId": "itorders",
        "version": "1.0.0-SNAPSHOT",
        "description": "Case Management IT Orders project",
        "publicURIs": [
          {
            "protocol": "git",
            "uri": "git://localhost:9418/MySpace/example-IT_Orders-1"
          },
          {
            "protocol": "ssh",
            "uri": "ssh://localhost:8001/MySpace/example-IT_Orders-1"
          }
        ]
      }
    ],
    "owner": "admin",
    "defaultGroupId": "com.myspace"
  }
]
[GET] /spaces/{spaceName}

Returns information about a specified space.

Table 26. Request parameters
Name Description Type Requirement

spaceName

Name of the space to be retrieved

String

Required

Example server response (JSON)
{
  "name": "MySpace",
  "description": null,
  "projects": [
    {
      "name": "Mortgage_Process",
      "spaceName": "MySpace",
      "groupId": "mortgage-process",
      "version": "1.0.0-SNAPSHOT",
      "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.",
      "publicURIs": [
        {
          "protocol": "git",
          "uri": "git://localhost:9418/MySpace/example-Mortgage_Process"
        },
        {
          "protocol": "ssh",
          "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process"
        }
      ]
    },
    {
      "name": "Employee_Rostering",
      "spaceName": "MySpace",
      "groupId": "employeerostering",
      "version": "1.0.0-SNAPSHOT",
      "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.",
      "publicURIs": [
        {
          "protocol": "git",
          "uri": "git://localhost:9418/MySpace/example-Employee_Rostering"
        },
        {
          "protocol": "ssh",
          "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering"
        }
      ]
    },
    {
      "name": "Evaluation_Process",
      "spaceName": "MySpace",
      "groupId": "evaluation",
      "version": "1.0.0-SNAPSHOT",
      "description": "Getting started Business Process for evaluating employees",
      "publicURIs": [
        {
          "protocol": "git",
          "uri": "git://localhost:9418/MySpace/example-Evaluation_Process"
        },
        {
          "protocol": "ssh",
          "uri": "ssh://localhost:8001/MySpace/example-Evaluation_Process"
        }
      ]
    },
    {
      "name": "IT_Orders",
      "spaceName": "MySpace",
      "groupId": "itorders",
      "version": "1.0.0-SNAPSHOT",
      "description": "Case Management IT Orders project",
      "publicURIs": [
        {
          "protocol": "git",
          "uri": "git://localhost:9418/MySpace/example-IT_Orders"
        },
        {
          "protocol": "ssh",
          "uri": "ssh://localhost:8001/MySpace/example-IT_Orders"
        }
      ]
    }
  ],
  "owner": "admin",
  "defaultGroupId": "com.myspace"
}
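As with the earlier examples, this endpoint can be exercised with a curl call such as the following (the user name, password and space name are illustrative values):

curl -u 'baAdmin:password@1' -H "accept: application/json" -X GET "http://localhost:8080/business-central/rest/spaces/MySpace"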
[POST] /spaces

Creates a space in Business Central.

Table 27. Request parameters
Name Description Type Requirement

body