JBoss.org Community Documentation

jBPM Developers Guide


1. Introduction
1.1. License and EULA
1.2. Sources
1.3. JVM version
1.4. Library dependencies
1.5. What is it
1.6. Features
1.7. Purpose
2. Execution modes
2.1. Object execution mode
2.2. Persistent execution mode
2.3. Embedded execution mode
3. Architecture
3.1. APIs
3.2. Activity API
3.3. Event listener API
3.4. Client API
3.5. Environment
3.6. Commands
3.7. Services
4. Implementing basic activities
4.1. Activity
4.2. Activity example
4.3. ExternalActivity
4.4. ExternalActivity example
4.5. Basic process execution
4.6. Events
4.7. Event propagation
5. Process anatomy
6. Advanced graph execution
6.1. Loops
6.2. Sub processes
6.3. Implicit proceed behaviour
6.4. Functional activities
6.5. Execution and threads
6.6. Process concurrency
6.7. Exception handlers
6.8. Process modifications
6.9. Locking and execution state
7. Variables
8. Timers
9. Asynchronous continuations
10. Software logging
10.1. Configuration
10.2. Categories
10.3. JDK logging
10.4. Debugging persistence
11. History
11.1. Overview

The Process Virtual Machine is designed in such a way that it's easy to build workflow, BPM, orchestration and other graph based execution languages on top of it. Examples of languages that have been built on top of this library:

Even while the nature of these languages is already very diverse, these are all examples of general purpose workflow languages. The real power of the Process Virtual Machine is that it's very easy to build Domain Specific Languages (DSL) with it. For instance, it's very easy to build a very simple (and dedicated) workflow language to specify approvals related to documents in a document management system.

BPM as a discipline refers to the management level effort to optimise the efficiency of an organisation by analysing and optimising the procedures of how people and systems work together. In designing the Process Virtual Machine and the jPDL language in particular, we have taken great care to facilitate the link between BPM analysis notations and executable process languages. Here are the best-known modeling notations:

There are basically three process execution modes: object, persistent and embedded. For the persistent and embedded execution modes, the process execution has to participate in a transaction. In that case, the process execution has to take place inside of an Environment. The environment will be used to bind process execution updates to a transaction in the application transaction. The environment can be used to bind to e.g. a JDBC connection, JTA, BMT, Spring transactions and so on.

Object execution mode is the simplest form of working with the Process Virtual Machine. This means working with the process definition and execution objects directly through the client API. Let's show this by an example. We start by creating a ClientProcessDefinition that looks like this:


ClientProcessDefinition processDefinition = ProcessFactory.build("loan")
  .activity("submit loan request").initial().behaviour(AutomaticActivity.class)
    .transition().to("evaluate")
  .activity("evaluate").behaviour(WaitState.class)
    .transition("approve").to("wire money")
    .transition("reject").to("end")
  .activity("wire money").behaviour(AutomaticActivity.class)
    .transition().to("archive")
  .activity("archive").behaviour(WaitState.class)
    .transition().to("end")
  .activity("end").behaviour(WaitState.class)
.done();

The ProcessFactory is a helper class that provides convenience for building an object graph that represents a process definition. AutomaticActivity is a pass-through activity without anything happening and WaitState will wait until an external signal is given. Both activity implementations will be covered in more depth later.

The processDefinition object serves as a factory for process instance objects. A process instance represents one execution of the process definition. More precisely, the process instance is the main path of execution.

ClientExecution execution = processDefinition.startProcessInstance();

A process instance itself is also an Execution. Potentially, an execution can have child executions to represent concurrent paths of execution.

The execution can be seen as a state machine that operates as described in the process definition. Starting a process instance means that the initial activity of the process definition is executed. Since this is an automatic activity, the execution will proceed to the evaluate activity. The evaluate activity is a wait state. When the execution arrives at the evaluate activity, the startProcessInstance method returns and the execution waits until an external trigger is provided with the signal method. So after startProcessInstance we can verify that the execution is positioned in the evaluate activity.

assertEquals("evaluate", execution.getActivityName());

To make the process execute further, we provide an external trigger with the signal method. The result of the evaluation will be given as the signalName parameter like this:

execution.signal("approve");

The WaitState activity implementation will take the transition that corresponds to the given signalName. So the execution will first execute the automatic activity wire money and then return after entering the next wait state archive.

assertEquals("archive", execution.getActivityName());

When the execution is waiting in the archive activity, the default signal will make it take the first unnamed transition.

execution.signal();
assertEquals("end", execution.getActivityName());

The process has executed in the thread of the client. The startProcessInstance method only returned when the evaluate activity was reached. In other words, the ClientProcessDefinition.startProcessInstance and ClientExecution.signal methods block until the next wait state is reached.
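The "run until the next wait state" semantics can be sketched with a small, self-contained state machine. This is plain Java written for illustration: the class and method names (MiniProcess, Node, proceed) are assumptions of this sketch, not the actual PVM API.

```java
import java.util.*;

// Simplified sketch of the "run until the next wait state" semantics.
// Class and method names are illustrative, not the actual PVM API.
public class MiniProcess {

  static class Node {
    final boolean waitState;
    final Map<String, String> transitions = new LinkedHashMap<>(); // signal name -> target
    Node(boolean waitState) { this.waitState = waitState; }
  }

  final Map<String, Node> nodes = new HashMap<>();
  String current;

  Node node(String name, boolean waitState) {
    Node n = new Node(waitState);
    nodes.put(name, n);
    return n;
  }

  // like startProcessInstance: run from the initial node until a wait state
  void start(String initial) {
    current = initial;
    proceed();
  }

  // like signal: resolve the outgoing transition, then run on to the next wait state
  void signal(String signalName) {
    current = nodes.get(current).transitions.get(signalName);
    proceed();
  }

  // automatic activities are passed through; the method returns at a wait state
  private void proceed() {
    while (!nodes.get(current).waitState) {
      current = nodes.get(current).transitions.values().iterator().next();
    }
  }

  public static void main(String[] args) {
    MiniProcess p = new MiniProcess();
    p.node("submit loan request", false).transitions.put(null, "evaluate");
    Node evaluate = p.node("evaluate", true);
    evaluate.transitions.put("approve", "wire money");
    evaluate.transitions.put("reject", "end");
    p.node("wire money", false).transitions.put(null, "archive");
    p.node("archive", true).transitions.put(null, "end");
    p.node("end", true);

    p.start("submit loan request"); // returns only at the first wait state
    System.out.println(p.current);  // evaluate
    p.signal("approve");            // runs through "wire money" to "archive"
    System.out.println(p.current);  // archive
    p.signal(null);                 // default signal: unnamed transition
    System.out.println(p.current);  // end
  }
}
```

Note how start and signal only return once a wait state node is reached, mirroring the blocking behaviour of startProcessInstance and signal described above.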

The Process Virtual Machine also contains the hibernate mappings to store the process definitions and executions in any database. A special session facade called ExecutionService is provided for working with process executions in such a persistent environment.

Two configuration files should be available on the classpath: an environment configuration file and a hibernate.properties file. A basic configuration for persistent execution mode in a standard Java environment looks like this:

environment.cfg.xml:
<jbpm-configuration xmlns="http://jbpm.org/xsd/cfg">

  <process-engine-context>
  
    <deployer-manager>
      <assign-file-type>
        <file extension=".jpdl.xml" type="jpdl" />
      </assign-file-type>
      <parse-jpdl />
      <check-process />
      <check-problems />
      <save />
    </deployer-manager>
    
    <process-service />
    <execution-service />
    <management-service />
  
    <command-service>
      <retry-interceptor />
      <environment-interceptor />
      <standard-transaction-interceptor />
    </command-service>
    
    <hibernate-configuration>
      <properties resource="hibernate.properties" />
      <mapping resource="jbpm.pvm.typedefs.hbm.xml" />
      <mapping resource="jbpm.pvm.wire.hbm.xml" />
      <mapping resource="jbpm.pvm.definition.hbm.xml" />
      <mapping resource="jbpm.pvm.execution.hbm.xml" />
      <mapping resource="jbpm.pvm.variable.hbm.xml" />
      <mapping resource="jbpm.pvm.job.hbm.xml" />
      <mapping resource="jbpm.jpdl.hbm.xml" />
      <cache-configuration resource="jbpm.pvm.cache.xml" 
                           usage="nonstrict-read-write" />
    </hibernate-configuration>
    
    <hibernate-session-factory />
    
    <id-generator />
    <types resource="jbpm.pvm.types.xml" />
    <job-executor auto-start="false" />
  
  </process-engine-context>

  <transaction-context>
    <hibernate-session />
    <transaction />
    <pvm-db-session />
    <job-db-session />
    <message-session />
  </transaction-context>

</jbpm-configuration>

And next to it a hibernate.properties like this

hibernate.properties:
hibernate.dialect                      org.hibernate.dialect.HSQLDialect
hibernate.connection.driver_class      org.hsqldb.jdbcDriver
hibernate.connection.url               jdbc:hsqldb:mem:.
hibernate.connection.username          sa
hibernate.connection.password
hibernate.hbm2ddl.auto                 create-drop
hibernate.cache.use_second_level_cache true
hibernate.cache.provider_class         org.hibernate.cache.HashtableCacheProvider
# hibernate.show_sql                     true
hibernate.format_sql                   true
hibernate.use_sql_comments             true

Then you can obtain the services from the environment factory like this:

EnvironmentFactory environmentFactory = new PvmEnvironmentFactory("environment.cfg.xml");

ProcessService processService = environmentFactory.get(ProcessService.class);
ExecutionService executionService = environmentFactory.get(ExecutionService.class);
ManagementService managementService = environmentFactory.get(ManagementService.class);

The responsibility of the ProcessService is to manage the repository of process definitions. Before we can start a process execution, the process definition needs to be deployed into the process repository. Process definitions can be supplied in various formats and process definition languages. A deployment collects process definition information from various sources like a ZIP file, an XML file or a process definition object. The method ProcessService.deploy will take a deployment through all the deployers that are configured in the configuration file.

In this example, we'll supply a process definition programmatically for deployment.

ClientProcessDefinition processDefinition = ProcessFactory.build("loan")
  .activity("submit loan request").initial().behaviour(AutomaticActivity.class)
    .transition().to("evaluate")
  .activity("evaluate").behaviour(WaitState.class)
    .transition("approve").to("wire money")
    .transition("reject").to("end")
  .activity("wire money").behaviour(AutomaticActivity.class)
    .transition().to("archive")
  .activity("archive").behaviour(WaitState.class)
    .transition().to("end")
  .activity("end").behaviour(WaitState.class)
.done();

Deployment deployment = new Deployment(processDefinition);
processService.deploy(deployment);

Now, a version of that process definition is stored in the database. The check-version deployer will have assigned version 1 to the stored process definition. The create-id deployer will have distilled id loan:1 from the process name and the assigned version.

Deploying that process again will create a new process definition version in the database, with an incremented version number. For the purpose of versioning, process definitions are considered equal if they have the same name.
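The versioning rule can be sketched as follows. The class name and the max-plus-one bookkeeping are illustrative assumptions of this sketch, not the actual deployer code:

```java
import java.util.*;

// Illustrative sketch of the versioning rule: process definitions with the
// same name are versions of each other; a new deployment gets version max+1.
// Class and method names are assumptions, not the actual deployer code.
public class VersionAssigner {

  // process name -> highest version deployed so far
  private final Map<String, Integer> latest = new HashMap<>();

  /** assigns the next version and returns the distilled id, e.g. "loan:1" */
  public String deploy(String processName) {
    int version = latest.merge(processName, 1, (old, one) -> old + 1);
    return processName + ":" + version;
  }

  public static void main(String[] args) {
    VersionAssigner repo = new VersionAssigner();
    System.out.println(repo.deploy("loan"));  // loan:1
    System.out.println(repo.deploy("loan"));  // loan:2
    System.out.println(repo.deploy("order")); // order:1
  }
}
```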

It is recommended to supply a user-provided key for each process execution. Starting a new process execution goes like this:

Execution execution = executionService.startExecution("loan:1", "request7836");

The return value is an execution interface, which prevents navigation of relations. That is because outside of the service methods, the transaction and hibernate session are not guaranteed to still be open. In fact, the default configuration as given above will only keep the transaction and session open for the duration of the service method. So navigating the relations outside of the service methods might result in a hibernate LazyInitializationException. But the current activity name can still be verified:

assertEquals("evaluate", execution.getActivityName());

Also very important is the generated id that can be obtained. The default id-generator will use the process definition id and the given key to make a unique id for the process execution like this:

assertEquals("loan:1/request7836", execution.getId());

That id must be used when providing the subsequent external triggers to the process execution like this:

executionService.signalExecution("loan:1/request7836", "approve");
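The id scheme just shown is simple enough to sketch directly; the helper method name is illustrative:

```java
// Sketch of the default id scheme described above: the execution id is the
// process definition id plus the user-provided key. The class and method
// names are illustrative, not the actual id-generator code.
public class ExecutionIds {

  static String executionId(String processDefinitionId, String key) {
    return processDefinitionId + "/" + key;
  }

  public static void main(String[] args) {
    System.out.println(executionId("loan:1", "request7836")); // loan:1/request7836
  }
}
```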

More information about service interfaces to run in persistent mode can be found in package org.jbpm.pvm of the api docs.

Embedded execution mode means that the state of a process is stored as a string column inside a user domain object like e.g. a loan.

public class Loan {

  /** the loan process definition as a static resource */
  private static final ClientProcessDefinition processDefinition = createLoanProcess();
  
  private static ClientProcessDefinition createLoanProcess() {
    ClientProcessDefinition processDefinition = ProcessFactory.build("loan")
      .activity("submit loan request").initial().behaviour(AutomaticActivity.class)
        .transition().to("evaluate")
      .activity("evaluate").behaviour(WaitState.class)
        .transition("approve").to("wire money")
        .transition("reject").to("end")
      .activity("wire money").behaviour(AutomaticActivity.class)
        .transition().to("archive")
      .activity("archive").behaviour(WaitState.class)
        .transition().to("end")
      .activity("end").behaviour(WaitState.class)
    .done();
    
    return processDefinition;
  }

  /** exposes the process definition to the execution hibernate type */
  private static ClientProcessDefinition getProcessDefinition() {
    return processDefinition;
  }
  

  long dbid;
  String customer;
  double amount;
  ClientExecution execution;
  
  /** constructor for persistence */
  protected Loan() {
  }

  public Loan(String customer, double amount) {
    this.customer = customer;
    this.amount = amount;
    this.execution = processDefinition.startProcessInstance();
  }

  public void approve() {
    execution.signal("approve");
  }

  public void reject() {
    execution.signal("reject");
  }

  public void archiveComplete() {
    execution.signal();
  }

  public String getState() {
    return execution.getActivityName();
  }

  ...getters...
}

Apart from the process-related members, you can see that this is a POJO without anything fancy: just a bean that can be stored with hibernate. The static process definition, the execution field and the trigger methods are the parts of the class that relate to the process and its execution. Note that nothing of the process definition or execution is exposed to the user of the Loan class.

Each Loan object corresponds to a loan process instance. Some methods of the Loan class correspond to the external triggers that need to be given during the lifecycle of a Loan object.

Next we'll show how to use this class. To get started we need a

hibernate.cfg.xml:
<?xml version="1.0" encoding="utf-8"?>

<!DOCTYPE hibernate-configuration PUBLIC
          "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
  <session-factory>

    <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property>
    <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
    <property name="hibernate.connection.url">jdbc:hsqldb:mem:.</property>
    <property name="hibernate.connection.username">sa</property>
    <property name="hibernate.connection.password"></property>
    <property name="hibernate.hbm2ddl.auto">create</property>
    <property name="hibernate.show_sql">true</property>
    <property name="hibernate.format_sql">true</property>
    <property name="hibernate.use_sql_comments">true</property>
    
    <mapping resource="Loan.hbm.xml"/>
    
  </session-factory>
</hibernate-configuration>

And a

Loan.hbm.xml:
<?xml version="1.0"?>

<!DOCTYPE hibernate-mapping PUBLIC 
          "-//Hibernate/Hibernate Mapping DTD 3.0//EN" 
          "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">

<hibernate-mapping package="org.jbpm.pvm.api.db.embedded" default-access="field">

  <typedef name="execution" class="org.jbpm.pvm.internal.hibernate.ExecutionType" />

  <class name="Loan" table="LOAN">

    <id name="dbid">
      <generator class="sequence"/>
    </id>

    <property name="execution" type="execution" />
    <property name="customer" />
    <property name="amount" />
    
  </class>

</hibernate-mapping>

Then you can use the Loan class like this in a test

Configuration configuration = new Configuration();
configuration.configure();
SessionFactory sessionFactory = configuration.buildSessionFactory();

// start a session/transaction
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();

Loan loan = new Loan("john doe", 234.0);
session.save(loan);
assertEquals("evaluate", loan.getState());

// start a new session/transaction
transaction.commit();
session.close();
session = sessionFactory.openSession();
transaction = session.beginTransaction();

loan = (Loan) session.get(Loan.class, loan.getDbid());
assertEquals("evaluate", loan.getState());
loan.approve();
assertEquals("archive", loan.getState());

// start a new session/transaction
transaction.commit();
session.close();

After executing this code snippet, the LOAN table contains a single record in which the EXECUTION column holds the current execution state (the activity name) as a string.


There are three services: ProcessService, ExecutionService and ManagementService. In general, services are session facades that expose methods for persistent usage of the PVM. The next fragments show the essential methods as example to illustrate those services.

The ProcessService manages the repository of process definitions.

public interface ProcessService {

  ProcessDefinition deploy(Deployment deployment);

  ProcessDefinition findLatestProcessDefinition(String processDefinitionName);

  ...

}

The ExecutionService manages the runtime executions.

public interface ExecutionService {

  Execution startExecution(String processDefinitionId, String executionKey);

  Execution signalExecution(String executionId, String signalName);
   
  ...

}

The ManagementService groups all management operations that are needed to keep the system up and running.

public interface ManagementService {

  List<Job> getJobsWithException(int firstResult, int maxResults);

  void executeJob(String jobId);
  
  ...
  
}

The implementation of all these methods is encapsulated in Commands. And the three services all delegate the execution of the commands to a CommandService:

public interface CommandService {

  <T> T execute(Command<T> command);

}

The CommandService is configured in the environment. A chain of CommandServices can act as interceptors around a command. This is the core mechanism on how persistence and transactional support can be offered in a variety of environments.
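A minimal sketch of this pattern follows. The Command and CommandService interfaces mirror the text; the DefaultCommandService and the logging interceptor are purely illustrative:

```java
import java.util.*;

// Minimal sketch of the command + interceptor-chain pattern described above.
// The interfaces mirror the text; the concrete classes are illustrative.
public class CommandChain {

  interface Command<T> { T execute(); }

  interface CommandService { <T> T execute(Command<T> command); }

  // terminal service: just executes the command
  static class DefaultCommandService implements CommandService {
    public <T> T execute(Command<T> command) { return command.execute(); }
  }

  // an interceptor wraps another CommandService and adds behaviour around it
  static class LoggingInterceptor implements CommandService {
    final String name;
    final CommandService next;
    final List<String> log;
    LoggingInterceptor(String name, CommandService next, List<String> log) {
      this.name = name; this.next = next; this.log = log;
    }
    public <T> T execute(Command<T> command) {
      log.add(name + ":before");
      try {
        return next.execute(command);
      } finally {
        log.add(name + ":after");
      }
    }
  }

  public static void main(String[] args) {
    List<String> log = new ArrayList<>();
    // build the chain outside-in, like the configured interceptor stack
    CommandService service =
        new LoggingInterceptor("retry",
            new LoggingInterceptor("environment",
                new LoggingInterceptor("transaction",
                    new DefaultCommandService(), log), log), log);

    String result = service.execute(() -> "done");
    System.out.println(result); // done
    System.out.println(log);
  }
}
```

Every interceptor sees the command on the way in and the result (or exception) on the way out, which is exactly what makes this stack suitable for retries, environment blocks and transaction demarcation.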

From the default configuration, which is included in full above, here is the section that configures the services:

<jbpm-configuration xmlns="http://jbpm.org/xsd/cfg">

  <process-engine-context>
  
    <process-service />
    <execution-service />
    <management-service />
  
    <command-service>
      <retry-interceptor />
      <environment-interceptor />
      <standard-transaction-interceptor />
    </command-service>
    
    ...
    

The three services process-service, execution-service and management-service will look up the configured command-service by type. The command-service tag corresponds to the default command service that does nothing else than execute the command, providing it with the current environment.

The configured command-service results in the following chain of three interceptors followed by the default command executor.


The retry interceptor is the first in the chain and the one that will be exposed as the CommandService.class from the environment. So the retry interceptor will be given to the respective services process-service, execution-service and management-service.

The retry-interceptor will catch hibernate StaleObjectStateExceptions (indicating optimistic locking failures) and retry the command.

The environment-interceptor will put an environment block around the execution of the command.

The standard-transaction-interceptor will initialize a StandardTransaction. The hibernate session/transaction will be enlisted as a resource with this standard transaction.
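In the spirit of the retry-interceptor just described, a retry wrapper might look like the sketch below. The exception type, method names and retry count are illustrative assumptions, not the actual interceptor code:

```java
// Sketch of a retry interceptor: re-execute a command when an
// optimistic-locking conflict is signalled. The exception type and
// retry count here are illustrative assumptions.
public class RetrySketch {

  interface Command<T> { T execute(); }

  // stands in for hibernate's optimistic-locking failure
  static class StaleStateException extends RuntimeException {}

  static <T> T executeWithRetries(Command<T> command, int maxAttempts) {
    for (int attempt = 1; ; attempt++) {
      try {
        return command.execute();
      } catch (StaleStateException e) {
        if (attempt >= maxAttempts) throw e; // give up after the last attempt
      }
    }
  }

  public static void main(String[] args) {
    int[] calls = {0};
    // fails twice with a stale-state conflict, then succeeds
    String result = executeWithRetries(() -> {
      if (++calls[0] < 3) throw new StaleStateException();
      return "committed";
    }, 3);
    System.out.println(result + " after " + calls[0] + " attempts"); // committed after 3 attempts
  }
}
```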

Different configurations of this interceptor stack also make it possible to

  • delegate execution to a local ejb command service so that a container-managed transaction is started.
  • delegate to a remote ejb command service so that the command actually gets executed on a different JVM.
  • package the command as an asynchronous message so that the command gets executed asynchronously in a different transaction.

This chapter explains the basics of process definitions, the features offered by the Process Virtual Machine and how activity implementations can be built. At the same time the client API is shown to execute processes with those activity implementations.

We'll start with a very original hello world example. A Display activity will print a message to the console:

public class Display implements Activity {

  String message;

  public Display(String message) {
    this.message = message;
  }

  public void execute(ActivityExecution execution) {
    System.out.println(message);
  }
}

Let's build our first process definition with this activity:


ClientProcessDefinition processDefinition = ProcessFactory.build()
    .activity("a").initial().behaviour(new Display("hello"))
      .transition().to("b")
    .activity("b").behaviour(new Display("world"))
.done();

Now we can execute this process as follows:

Execution execution = processDefinition.startExecution();

The invocation of startExecution will print hello world to the console:

hello
world

One thing already worth noticing is that activities can be configured with properties. In the Display example, you can see that the message property is configured differently in the two usages. With configuration properties it becomes possible to write reusable activities. They can then be configured differently each time they are used in a process. That is an essential part of how process languages can be built on top of the Process Virtual Machine.

The other part that needs explanation is that this activity implementation does not contain any instructions for the propagation of the execution. When a new process instance is started, the execution is positioned in the initial activity and that activity is executed. The method Display.execute makes use of what is called implicit propagation of execution. Concretely this means that the activity itself does not invoke any of the methods on the execution to propagate it. In that case implicit propagation kicks in. Implicit propagation will take the first transition if there is one. If not, it will end the execution. This explains why both activities a and b are executed and why the execution stops after activity b is executed.

More details about the implicit proceed behaviour can be found in Section 6.3, “Implicit proceed behaviour”.
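The implicit proceed rule just described can be sketched as a simple decision. This is a simplified model for illustration, not the PVM internals:

```java
import java.util.*;

// Sketch of the implicit proceed rule: if an activity does not explicitly
// propagate the execution, take the first outgoing transition if there is
// one, otherwise end the execution. Names are illustrative.
public class ImplicitProceed {

  static class Activity {
    final String name;
    final List<String> outgoing = new ArrayList<>();
    boolean propagatedExplicitly; // set by activities that call waitForSignal()/take()
    Activity(String name) { this.name = name; }
  }

  static String afterExecute(Activity activity) {
    if (activity.propagatedExplicitly) return activity.name;           // activity decided itself
    if (!activity.outgoing.isEmpty()) return activity.outgoing.get(0); // take first transition
    return "ended";                                                    // no transition: end
  }

  public static void main(String[] args) {
    Activity a = new Activity("a");
    a.outgoing.add("b");
    System.out.println(afterExecute(a)); // b: implicit proceed takes the transition

    Activity b = new Activity("b");
    System.out.println(afterExecute(b)); // ended: no outgoing transition
  }
}
```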

External activities are activities for which the responsibility for proceeding the execution is transferred externally, meaning outside the process system. This means that for the system that is executing the process, it's a wait state. The execution will wait until an external trigger is given.

For dealing with external triggers, ExternalActivity adds a signal method to the Activity:

public interface ExternalActivity extends Activity {

  void signal(ActivityExecution execution,
              String signal, 
              Map<String, Object> parameters) throws Exception;
              
}

Just like with plain activities, when an execution arrives in an activity, the execute-method of the activity behaviour is invoked. In external activities, the execute method typically does something to transfer the responsibility to another system and then enters a wait state by invoking execution.waitForSignal(). For example in the execute method, responsibility could be transferred to a person by creating a task entry in a task management system and then wait until the person completes the task.

When an activity behaves as a wait state, the execution will wait in that activity until the execution's signal method is invoked. The execution will delegate that signal to the Activity behaviour of the current activity.

So the Activity's signal-method is invoked when the execution receives an external trigger during the wait state. With the signal method, responsibility is transferred back to the process execution. For example, when a person completes a task, the task management system calls the signal method on the execution.

A signal can optionally have a signal name and a map of parameters. The most common way for activity behaviours to interpret the signal and parameters is that the signal name relates to the outgoing transition that needs to be taken and that the parameters are set as variables on the execution. But those are just conventions; it is up to the activity to use the signal and the parameters as it pleases.
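That common interpretation can be sketched like this. It is a simplified model for illustration, not the ExternalActivity API:

```java
import java.util.*;

// Sketch of the common interpretation described above: the signal name
// selects the outgoing transition and the parameters become execution
// variables. A simplified model, not the ExternalActivity API.
public class SignalSketch {

  static class Execution {
    String activity = "evaluate";
    final Map<String, Object> variables = new HashMap<>();
    final Map<String, String> transitions =
        Map.of("approve", "wire money", "reject", "end");

    void signal(String signalName, Map<String, Object> parameters) {
      variables.putAll(parameters);           // parameters become variables
      activity = transitions.get(signalName); // signal name picks the transition
    }
  }

  public static void main(String[] args) {
    Execution execution = new Execution();
    execution.signal("approve", Map.of("amount", 234.0));
    System.out.println(execution.activity);                // wire money
    System.out.println(execution.variables.get("amount")); // 234.0
  }
}
```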

Here's a first example of a simple wait state implementation:

public class WaitState implements ExternalActivity {

  public void execute(ActivityExecution execution) {
    execution.waitForSignal();
  }

  public void signal(ActivityExecution execution, 
                     String signalName, 
                     Map<String, Object> parameters) {
    execution.take(signalName);
  }
}

The execute-method calls execution.waitForSignal(). The invocation of execution.waitForSignal() will bring the process execution into a wait state until an external trigger is given.

The signal-method takes the transition with the signal parameter as the transition name. So when an execution receives an external trigger, the signal name is interpreted as the name of an outgoing transition and the execution will be propagated over that transition.

Here's the same simple process that has a transition from a to b. This time, the behaviour of the two activities will be WaitState.


ClientProcessDefinition processDefinition = ProcessFactory.build()
    .activity("a").initial().behaviour(new WaitState())
      .transition().to("b")
    .activity("b").behaviour(new WaitState())
.done();

Let's start a new process instance for this process definition:

ClientExecution execution = processDefinition.startProcessInstance();

Starting this process will execute the WaitState activity in activity a. WaitState.execute will invoke ActivityExecution.waitForSignal. So when the processDefinition.startProcessInstance() returns, the execution will still be positioned in activity a.

assertEquals("a", execution.getActivityName());

Then we provide the external trigger by calling the signal method.

execution.signal();

The execution.signal() will delegate to the behaviour of the current activity. So in this case that is the WaitState in activity a. The WaitState.signal will invoke ActivityExecution.take(String transitionName). Since we didn't supply a signalName, the first transition with name null will be taken. The only transition we specified out of activity a didn't get a name, so that one will be taken. And that transition points to activity b. When the execution arrives in activity b, the WaitState in activity b is executed. Similarly to what we saw above, the execution will wait in activity b, and this time the signal method will return, leaving the execution positioned in activity b.

assertEquals("b", execution.getActivityName());

In this next example, we'll combine automatic activities and wait states. This example builds upon the loan approval process with the WaitState and Display activities that we've just created. Graphically, the loan process looks like this:


Building process graphs in Java code can be tedious because you have to keep track of all the references in local variables. To resolve that, the Process Virtual Machine comes with a ProcessFactory. The ProcessFactory is a kind of domain specific language (DSL) that is embedded in Java and eases the construction of process graphs. This pattern is also known as a fluent interface.

ClientProcessDefinition processDefinition = ProcessFactory.build("loan")
  .activity("submit loan request").initial().behaviour(new Display("loan request submitted"))
    .transition().to("evaluate")
  .activity("evaluate").behaviour(new WaitState())
    .transition("approve").to("wire money")
    .transition("reject").to("end")
  .activity("wire money").behaviour(new Display("wire the money"))
    .transition().to("archive")
  .activity("archive").behaviour(new WaitState())
    .transition().to("end")
  .activity("end").behaviour(new WaitState())
.done();

For more details about the ProcessFactory, see the api docs. An alternative for the ProcessFactory would be to create an XML language and an XML parser for expressing processes. The XML parser can then instantiate the classes of package org.jbpm.pvm.internal.model directly. That approach is typically taken by process languages.

The initial activity submit loan request and the activity wire money are automatic activities. In this example, the Display implementation of activity wire money just prints a message to the console. But the witty reader can imagine an alternative Activity implementation that uses the Java API of a payment processing library to make a real automatic payment.

A new execution for the process above can be started like this

ClientExecution execution = processDefinition.startProcessInstance();

When the startProcessInstance-method returns, the activity submit loan request will be executed and the execution will be positioned in the activity evaluate.


Now, the execution is at an interesting point. There are two transitions out of the state evaluate. One transition is called approve and one transition is called reject. As we explained above, the WaitState implementation will take the transition that corresponds to the signal that is given. Let's feed in the 'approve' signal like this:

execution.signal("approve");

The approve signal will cause the execution to take the approve transition and it will arrive in the activity wire money.

In activity wire money, the message will be printed to the console. Since the Display activity didn't invoke execution.waitForSignal(), nor any of the other execution propagation methods, the implicit proceed behaviour will just make the execution continue over the outgoing transition to activity archive, which is again a WaitState.


So the signal("approve") invocation only returns once the archive wait state is reached.

Another, default signal like this:

execution.signal();

will bring the execution to the end state, since the archive activity has only one, unnamed outgoing transition.


Events are points in the process definition to which a list of EventListeners can be subscribed.

public interface EventListener extends Serializable {
  
  void notify(EventListenerExecution execution) throws Exception;

}

The motivation for events is to allow developers to add programming logic to a process without changing the process diagram. This is a very valuable instrument in facilitating the collaboration between business analysts and developers. Business analysts are responsible for expressing the requirements. When they use a process graph to document those requirements, developers can take this diagram and make it executable. Events can be a very handy way to insert technical details into a process (like e.g. some database insert) in which the business analyst is not interested.

Most common events are fired by the execution automatically:

TODO: explain events in userguide

Events are identified by the combination of a process element and an event name. Users and process languages can also fire events programmatically with the fire method on the Execution:

public interface Execution extends Serializable {
  ...
  void fire(String eventName, ProcessElement eventSource);
  ...
}

A list of EventListeners can be associated to an event. But event listeners cannot influence the control flow of the execution, since they are merely listeners to an execution that is already in progress. This is different from activity behaviours: activity behaviour implementations are responsible for propagating the execution.

We'll create a PrintLn event listener which is very similar to the Display activity from above.

public class PrintLn implements EventListener {
  
  String message;
  
  public PrintLn(String message) {
    this.message = message;
  }

  public void notify(EventListenerExecution execution) throws Exception {
    System.out.println(message);
  }
}

Several PrintLn listeners will be subscribed to events in the process.


ClientProcessDefinition processDefinition = ProcessFactory.build()
  .activity("a").initial().behaviour(new AutomaticActivity())
    .event("end")
      .listener(new PrintLn("leaving a"))
      .listener(new PrintLn("second message while leaving a"))
    .transition().to("b")
      .listener(new PrintLn("taking transition"))
  .activity("b").behaviour(new WaitState())
    .event("start")
      .listener(new PrintLn("entering b"))
.done();

The first event shows how to register multiple listeners to the same event. They will be notified in the order as they are specified.

Then, on the transition, there is only one possible event type. So in that case, the event type doesn't need to be specified and the listeners can be added directly on the transition.

A listener will be called each time an execution fires the event to which the listener is subscribed. The execution is provided as a parameter and can be used by listeners, except for the methods that control the propagation of execution.

Events are by default propagated to enclosing process elements. The motivation is to allow listeners on process definitions or composite activities to be executed for all events that occur within that process element. For example, this feature allows an event listener for end events to be registered on a process definition or on a composite activity. Such a listener will be executed whenever that activity is left. And if the event listener is registered on a composite activity, it will also be executed for all activities that are left within that composite activity.

To show this clearly, we'll create a DisplaySource event listener that will print the message leaving and the source of the event to the console.

public class DisplaySource implements EventListener {
    
  public void notify(EventListenerExecution execution) {
    System.out.println("leaving "+execution.getEventSource());
  }
}

Note that event listeners are not meant to be visible, which is why the event listener itself is not displayed in the diagram. A DisplaySource event listener will be added as a listener to the event end on the composite activity.

The next process shows how the DisplaySource event listener is registered as a listener to the 'end' event on the composite activity:


TODO update code snippet

Next we'll start an execution.

ClientExecution execution = processDefinition.startProcessInstance();

After starting a new execution, the execution will be in activity a as that is the initial activity. No activities have been left so no message is logged. Next a signal will be given to the execution, causing it to take the transition from a to b.

execution.signal();

When the signal method returns, the execution will have taken the transition and the end event will be fired on activity a. That event will be propagated to the composite activity and to the process definition. Since our DisplaySource event listener is placed on the composite activity, it will receive the event and print the following message on the console:

leaving activity(a)

Another

execution.signal();

will take the transition from b to c. That will fire two end events: one on activity b and one on the composite activity. So the following lines will be appended to the console output:

leaving activity(b)
leaving activity(composite)

Event propagation is built on the hierarchical composition structure of the process definition. The top level element is always the process definition. The process definition contains a list of activities. Each activity can be a leaf activity or it can be a composite activity, which means that it contains a list of nested activities. Nested activities can be used for e.g. super states or composite activities in nested process languages like BPEL.

So the event model works the same for composite activities as it did for the process definition above. Suppose that 'Phase one' models a super state as in state machines. Then event propagation allows subscribing to all events within that super state. The idea is that the hierarchical composition corresponds to the diagram representation. If an element 'e' is drawn inside another element 'p', then p is the parent of e. A process definition has a set of top level activities. Every activity can have a set of nested activities. The parent of a transition is considered to be the first common parent of its source and destination.
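The propagation walk described above can be sketched without the PVM API. In this hypothetical miniature, fire notifies the listeners on the source element and then walks up the parent chain; a per-listener boolean stands in for the propagationDisabled() setting discussed next:

```java
import java.util.ArrayList;
import java.util.List;

public class PropagationDemo {

  public static List<String> log = new ArrayList<>();

  interface Listener { void notify(String eventSourceName); }

  static class Element {
    final String name;
    final Element parent;
    final List<Listener> listeners = new ArrayList<>();
    final List<Boolean> propagationEnabled = new ArrayList<>();

    Element(String name, Element parent) {
      this.name = name;
      this.parent = parent;
    }

    void addListener(Listener listener, boolean propagation) {
      listeners.add(listener);
      propagationEnabled.add(propagation);
    }
  }

  // fire notifies listeners on the source element, then walks up the
  // parent hierarchy; listeners with propagation disabled only see
  // events fired directly on their own element
  static void fire(String eventName, Element source) {
    for (Element element = source; element != null; element = element.parent) {
      boolean propagated = (element != source);
      for (int i = 0; i < element.listeners.size(); i++) {
        if (propagated && !element.propagationEnabled.get(i)) {
          continue;
        }
        element.listeners.get(i).notify(source.name);
      }
    }
  }

  public static void main(String[] args) {
    Element process = new Element("process", null);
    Element composite = new Element("composite", process);
    Element a = new Element("a", composite);

    composite.addListener(src -> log.add("leaving " + src), true);
    fire("end", a);          // propagated up: listener sees the event from 'a'
    fire("end", composite);  // fired directly on the composite itself
    log.forEach(System.out::println);
  }
}
```

In the real model, listeners are additionally keyed by event name on each process element; that detail is left out here to keep the propagation walk visible.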

If an event listener is not interested in propagated events, propagation can be disabled with propagationDisabled() while building the process with the ProcessFactory. The next process is the same process as above except that propagated events will be disabled on the event listener. The graph diagram remains the same.


Building the process with the process factory:

TODO update code snippet

So when the first signal is given for this process, again the end event will be fired on activity a, but now the event listener on the composite activity will not be executed because event propagation has been disabled. Disabling propagation is a property of the individual event listener and doesn't influence the other listeners. The event will always be fired and propagated over the whole parent hierarchy.

ClientExecution execution = processDefinition.startProcessInstance();

The first signal will take the process from a to b. No messages will be printed to the console.

execution.signal();

Next, the second signal will take the transition from b to c.

execution.signal();

Again two end events are fired just like above, on activities b and composite respectively. The first event is the end event on activity b. That event is propagated to the composite activity, but the event listener will not be executed for it because it has propagation disabled. The event listener will be executed, however, for the end event on the composite activity. That event is not propagated, but fired directly on the composite activity. So the event listener will be executed only once, for the composite activity, as shown in the following console output:

leaving activity(composite)

Above we already touched briefly on the main process constructs: activities, transitions and activity composition. This chapter explores in full all the possibilities of the process definition structures.

There are basically two forms of process languages: graph based and composite process languages. The Process Virtual Machine supports both. Graph based execution and activity composition can even be used in combination to implement something like UML super states. Furthermore, automatic functional activities can be implemented so that they can be used with transitions as well as with activity composition.


Next we'll show a series of example diagram structures that can be formed with the PVM process model.










This section explains how the Process Virtual Machine borrows the thread from the client to bring an execution from one wait state to another.

When a client invokes a method (like e.g. the signal method) on an execution, by default, the Process Virtual Machine will use that thread to progress the execution until it reaches a wait state. Once the next wait state has been reached, the method returns and the client gets the thread back. This is the default way for the Process Virtual Machine to operate. Two more levels of asynchronous execution complement this default behaviour: asynchronous continuations and the asynchronous command service.
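This thread-borrowing behaviour can be sketched without the PVM API. In this hypothetical miniature, signal() executes activities on the caller's thread, and returns only when one of them calls waitForSignal():

```java
import java.util.List;

public class ClientThreadDemo {

  interface Activity { void execute(Execution execution); }

  static class Execution {
    final List<Activity> activities;
    int position = 0;        // positioned in the initial wait state
    boolean waiting = true;

    Execution(List<Activity> activities) { this.activities = activities; }

    void waitForSignal() { waiting = true; }

    // interpret activities on the caller's thread until a wait state
    void signal() {
      waiting = false;
      while (!waiting && position < activities.size() - 1) {
        position++;
        activities.get(position).execute(this);
        // implicit proceed: if the activity did not call
        // waitForSignal(), the loop continues with the next one
      }
    }
  }

  static Execution demo() {
    Activity waitState = Execution::waitForSignal;
    Execution execution = new Execution(List.of(
        waitState,                           // wait 1 (initial)
        e -> System.out.println("one"),      // automatic 1
        e -> System.out.println("two"),      // automatic 2
        waitState));                         // wait 2
    execution.signal();  // prints one, two; returns positioned in wait 2
    return execution;
  }

  public static void main(String[] args) {
    System.out.println("stopped at position " + demo().position);
  }
}
```

The single while loop is the whole trick: the client's call stack is where the automatic activities run, which is why signal() blocks until the next wait state.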

The next process will show the basics concretely. It has three wait states and four automatic activities.


Here's how to build the process:

ClientProcessDefinition processDefinition = ProcessFactory.build("automatic")
    .activity("wait 1").initial().behaviour(new WaitState())
      .transition().to("automatic 1")
    .activity("automatic 1").behaviour(new Display("one"))
      .transition().to("wait 2")
    .activity("wait 2").behaviour(new WaitState())
      .transition().to("automatic 2")
    .activity("automatic 2").behaviour(new Display("two"))
      .transition().to("automatic 3")
    .activity("automatic 3").behaviour(new Display("three"))
      .transition().to("automatic 4")
    .activity("automatic 4").behaviour(new Display("four"))
      .transition().to("wait 3")
    .activity("wait 3").behaviour(new WaitState())
.done();

Let's walk through one execution of this process.

ClientExecution execution = processDefinition.startProcessInstance();

Starting a new execution means that the initial activity is executed. So if an automatic activity is the initial activity, this means that immediately the first unnamed outgoing transition is taken. This happens all inside of the invocation of startProcessInstance.

In this case however, the initial activity is a wait state. So the method startProcessInstance returns immediately and the execution will be positioned in the initial activity 'wait 1'.


Then an external trigger is given with the signal method.

execution.signal();

As explained above when introducing the WaitState, that signal will cause the default transition to be taken. The transition will move the execution to activity automatic 1 and execute it. The execute method of the Display activity in automatic 1 prints a line to the console and does not call execution.waitForSignal(). Therefore, the execution will proceed by taking the default transition out of automatic 1. At this stage, the signal method is still blocking. Another way to think about it is that the execution methods like signal will use the thread of the client to interpret the process definition until a wait state is reached.

Then the execution arrives in wait 2 and executes the WaitState activity. Its execute method will invoke execution.waitForSignal(), which will cause the signal method to return. That is when the thread is given back to the client that invoked the signal method.

So when the signal method returns, the execution is positioned in wait 2.


The execution now waits for an external trigger just as an object (more precisely an object graph) in memory, until the next external trigger is given with the signal method.

execution.signal();

This second invocation of signal will take the execution similarly all the way to wait 3 before it returns.


The benefit of this paradigm is that the same process definition can be executed in client execution mode (in-memory without persistence) as well as in persistent execution mode, depending on the application and on the environment.

When executing a process in persistent mode, this is how you typically want to bind that process execution to transactions of the database:


In most situations, the computational work that needs to be done as part of the process after an external trigger (the red pieces) is pretty minimal. A transaction combining the process execution and the processing of the request from the UI typically takes less than a second, whereas a wait state in a business process can span hours, days or even years. The key is to clearly identify where a wait state starts, so that only the computational work done before the start of that wait state is included in the transaction.

Think of it this way: "When an approval arrives, what is all the automated processing that needs to be done before the process system needs to wait for another external trigger?" Unless pdf's need to be generated or mass emails need to be sent, the amount of time that this takes is usually negligible. That is why in the default persistent execution mode, the process work is executed in the thread of the client.

This reasoning even holds in the case of concurrent paths of execution. When a single path of execution splits into concurrent paths of execution, the process overhead of calculating that is negligible. So that is why it makes sense for a fork or split activity implementation that targets persistent execution mode to spawn the concurrent paths sequentially in the same thread. Basically it's all just computational work as part of the same transaction. This can only be done because the fork/split knows that each concurrent path of execution will return whenever a wait state is encountered.

Since this is a difficult concept to grasp, let's explain it again in other words. Look at it from the perspective of the overhead produced by the process execution itself in persistent execution mode. If, in a transaction, an execution is given an external trigger that causes the execution to split into multiple concurrent paths of execution, then the process overhead of calculating this is negligible. The overhead of the generated SQL is also negligible. And since all the work done in the concurrent branches must be done inside that single transaction, there is typically no point in having fork/split implementations spawn the concurrent paths of execution in multiple threads.

To make executable processes, developers need to know exactly what the automatic activities are, what the wait states are and which threads will be allocated to the process execution. For business analysts that draw the analysis process, things are a bit simpler. For the activities they draw, they usually know whether it's a human or a system that is responsible. But they typically don't know how this translates to threads and transactions.

So for the developer, the first job is to analyse what needs to be executed within the thread of control of the process and what is outside. Looking for the external triggers can be a good start to find the wait states in a process, just like verbs and nouns can be the rule of thumb in building UML class diagrams.

To model process concurrency, there is a parent-child tree structure on the execution. The idea is that the main path of execution is the root of that tree. The main path of execution is also called the process instance. It is the execution that is created when starting or creating a new process instance for a given process definition.

Now, because the main path of execution is the same object as the process instance, this keeps the usage simple in case of simple processes without concurrency.


To establish multiple concurrent paths of execution, activity implementations like a fork or split can create child executions with method ActivityExecution.createExecution. Activity implementations like join or merge can stop these concurrent paths of execution by calling method stop on the concurrent execution.

Only leaf executions can be active. Non-leaf executions should be inactive. This tree structure of executions doesn't enforce a particular type of concurrency or join behaviour. It's up to the forks or and-splits and to the joins or and-merges to use the execution tree structure in any way they want to define the desired concurrency behaviour. Here you see an example of concurrent executions.
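The execution tree can be sketched in plain Java. In this hypothetical miniature (not the PVM API), a fork deactivates the parent and creates child executions with createExecution, and a join resumes the parent once all concurrent paths have arrived:

```java
import java.util.ArrayList;
import java.util.List;

public class ConcurrencyDemo {

  static class Execution {
    String activity;
    boolean active = true;
    Execution parent;
    List<Execution> children = new ArrayList<>();

    Execution createExecution(String activity) {
      Execution child = new Execution();
      child.activity = activity;
      child.parent = this;
      children.add(child);
      return child;
    }
  }

  // fork: the parent becomes inactive, the children carry the work
  static void fork(Execution parent, String... activities) {
    parent.active = false;
    for (String activity : activities) {
      parent.createExecution(activity);
    }
  }

  // join: deactivate the arriving child; when every child has
  // arrived, remove them and resume the parent in the next activity
  static void join(Execution child, String nextActivity) {
    child.active = false;
    Execution parent = child.parent;
    for (Execution c : parent.children) {
      if (c.active) {
        return;  // still waiting for the other concurrent paths
      }
    }
    parent.children.clear();
    parent.active = true;
    parent.activity = nextActivity;
  }

  public static void main(String[] args) {
    Execution main = new Execution();
    fork(main, "bill", "ship");
    System.out.println(main.active);  // false: only leaves are active
    join(main.children.get(0), "archive");
    System.out.println(main.active);  // false: 'ship' has not arrived yet
    join(main.children.get(1), "archive");
    System.out.println(main.active + " " + main.activity);  // true archive
  }
}
```

Note how the tree itself carries no join semantics; it is this join method, not the data structure, that decides when the main path resumes.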


There is a billing and a shipping path of execution. In this case, the flat bar activities represent activities that fork and join. The figure shows three executions. The main path of execution is inactive (represented as gray) and the billing and shipping paths of execution are active and point to the activities bill and ship respectively.

It's up to the activity behaviour implementations how they want to use this execution structure. Suppose that multiple tasks have to be completed before the execution is to proceed. The activity behaviour can spawn a series of child executions for this. Or alternatively, the task component could support task groups that are associated to one single execution. In that case, the task component becomes responsible for synchronizing the tasks, thereby moving this responsibility outside the scope of the execution tree structure.

In all the code that is associated to a process, like Activities, EventListeners and Conditions, it's possible to associate exception handlers. This can be thought of as including try-catch blocks in the methods of those implementations. But in order to build more reusable building blocks for both the delegation classes and the exception handling logic, exception handlers are added to the core process model.

An exception handler can be associated to any process element. When an exception occurs in a delegation class, a matching exception handler will be searched for. If such an exception handler is found, it will get a chance to handle the exception.

If an exception handler completes without problems, then the exception is considered handled and the execution resumes right after the delegation code that was called. For example, if a transition has three actions and the second action throws an exception that is handled by an exception handler, then the third action is still executed after the exception handler completes.

Writing automatic activities that are exception handler aware is easy. The default is to proceed anyway. No method needs to be called on the execution. So if an automatic activity throws an exception that is handled by an exception handler, the execution will just proceed after that activity. It becomes a bit more difficult for control flow activities. They might have to include try-finally blocks to invoke the proper methods on the execution before an exception handler gets a chance to handle the exception. For example, if an activity is a wait state and an exception occurs, then there is a risk that the thread jumps over the invocation of execution.waitForSignal(), causing the execution to proceed after the activity.
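The resume-after-the-delegation behaviour can be sketched without the PVM API. In this hypothetical miniature, a handled exception does not abort the remaining actions on a transition:

```java
import java.util.List;
import java.util.function.Consumer;

public class ExceptionHandlerDemo {

  interface Action { void execute() throws Exception; }

  // runs each action in order; a handled exception does not abort
  // the rest of the list: execution resumes with the next action
  static void executeActions(List<Action> actions, Consumer<Exception> exceptionHandler) {
    for (Action action : actions) {
      try {
        action.execute();
      } catch (Exception e) {
        // handler completes normally -> exception considered handled
        exceptionHandler.accept(e);
      }
    }
  }

  public static void main(String[] args) {
    executeActions(List.of(
        () -> System.out.println("action 1"),
        () -> { throw new Exception("boom"); },
        () -> System.out.println("action 3")
    ), e -> System.out.println("handled " + e.getMessage()));
  }
}
```

If the handler itself threw, the exception would propagate out of executeActions and abort the remaining actions, which mirrors an exception handler that does not complete without problems.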

TODO: exceptionhandler.isRethrowMasked

TODO: transactional exception handlers

TODO: we never catch errors

The state of an execution is either active or locked. An active execution is either executing or waiting for an external trigger. If an execution is not in STATE_ACTIVE, then it is locked. A locked execution is read only and cannot receive any external triggers.

When a new execution is created, it is in STATE_ACTIVE. To change the state to a locked state, use lock(String). Some STATE_* constants are provided that represent the most commonly used locked states. But the state '...' in the picture indicates that any string can be provided as the state in the lock method.


If an execution is locked, methods that change the execution will throw a PvmException and the message will reference the actual locking state. Firing events, updating variables, updating priority and adding comments are not considered to change an execution. Also creation and removal of child executions are unchecked, which means that those methods can be invoked by external API clients and activity behaviour methods, even while the execution is in a locked state.

Make sure that comparisons between getState() and the STATE_* constants are done with .equals and not with '==' because if executions are loaded from persistent storage, a new string is created instead of the constants.
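The pitfall can be demonstrated in a few lines of plain Java (STATE_ACTIVE here is a stand-in constant, not the real PVM one):

```java
public class StateCompareDemo {

  // stand-in for the real STATE_ACTIVE constant
  public static final String STATE_ACTIVE = "active";

  public static void main(String[] args) {
    // simulates a state string materialized by the persistence layer:
    // same characters, but a different String object than the constant
    String loadedState = new String("active");

    System.out.println(loadedState == STATE_ACTIVE);       // false: different objects
    System.out.println(loadedState.equals(STATE_ACTIVE));  // true: same characters
  }
}
```

Identity comparison with '==' only happens to work for in-memory executions, where the constant itself was assigned; .equals works in both execution modes.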

An execution implementation will be locked:

  • When it is ended
  • When it is suspended
  • During asynchronous continuations

Furthermore, locking can be used by Activity implementations to make executions read only during wait states, when responsibility for the execution is transferred to an external entity such as:

  • A human task
  • A service invocation
  • A wait state that ends when a scanner detects that a file appears

In these situations the strategy is that the external entity should get full control over the execution because it wants to control what is allowed and what is not. To get that control, it locks the execution so that all interactions have to go through the external entity.

One of the main reasons to create external entities is that they can live on after the execution has already proceeded. For example, in the case of a service invocation, a timer could cause the execution to take the timeout transition. When the response arrives after the timeout, the service invocation entity should make sure it doesn't signal the execution. So the service invocation can be seen as an activity instance, unique for every execution of the activity.

External entities themselves are responsible for managing the execution lock. If the timers and client applications are consistent in addressing the external entities instead of the execution directly, then locking is in theory unnecessary. It's up to the activity behaviour implementations whether they want to take on the overhead of locking and unlocking.
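The locking strategy can be sketched without the PVM API. In this hypothetical miniature, IllegalStateException stands in for PvmException; an external entity such as a human task locks the execution, so direct signals are refused until the entity unlocks it:

```java
public class LockDemo {

  static class Execution {
    String state = "active";

    void lock(String lockState) { state = lockState; }

    void unlock() { state = "active"; }

    // state-changing methods are refused while the execution is locked
    void signal() {
      if (!"active".equals(state)) {
        throw new IllegalStateException("execution is locked: " + state);
      }
      System.out.println("signal processed");
    }
  }

  public static void main(String[] args) {
    Execution execution = new Execution();
    execution.lock("task");    // a human task takes control
    try {
      execution.signal();      // direct signal is refused
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
    execution.unlock();        // the task completes and returns control
    execution.signal();
  }
}
```

All triggers now have to go through the task, which unlocks the execution before propagating it, exactly the funnelling effect the locking strategy is after.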

TODO

TODO

TODO