Welcome
1. Introduction
1.1. Introduction
KIE (Knowledge Is Everything) is an umbrella project introduced to bring our related technologies together under one roof. It also acts as the core shared between our projects.
KIE contains the following different but related projects offering a complete portfolio of solutions for business automation and management:
-
Drools is a business-rule management system with a forward-chaining and backward-chaining inference-based rules engine, allowing fast and reliable evaluation of business rules and complex event processing. A rules engine is also a fundamental building block to create an expert system which, in artificial intelligence, is a computer system that emulates the decision-making ability of a human expert.
-
jBPM is a flexible Business Process Management suite allowing you to model your business goals by describing the steps that need to be executed to achieve those goals.
-
OptaPlanner is a constraint solver that optimizes use cases such as employee rostering, vehicle routing, task assignment and cloud optimization.
-
Business Central is a full-featured web application for the visual composition of custom business rules and processes.
-
UberFire is a web-based workbench framework inspired by Eclipse Rich Client Platform.
The 7.x series will follow a more agile approach with more regular and iterative releases. We plan to make bigger changes than is usual for a series of minor releases, and users should be aware of these before adopting.
-
UI sections and links will become object oriented, rather than task oriented. https://en.wikipedia.org/wiki/Object-oriented_user_interface
-
Authoring/Library will become project oriented, rather than repository oriented. You’ll create, browse and open projects rather than repositories. The repository concept will be pushed lower; for instance, it’ll be created automatically when you create the project.
-
The old form modeller will be removed and only the new one made available, although old forms will continue to render.
-
The new designer will continue to mature with more nodes and improved UXD. Eventually it’ll become the default editor, but we will not remove the old one until there is feature parity in BPMN2 support.
-
Continued UXD improvements in lots of places.
-
We will introduce the AppFormer project. This will be a re-org and consolidation of existing projects and will result in some artifact renames. UberFire will become AppFormer-Core; forms, the data modeller and dashbuilder will come under AppFormer. Dashbuilder will most likely be called AppFormer-Insight.
The 8.x series will come towards the end of this year. We have ongoing parallel work to introduce the concept of workspaces with improved git support, which will have a built-in workflow for forking and pull requests. This will be combined with horizontal scaling and improved high availability. These changes are important for usability and cloud scalability, but too much of a change for a minor release, hence the bump to 8.x.
1.2. Getting Involved
We are often asked "How do I get involved". Luckily the answer is simple, just write some code and submit it :) There are no hoops you have to jump through or secret handshakes. We have a very minimal "overhead" that we do request to allow for scalable project development. Below we provide a general overview of the tools and "workflow" we request, along with some general advice.
If you contribute some good work, don’t forget to blog about it :)
1.2.1. Sign up to jboss.org
Signing up to jboss.org will give you access to the JBoss wiki, forums and JIRA. Go to https://www.jboss.org/ and click "Register".
1.2.2. Sign the Contributor Agreement
The only form you need to sign is the contributor agreement, which is fully automated via the web. As the image below says "This establishes the terms and conditions for your contributions and ensures that source code can be licensed appropriately"
1.2.3. Submitting issues via JIRA
To be able to interact with the core development team you will need to use JIRA, the issue tracker. This ensures that all requests are logged and allocated to a release schedule, and that all discussions are captured in one place. Bug reports, bug fixes, feature requests and feature submissions should all go here. General questions should be asked on the mailing lists.
Minor code submissions, like format or documentation fixes do not need an associated JIRA issue created.
1.2.4. Fork GitHub
With the contributor agreement signed and your requests submitted to JIRA, you should now be ready to code :) Create a GitHub account and fork any of the Drools, jBPM or Guvnor repositories. The fork will create a copy in your own GitHub space which you can work on at your own pace. If you make a mistake, don’t worry: blow it away and fork again. Note that each GitHub repository provides the clone (checkout) URL; GitHub will provide you URLs specific to your fork.
1.2.5. Writing Tests
When writing tests, try to keep them minimal and self-contained. We prefer to keep the DRL fragments within the test, as it makes for quicker reviewing. If there are a large number of rules, using a String is not practical, so by all means place them in separate DRL files instead, to be loaded from the classpath. If your tests need to use a model, please try to use those that already exist for other unit tests, such as Person, Cheese or Order. If no classes exist that have the fields you need, try to update fields of existing classes before adding a new class.
There are a vast number of tests to look over to get an idea; MiscTest is a good place to start.
1.2.6. Commit with Correct Conventions
When you commit, make sure you use the correct conventions. The commit must start with the JIRA issue id, such as DROOLS-1946. This ensures the commits are cross referenced via JIRA, so we can see all commits for a given issue in the same place. After the id the title of the issue should come next. Then use a newline, indented with a dash, to provide additional information related to this commit. Use an additional new line and dash for each separate point you wish to make. You may add additional JIRA cross references to the same commit, if it’s appropriate. In general try to avoid combining unrelated issues in the same commit.
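For illustration, a commit message following these conventions might look like the example below (the title and bullet points are placeholders, not the real content of the referenced issue):

DROOLS-1946 Title of the JIRA issue copied verbatim
 - describe the first change made in this commit
 - describe a second, related point if needed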
Don’t forget to rebase your local fork from the original master and then push your commits back to your fork.
1.2.7. Submit Pull Requests
With your code rebased from original master and pushed to your personal GitHub area, you can now submit your work as a pull request. If you look at the top of the page in GitHub for your work area there will be a "Pull Request" button. Selecting this will provide a GUI to automate the submission of your pull request.
The pull request then goes into a queue for everyone to see and comment on. Below you can see a typical pull request. Pull requests allow for discussions and show all associated commits and the diffs for each commit. The discussions typically involve code reviews which provide helpful suggestions for improvements, and allow us to leave inline comments on specific parts of the code. Don’t be disheartened if we don’t merge straight away; it can often take several revisions before we accept a pull request. Luckily GitHub makes it trivial to go back to your code, do some more commits and then update your pull request to your latest and greatest.
It can take time for us to get round to responding to pull requests, so please be patient. Submitted tests that come with a fix will generally be applied quite quickly, whereas tests submitted on their own will often wait until we have time to also submit an accompanying fix. Don’t forget to rebase and resubmit your request from time to time, otherwise over time it will accumulate merge conflicts and core developers will generally ignore those.
1.3. Installation and Setup (Core and IDE)
1.3.1. Installing and using
Drools provides an Eclipse-based IDE (which is optional), but at its core only Java 1.5 (Java SE) is required.
A simple way to get started is to download and install the Eclipse plug-in - this will also require the Eclipse GEF framework to be installed (see below, if you don’t have it installed already). This will provide you with all the dependencies you need to get going: you can simply create a new rule project and everything will be done for you. Refer to the chapter on Business Central and IDE for detailed instructions on this. Installing the Eclipse plug-in is generally as simple as unzipping a file into your Eclipse plug-in directory.
Use of the Eclipse plug-in is not required. Rule files are just textual input (or spreadsheets as the case may be) and the IDE (also known as Business Central) is just a convenience. People have integrated the Drools engine in many ways, there is no "one size fits all".
Alternatively, you can download the binary distribution, and include the relevant JARs in your project’s classpath.
1.3.1.1. Dependencies and JARs
Drools is broken down into a few modules, some are required during rule development/compiling, and some are required at runtime. In many cases, people will simply want to include all the dependencies at runtime, and this is fine. It allows you to have the most flexibility. However, some may prefer to have their "runtime" stripped down to the bare minimum, as they will be deploying rules in binary form - this is also possible. The core Drools engine can be quite compact, and only requires a few hundred kilobytes across three JAR files.
The following is a description of the important libraries that make up JBoss Drools:
-
knowledge-api.jar - this provides the interfaces and factories. It also helps clearly show what is intended as a user API and what is just an engine API.
-
knowledge-internal-api.jar - this provides internal interfaces and factories.
-
drools-core.jar - this is the core Drools engine, runtime component. Contains both the RETE engine and the LEAPS engine. This is the only runtime dependency if you are pre-compiling rules (and deploying via Package or RuleBase objects).
-
drools-compiler.jar - this contains the compiler/builder components to take rule source, and build executable rule bases. This is often a runtime dependency of your application, but it need not be if you are pre-compiling your rules. This depends on drools-core.
-
drools-jsr94.jar - this is the JSR-94 compliant implementation, this is essentially a layer over the drools-compiler component. Note that due to the nature of the JSR-94 specification, not all features are easily exposed via this interface. In some cases, it will be easier to go direct to the Drools API, but in some environments the JSR-94 is mandated.
-
drools-decisiontables.jar - this is the decision tables 'compiler' component, which uses the drools-compiler component. This supports both Excel and CSV input formats.
There are quite a few other dependencies which the above components require, most of which are for the drools-compiler, drools-jsr94 or drools-decisiontables module. Some key ones to note are "POI" which provides the spreadsheet parsing ability, and "antlr" which provides the parsing for the rule language itself.
If you are using Drools in J2EE or servlet containers and you come across classpath issues with "JDT", you can switch to the Janino compiler. Set the system property "drools.compiler", for example: -Ddrools.compiler=JANINO.
For up to date info on dependencies in a release, consult the released POMs, which can be found on the Maven repository.
1.3.1.2. Use with Maven, Gradle, Ivy, Buildr or Ant
The JARs are also available in the central Maven repository (and also in the JBoss Maven repository at https://repository.jboss.org/nexus/index.html).
If you use Maven, add KIE and Drools dependencies in your project’s pom.xml like this:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-bom</artifactId>
<type>pom</type>
<version>...</version>
<scope>import</scope>
</dependency>
...
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.kie</groupId>
<artifactId>kie-api</artifactId>
</dependency>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-compiler</artifactId>
<scope>runtime</scope>
</dependency>
...
</dependencies>
This is similar for Gradle, Ivy and Buildr. To identify the latest version, check the Maven repository.
If you’re still using Ant (without Ivy), copy all the JARs from the download zip’s binaries directory and manually verify that your classpath doesn’t contain duplicate JARs.
1.3.1.3. Runtime
The "runtime" requirements mentioned here are if you are deploying rules as their binary form (either as KnowledgePackage objects, or KnowledgeBase objects etc). This is an optional feature that allows you to keep your runtime very light. You may use drools-compiler to produce rule packages "out of process", and then deploy them to a runtime system. This runtime system only requires drools-core.jar and knowledge-api for execution. This is an optional deployment pattern, and many people do not need to "trim" their application this much, but it is an ideal option for certain environments.
1.3.1.4. Installing IDE (Rule Workbench)
The rule workbench (for Eclipse) requires that you have Eclipse 3.4 or greater, as well as Eclipse GEF 3.4 or greater. You can install it either by downloading the plug-in or using the update site.
Another option is to use the JBoss IDE, which comes with all the plug-in requirements pre-packaged, as well as a choice of other tools separate from rules. You can choose to install just the rules plug-ins from the "bundle" that JBoss IDE ships with.
Installing GEF (a required dependency)
GEF is the Eclipse Graphical Editing Framework, which is used for graph viewing components in the plug-in.
If you don’t have GEF installed, you can install it using the built-in update mechanism (downloading GEF directly from the Eclipse.org website is not recommended). JBoss IDE has GEF already, as do many other "distributions" of Eclipse, so this step may be redundant for some people.
Open the Help→Software updates…→Available Software→Add Site… from the help menu. Location is:
http://download.eclipse.org/tools/gef/updates/releases/
Next you choose the GEF plug-in:
Press next, and agree to install the plug-in (an Eclipse restart may be required). Once this is completed, then you can continue on installing the rules plug-in.
Installing GEF from zip file
To install from the zip file, download and unzip the file. Inside the zip you will see a plug-in directory, and the plug-in JAR itself. Place the plug-in JAR into your Eclipse application's plug-in directory, and restart Eclipse.
Installing Drools plug-in from zip file
Download the Drools Eclipse IDE plugin from the link below. Unzip the downloaded file in your main Eclipse folder (do not just copy the file there; extract it so that the feature and plugin JARs end up in the features and plugins directories of Eclipse) and (re)start Eclipse.
To check that the installation was successful, try opening the Drools perspective: click the 'Open Perspective' button in the top right corner of your Eclipse window, select 'Other…' and pick the Drools perspective. If you cannot find the Drools perspective as one of the possible perspectives, the installation probably was unsuccessful. Check whether you executed each of the required steps correctly: Do you have the right version of Eclipse (3.4.x)? Do you have Eclipse GEF installed (check whether the org.eclipse.gef_3.4..jar exists in the plugins directory in your Eclipse root folder)? Did you extract the Drools Eclipse plugin correctly (check whether the org.drools.eclipse_.jar exists in the plugins directory in your Eclipse root folder)? If you cannot find the problem, try contacting us (e.g. on IRC or on the user mailing list); more info can be found on our homepage here:
Drools Runtimes
A Drools runtime is a collection of JARs on your file system that represent one specific release of the Drools project JARs. To create a runtime, you must point the IDE to the release of your choice. If you want to create a new runtime based on the latest Drools project JARs included in the plugin itself, you can also easily do that. You are required to specify a default Drools runtime for your Eclipse workspace, but each individual project can override the default and select the appropriate runtime for that project specifically.
You are required to define one or more Drools runtimes using the Eclipse preferences view. To open your preferences, in the Window menu select the Preferences menu item. A new preferences dialog should show all your preferences. On the left side of this dialog, under the Drools category, select "Installed Drools runtimes". The panel on the right should then show the currently defined Drools runtimes. If you have not yet defined any runtimes, it should look something like the figure below.
To define a new Drools runtime, click on the add button. A dialog as shown below should pop up, requiring the name for your runtime and the location on your file system where it can be found.
In general, you have two options:
-
If you simply want to use the default JARs as included in the Drools Eclipse plugin, you can create a new Drools runtime automatically by clicking the "Create a new Drools 5 runtime …" button. A file browser will show up, asking you to select the folder on your file system where you want this runtime to be created. The plugin will then automatically copy all required dependencies to the specified folder. After selecting this folder, the dialog should look like the figure shown below.
-
If you want to use one specific release of the Drools project, you should create a folder on your file system that contains all the necessary Drools libraries and dependencies. Instead of creating a new Drools runtime as explained above, give your runtime a name and select the location of this folder containing all the required JARs.
After clicking the OK button, the runtime should show up in your table of installed Drools runtimes, as shown below. Click the checkbox in front of the newly created runtime to make it the default Drools runtime. The default Drools runtime will be used as the runtime of all your Drools projects that have not selected a project-specific runtime.
You can add as many Drools runtimes as you need. For example, the screenshot below shows a configuration where three runtimes have been defined: a Drools 4.0.7 runtime, a Drools 5.0.0 runtime and a Drools 5.0.0.SNAPSHOT runtime. The Drools 5.0.0 runtime is selected as the default one.
Note that you will need to restart Eclipse if you changed the default runtime and you want to make sure that all the projects that are using the default runtime update their classpath accordingly.
Whenever you create a Drools project (using the New Drools Project wizard or by converting an existing Java project to a Drools project using the "Convert to Drools Project" action that is shown when you are in the Drools perspective and you right-click an existing Java project), the plugin will automatically add all the required JARs to the classpath of your project.
When creating a new Drools project, the plugin will automatically use the default Drools runtime for that project, unless you specify a project-specific one. You can do this in the final step of the New Drools Project wizard, as shown below, by deselecting the "Use default Drools runtime" checkbox and selecting the appropriate runtime in the drop-down box. If you click the "Configure workspace settings …" link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there.
You can change the runtime of a Drools project at any time by opening the project properties (right-click the project and select Properties) and selecting the Drools category, as shown below. Check the "Enable project specific settings" checkbox and select the appropriate runtime from the drop-down box. If you click the "Configure workspace settings …" link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there. If you deselect the "Enable project specific settings" checkbox, it will use the default runtime as defined in your global preferences.
1.3.2. Building from source
1.3.2.1. Getting the sources
The source code of each Maven artifact is available in the JBoss Maven repository as a source JAR. The same source JARs are also included in the download zips. However, if you want to build from source, it’s highly recommended to get our sources from our source control.
Git allows you to fork our code, independently make personal changes on it, yet still merge in our latest changes regularly and optionally share your changes with us. To learn more about git, read the free book Pro Git.
1.3.2.2. Building the sources
In essence, building from source is very easy; for example, if you want to build the guvnor project:
$ git clone git@github.com:kiegroup/guvnor.git
...
$ cd guvnor
$ mvn clean install -DskipTests -Dfull
...
However, there are a lot of potential pitfalls, so if you’re serious about building from source and possibly contributing to the project, follow the instructions in the README file in droolsjbpm-build-bootstrap.
1.3.3. Eclipse
1.3.3.1. Importing Eclipse Projects
With the Eclipse project files generated, they can now be imported into Eclipse. When starting Eclipse, open the workspace in the root of your source checkout.
When calling mvn install, all the project dependencies were downloaded and added to the local Maven repository. Eclipse cannot find those dependencies unless you tell it where that repository is. To do this, set up an M2_REPO classpath variable.
KIE
KIE is the shared core for Drools and jBPM. It provides a unified methodology and programming model for building, deploying and utilizing resources.
2. KIE
2.1. Overview
2.1.1. Anatomy of Projects
The work of researching an integrated knowledge solution for Drools and jBPM has simply used the "kiegroup" group name. This name permeates GitHub accounts and Maven POMs. As scopes broadened and new projects were spun off, KIE, an acronym for Knowledge Is Everything, was chosen as the new group name. The KIE name is also used for the shared aspects of the system, such as the unified build, deploy and utilization.
KIE currently consists of the following subprojects:
OptaPlanner, a local search and optimization tool, has been spun off from Drools Planner and is now a top-level project alongside Drools and jBPM. This was a natural evolution as OptaPlanner, while having strong Drools integration, has long been independent of Drools.
From the Polymita acquisition, along with other things, comes the Dashboard Builder, which provides powerful reporting capabilities. Dashboard Builder is currently a temporary name and after the 6.0 release a new name will be chosen. Dashboard Builder is completely independent of Drools and jBPM and will be used by many projects at JBoss, and hopefully outside of JBoss :)
UberFire is the new base Business Central project, spun off from the ground up rewrite. UberFire provides Eclipse-like workbench capabilities, with panels and pages from plugins. The project is independent of Drools and jBPM and anyone can use it as a basis of building flexible and powerful workbenches like Business Central. UberFire will be used for console and workbench development throughout JBoss.
It was determined that the Guvnor brand leaked too much from its intended role; for example, authoring metaphors like Decision Tables were considered Guvnor components instead of Drools components. This wasn’t helped by the monolithic project structure used in 5.x for Guvnor. In 6.0 Guvnor’s focus has been narrowed to encapsulate the set of UberFire plugins that provide the basis for building a web based IDE, such as Maven integration for building and deploying, management of Maven repositories and activity notifications via inboxes. Drools and jBPM build Business Central distributions using UberFire as the base, including a set of plugins, such as Guvnor, along with their own plugins for things like decision tables, guided editors, the BPMN2 designer and human tasks. Business Central is called business-central.
KIE-WB is the uber workbench that combines all the Guvnor, Drools and jBPM plugins. The jBPM-WB is ghosted out, as it doesn’t actually exist, being made redundant by KIE-WB.
2.1.2. Lifecycles
The different aspects, or life cycles, of working with a KIE system, whether it’s Drools or jBPM, can typically be broken down into the following:
-
Author
-
Authoring of knowledge using a UI metaphor, such as: DRL, BPMN2, decision table, class models.
-
-
Build
-
Builds the authored knowledge into deployable units.
-
For KIE this unit is a JAR.
-
-
Test
-
Test KIE knowledge before it’s deployed to the application.
-
-
Deploy
-
Deploys the unit to a location where applications may utilize (consume) them.
-
KIE uses a Maven-style repository.
-
-
Utilize
-
The loading of a JAR to provide a KIE session (KieSession), with which the application can interact.
-
KIE exposes the JAR at runtime via a KIE container (KieContainer).
-
KieSessions, which the runtime interacts with, are created from the KieContainer.
-
-
Run
-
System interaction with the KieSession, via API.
-
-
Work
-
User interaction with the KieSession, via command line or UI.
-
-
Manage
-
Manage any KieSession or KieContainer.
-
2.1.3. Installation environment options for Drools
With Drools, you can set up a development environment to develop business applications, a runtime environment to run those applications to support decisions, or both.
-
Development environment: Typically consists of one Business Central installation and at least one KIE Server installation. You can use Business Central to design decisions and other artifacts, and you can use KIE Server to execute and test the artifacts that you created.
-
Runtime environment: Consists of one or more KIE Server instances with or without Business Central. Business Central has an embedded Drools controller. If you install Business Central, use the Menu → Deploy → Execution servers page to create and maintain containers. If you want to automate KIE Server management without Business Central, you can use the headless Drools controller.
You can also cluster both development and runtime environments. A clustered development or runtime environment consists of a unified group or "cluster" of two or more servers. The primary benefit of clustering Drools development environments is high availability and enhanced collaboration, while the primary benefit of clustering Drools runtime environments is high availability and load balancing. High availability decreases the chance of a loss of data when a single server fails. When a server fails, another server fills the gap by providing a copy of the data that was on the failed server. When the failed server comes online again, it resumes its place in the cluster. Load balancing shares the computing load across the nodes of the cluster to improve the overall performance.
Clustering of the runtime environment is currently supported on Red Hat JBoss EAP 7.2 only. Clustering of Business Central is currently a Technology Preview feature that is not yet intended for production use.
2.1.4. Decision-authoring assets in Drools
Drools supports several assets that you can use to define business decisions for your decision service. Each decision-authoring asset has different advantages, and you might prefer to use one or a combination of multiple assets depending on your goals and needs.
The following table highlights the main decision-authoring assets supported in Drools projects to help you decide or confirm the best method for defining decisions in your decision service.
Asset | Authoring tools |
---|---|
Decision Model and Notation (DMN) models | Business Central or other DMN-compliant editor |
Guided decision tables | Business Central |
Spreadsheet decision tables | Spreadsheet editor |
Guided rules | Business Central |
Guided rule templates | Business Central |
DRL rules | Business Central or integrated development environment (IDE) |
Predictive Model Markup Language (PMML) models | PMML or XML editor |
2.1.5. Project storage and build options with Drools
As you develop a Drools project, you need to be able to track the versions of your project with a version-controlled repository, manage your project assets in a stable environment, and build your project for testing and deployment. You can use Business Central for all of these tasks, or use a combination of Business Central and external tools and repositories. Drools supports Git repositories for project version control, Apache Maven for project management, and a variety of Maven-based, Java-based, or custom-tool-based build options.
The following options are the main methods for Drools project versioning, storage, and building:
Versioning option | Description | Documentation |
---|---|---|
Business Central Git VFS | Business Central contains a built-in Git Virtual File System (VFS) that stores all processes, rules, and other artifacts that you create in the authoring environment. Git is a distributed version control system that implements revisions as commit objects. When you commit your changes into a repository, a new commit object in the Git repository is created. When you create a project in Business Central, the project is added to the Git repository connected to Business Central. | NA |
External Git repository | If you have Drools projects in Git repositories outside of Business Central, you can import them into Drools spaces and use Git hooks to synchronize the internal and external Git repositories. | NA |
Management option | Description | Documentation |
---|---|---|
Business Central Maven repository | Business Central contains a built-in Maven repository that organizes and builds project assets that you create in the authoring environment. Maven is a distributed build-automation tool that uses repositories to store Java libraries, plug-ins, and other build artifacts. When building projects and archetypes, Maven dynamically retrieves Java libraries and Maven plug-ins from local or remote repositories to promote shared dependencies across projects. | |
External Maven repository | If you have Drools projects in an external Maven repository, such as Nexus or Artifactory, you can create a | |
Build option | Description | Documentation |
---|---|---|
Business Central (KJAR) | Business Central builds Drools projects stored in either the built-in Maven repository or a configured external Maven repository. Projects in Business Central are packaged automatically as knowledge JAR (KJAR) files with all components needed for deployment when you build the projects. | |
Standalone Maven project (KJAR) | If you have a standalone Drools Maven project outside of Business Central, you can edit the project | |
Embedded Java application (KJAR) | If you have an embedded Java application from which you want to build your Drools project, you can use a | |
CI/CD tool (KJAR) | If you use a tool for continuous integration and continuous delivery (CI/CD), you can configure the tool set to integrate with your Drools Git repositories to build a specified project. Ensure that your projects are packaged and built as KJAR files to ensure optimal deployment. | NA |
2.1.6. Project deployment options with Drools
After you develop, test, and build your Drools project, you can deploy the project to begin using the business assets you have created. You can deploy a Drools project to a configured KIE Server, to an embedded Java application, or into a Red Hat OpenShift Container Platform environment for an enhanced containerized implementation.
The following options are the main methods for Drools project deployment:
Deployment option | Description | Documentation |
---|---|---|
Deployment to KIE Server | KIE Server is the server provided with Drools that runs the decision services, process applications, and other deployable assets from a packaged and deployed Drools project (KJAR file). These services are consumed at run time through an instantiated KIE container, or deployment unit. You can deploy and maintain deployment units in KIE Server using Business Central or using a headless Drools controller with its associated REST API (considered a managed KIE Server instance). You can also deploy and maintain deployment units using the KIE Server REST API or Java client API from a standalone Maven project, an embedded Java application, or other custom environment (considered an unmanaged KIE Server instance). | |
Deployment to an embedded Java application | If you want to deploy Drools projects to your own Java virtual machine (JVM) environment, microservice, or application server, you can bundle the application resources in the project WAR files to create a deployment unit similar to a KIE container. You can also use the core KIE APIs (not KIE Server APIs) to configure a KIE scanner to periodically update KIE containers. | |
2.1.7. Asset execution options with Drools
After you build and deploy your Drools project to KIE Server or other environment, you can execute the deployed assets for testing or for runtime consumption. You can also execute assets locally in addition to or instead of executing them after deployment.
The following options are the main methods for Drools asset execution:
Execution option | Description | Documentation |
---|---|---|
Execution in KIE Server | If you deployed Drools project assets to KIE Server, you can use the KIE Server REST API or Java client API to execute and interact with the deployed assets. You can also use Business Central or the headless Drools controller outside of Business Central to manage the configurations and KIE containers in the KIE Server instances associated with your deployed assets. | |
Execution in an embedded Java application | If you deployed Drools project assets in your own Java virtual machine (JVM) environment, microservice, or application server, you can use custom APIs or application interactions with core KIE APIs (not KIE Server APIs) to execute assets in the embedded engine. | |
Execution in a local environment for extended testing | As part of your development cycle, you can execute assets locally to ensure that the assets you have created in Drools function as intended. You can use local execution in addition to or instead of executing assets after deployment. | |
Smart Router (KIE Server router)
Depending on your deployment and execution environment, you can use a Smart Router to aggregate multiple independent KIE Server instances as though they are a single server. Smart Router is a single endpoint that can receive calls from client applications to any of your services and route each call automatically to the KIE Server that runs the service. For more information about Smart Router, see KIE Server router.
2.1.8. Example decision management architectures with Drools
The following scenarios illustrate common variations of Drools installation, asset authoring, project storage, project deployment, and asset execution in a decision management architecture. Each section summarizes the methods and tools used and the advantages for the given architecture. The examples are basic and are only a few of the many combinations you might consider, depending on your specific goals and needs with Drools.
- Drools on Wildfly with Business Central and KIE Server
-
-
Installation environment: Drools on Wildfly
-
Project storage and build environment: External Git repository for project versioning synchronized with the Business Central Git repository using Git hooks, and external Maven repository for project management and building configured with KIE Server
-
Asset-authoring tool: Business Central
-
Main asset types: Decision Model and Notation (DMN) models for decisions
-
Project deployment and execution environment: KIE Server
-
Scenario advantages:
-
Stable implementation of Drools in an on-premise development environment
-
Access to the repositories, assets, asset designers, and project build options in Business Central
-
Standardized asset-authoring approach using DMN for optimal integration and stability
-
Access to KIE Server functionality and KIE APIs for asset deployment and execution
-
Figure 2. Drools on Wildfly with Business Central and KIE Server -
- Drools on Wildfly with an IDE and KIE Server
-
-
Installation environment: Drools on Wildfly
-
Project storage and build environment: External Git repository for project versioning (not synchronized with Business Central) and external Maven repository for project management and building configured with KIE Server
-
Asset-authoring tools: Integrated development environment (IDE), such as Eclipse, and a spreadsheet editor or a Decision Model and Notation (DMN) modeling tool for other decision formats
-
Main asset types: Drools Rule Language (DRL) rules, spreadsheet decision tables, and Decision Model and Notation (DMN) models for decisions
-
Project deployment and execution environment: KIE Server
-
Scenario advantages:
-
Flexible implementation of Drools in an on-premise development environment
-
Ability to define business assets using an external IDE and other asset-authoring tools of your choice
-
Access to KIE Server functionality and KIE APIs for asset deployment and execution
-
Figure 3. Drools on Wildfly with an IDE and KIE Server -
- Drools with an IDE and an embedded Java application
-
-
Installation environment: Drools libraries embedded within a custom application
-
Project storage and build environment: External Git repository for project versioning (not synchronized with Business Central) and external Maven repository for project management and building configured with your embedded Java application (not configured with KIE Server)
-
Asset-authoring tools: Integrated development environment (IDE), such as Eclipse, and a spreadsheet editor or a Decision Model and Notation (DMN) modeling tool for other decision formats
-
Main asset types: Drools Rule Language (DRL) rules, spreadsheet decision tables, and Decision Model and Notation (DMN) models for decisions
-
Project deployment and execution environment: Embedded Java application, such as in a Java virtual machine (JVM) environment, microservice, or custom application server
-
Scenario advantages:
-
Custom implementation of Drools in an on-premise development environment with an embedded Java application
-
Ability to define business assets using an external IDE and other asset-authoring tools of your choice
-
Use of custom APIs to interact with core KIE APIs (not KIE Server APIs) and to execute assets in the embedded engine
-
Figure 4. Drools with an IDE and an embedded Java application -
2.2. Build, Deploy, Utilize and Run
2.2.1. Introduction
6.0 introduces a new configuration and convention approach to building KIE bases, instead of using the programmatic builder approach in 5.x. The builder is still available to fall back on, as it’s used for the tooling integration.
Building now uses Maven, and aligns with Maven practices. A KIE project or module is simply a Maven Java project or module, with an additional metadata file META-INF/kmodule.xml. The kmodule.xml file is the descriptor that selects resources into KIE bases and configures those KIE bases and sessions. There is also alternative XML support via Spring and OSGi BluePrints.
While standard Maven can build and package KIE resources, it will not provide validation at build time. There is a Maven plugin that is recommended in order to get build-time validation. The plugin also generates many classes, making the runtime loading faster too.
The example project layout and Maven POM descriptor are illustrated in the screenshot below.
KIE uses defaults to minimise the amount of configuration, with an empty kmodule.xml being the simplest configuration. There must always be a kmodule.xml file, even if empty, as it’s used for discovery of the JAR and its contents.
Maven can either 'mvn install' to deploy a KieModule to the local machine, where all other applications on the local machine can use it, or 'mvn deploy' to push the KieModule to a remote Maven repository. Building the application will pull in the KieModule and populate the local Maven repository in the process.
JARs can be deployed in one of two ways: either added to the classpath, like any other JAR in a Maven dependency listing, or dynamically loaded at runtime. KIE will scan the classpath to find all the JARs with a kmodule.xml in them. Each found JAR is represented by the KieModule interface. The terms classpath KieModule and dynamic KieModule are used to refer to the two loading approaches. While dynamic modules support side-by-side versioning, classpath modules do not. Furthermore, once a module is on the classpath, no other version may be loaded dynamically.
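As a minimal sketch of the two loading approaches (using only the kie-api calls that appear later in this chapter; note that resolving a dynamic KieModule from a Maven repository typically also requires the kie-ci artifact on the classpath):

KieServices ks = KieServices.Factory.get();

// classpath KieModule: discovered by scanning the classpath for kmodule.xml files
KieContainer classpathContainer = ks.getKieClasspathContainer();

// dynamic KieModule: resolved at runtime from the local or remote Maven repository
ReleaseId releaseId = ks.newReleaseId( "org.acme", "myartifact", "1.0" );
KieContainer dynamicContainer = ks.newKieContainer( releaseId );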
Detailed references for the API are included in the next sections; the impatient can jump straight to the examples section, which is fairly self-explanatory for the different use cases.
2.2.2. Building
2.2.2.1. Creating and building a Kie Project
A Kie Project has the structure of a normal Maven project, with the only peculiarity of including a kmodule.xml file declaratively defining the KieBases and KieSessions that can be created from it.
This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Kie artifacts, such as DRL or Excel files, must be stored in the resources folder or in any other subfolder under it.
Since meaningful defaults have been provided for all configuration aspects, the simplest kmodule.xml file can contain just an empty kmodule tag like the following:
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>
In this way the kmodule will contain one single default KieBase.
All Kie assets stored under the resources folder, or any of its subfolders, will be compiled and added to it.
To trigger the building of these artifacts it is enough to create a KieContainer for them. For this simple case it is enough to create a KieContainer that reads the files to be built from the classpath:
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
KieServices is the interface from which it is possible to access all the Kie building and runtime facilities.
In this way all the Java sources and the Kie resources are compiled and deployed into the KieContainer which makes its contents available for use at runtime.
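As a follow-up sketch, assuming a default stateful KieSession has been declared in the kmodule.xml and that the module contains a hypothetical fact class named Applicant, the session could then be used like this:

KieSession kSession = kContainer.newKieSession();
try {
    // Applicant is a hypothetical fact class defined in the module
    kSession.insert( new Applicant( "Mr John Smith", 16 ) );
    int fired = kSession.fireAllRules();
    System.out.println( "Rules fired: " + fired );
} finally {
    kSession.dispose();  // always release the session's resources
}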
2.2.2.2. The kmodule.xml file
As explained in the former section, the kmodule.xml file is the place where it is possible to declaratively configure the KieBase(s) and KieSession(s) that can be created from a KIE project.
In particular a KieBase is a repository of all the application’s knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. Creating the KieBase can be heavy, whereas session creation is very light, so it is recommended that the KieBase be cached where possible to allow for repeated session creation. However, end users usually shouldn’t worry about this, because the caching mechanism is already automatically provided by the KieContainer.
Conversely, the KieSession stores and executes on the runtime data. It is created from the KieBase, or more easily can be created directly from the KieContainer if it has been defined in the kmodule.xml file.
The kmodule.xml file allows you to define and configure one or more KieBases and, for each KieBase, all the different KieSessions that can be created from it, as shown by the following example:
<kmodule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.drools.org/xsd/kmodule">
<configuration>
<property key="drools.evaluator.supersetOf" value="org.mycompany.SupersetOfEvaluatorDefinition"/>
</configuration>
<kbase name="KBase1" default="true" eventProcessingMode="cloud" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg1">
<ksession name="KSession2_1" type="stateful" default="true"/>
<ksession name="KSession2_2" type="stateless" default="false" beliefSystem="jtms"/>
</kbase>
<kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1">
<ksession name="KSession3_1" type="stateful" default="false" clockType="realtime">
<fileLogger file="drools.log" threaded="true" interval="10"/>
<workItemHandlers>
<workItemHandler name="name" type="org.domain.WorkItemHandler"/>
</workItemHandlers>
<calendars>
<calendar name="monday" type="org.domain.Monday"/>
</calendars>
<listeners>
<ruleRuntimeEventListener type="org.domain.RuleRuntimeListener"/>
<agendaEventListener type="org.domain.FirstAgendaListener"/>
<agendaEventListener type="org.domain.SecondAgendaListener"/>
<processEventListener type="org.domain.ProcessListener"/>
</listeners>
</ksession>
</kbase>
</kmodule>
Here the <configuration> tag contains a list of key-value pairs that are the optional properties used to configure the KieBases building process. For instance this sample kmodule.xml file defines an additional custom operator named supersetOf, implemented by the org.mycompany.SupersetOfEvaluatorDefinition class.
After this, 2 KieBases have been defined and it is possible to instantiate 2 different types of KieSessions from the first one, while only one from the second.
A list of the attributes that can be defined on the kbase tag, together with their meaning and default values follows:
Attribute name | Default value | Admitted values | Meaning |
---|---|---|---|
name |
none |
any |
The name with which to retrieve this KieBase from the KieContainer. This is the only mandatory attribute. |
includes |
none |
any comma separated list |
A comma separated list of other KieBases contained in this kmodule. The artifacts of all these KieBases will also be included in this one. |
packages |
all |
any comma separated list |
By default all the Drools artifacts under the resources folder, at any level, are included into the KieBase. This attribute allows you to limit the artifacts that will be compiled in this KieBase to only the ones belonging to the list of packages. |
default |
false |
true, false |
Defines if this KieBase is the default one for this module, so it can be created from the KieContainer without passing any name to it. There can be at most one default KieBase in each module. |
equalsBehavior |
identity |
identity, equality |
Defines the behavior of Drools when a new fact is inserted into the Working Memory. With identity it always creates a new FactHandle unless the same object is already present in the Working Memory, while with equality it does so only if the newly inserted object is not equal (according to its equals method) to an already existing fact. |
eventProcessingMode |
cloud |
cloud, stream |
When compiled in cloud mode the KieBase treats events as normal facts, while in stream mode it allows temporal reasoning on them. |
declarativeAgenda |
disabled |
disabled, enabled |
Defines if the Declarative Agenda is enabled or not. |
Similarly, all attributes of the ksession tag (except of course the name) have meaningful defaults. They are listed and described in the following table:
Attribute name | Default value | Admitted values | Meaning |
---|---|---|---|
name |
none |
any |
Unique name of this KieSession. Used to fetch the KieSession from the KieContainer. This is the only mandatory attribute. |
type |
stateful |
stateful, stateless |
A stateful session allows you to work iteratively with the Working Memory, while a stateless one is a one-off execution of a Working Memory with a provided data set. |
default |
false |
true, false |
Defines if this KieSession is the default one for this module, so it can be created from the KieContainer without passing any name to it. In each module there can be at most one default KieSession for each type. |
clockType |
realtime |
realtime, pseudo |
Defines if events timestamps are determined by the system clock or by a pseudo clock controlled by the application. This clock is especially useful for unit testing temporal rules. |
beliefSystem |
simple |
simple, jtms, defeasible |
Defines the type of belief system used by the KieSession. |
As outlined in the former kmodule.xml sample, it is also possible to declaratively create on each KieSession a file (or a console) logger, one or more WorkItemHandlers and Calendars, plus some listeners that can be of 3 different types: ruleRuntimeEventListener, agendaEventListener and processEventListener.
Having defined a kmodule.xml like the one in the former sample, it is now possible to simply retrieve the KieBases and KieSessions from the KieContainer using their names.
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
KieBase kBase1 = kContainer.getKieBase("KBase1");
KieSession kieSession1 = kContainer.newKieSession("KSession2_1");
StatelessKieSession kieSession2 = kContainer.newStatelessKieSession("KSession2_2");
It has to be noted that since KSession2_1 and KSession2_2 are of 2 different types (the first is stateful, while the second is stateless) it is necessary to invoke 2 different methods on the KieContainer according to their declared type. If the type of the KieSession requested to the KieContainer doesn’t correspond with the one declared in the kmodule.xml file, the KieContainer will throw a RuntimeException.
Also, since a KieBase and a KieSession have been flagged as default, it is possible to get them from the KieContainer without passing any name.
KieContainer kContainer = ...
KieBase kBase1 = kContainer.getKieBase(); // returns KBase1
KieSession kieSession1 = kContainer.newKieSession(); // returns KSession2_1
Since a Kie project is also a Maven project, the groupId, artifactId and version declared in the pom.xml file are used to generate a ReleaseId that uniquely identifies this project inside your application. This allows creation of a new KieContainer from the project by simply passing its ReleaseId to the KieServices.
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0" );
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
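If that KieModule has been deployed to a Maven repository, a KieScanner can be attached to the KieContainer to poll the repository and automatically upgrade the container when a newer version of the artifact is found. This is only a sketch and requires the kie-ci artifact on the classpath:

KieScanner kScanner = kieServices.newKieScanner( kieContainer );

// poll the Maven repository every 10 seconds and update the KieContainer
// when a newer version of the KieModule is available
kScanner.start( 10000L );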
2.2.2.3. Building with Maven
The KIE plugin for Maven ensures that artifact resources are validated and pre-compiled; it is recommended that this plugin be used at all times.
To use the plugin, simply add it to the build section of the Maven pom.xml and activate it by using packaging kjar.
<packaging>kjar</packaging>
...
<build>
<plugins>
<plugin>
<groupId>org.kie</groupId>
<artifactId>kie-maven-plugin</artifactId>
<version>7.26.0.Final</version>
<extensions>true</extensions>
</plugin>
</plugins>
</build>
The plugin comes with support for all the Drools/jBPM knowledge resources. However, in case you are using specific KIE annotations in your Java classes, like for example @kie.api.Position, you will need to add a compile-time dependency on kie-api to your project. We recommend using the provided scope for all the additional KIE dependencies. That way the kjar stays as lightweight as possible, and not dependent on any particular KIE version.
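For example, a kie-api dependency with provided scope might be declared as follows (the version shown is simply the one used elsewhere in this chapter; align it with your KIE release, or omit it if you import the drools-bom):

<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-api</artifactId>
  <version>7.26.0.Final</version>
  <scope>provided</scope>
</dependency>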
Building a KIE module without the Maven plugin will copy all the resources, as is, into the resulting JAR. When that JAR is loaded by the runtime, it will attempt to build all the resources then. If there are compilation issues it will return a null KieContainer. It also pushes the compilation overhead to the runtime. In general this is not recommended, and the Maven plugin should always be used.
2.2.2.4. Defining a KieModule programmatically
It is also possible to define the KieBases and KieSessions belonging to a KieModule programmatically, instead of using the declarative definition in the kmodule.xml file. The same programmatic API also allows explicitly adding the files containing the Kie artifacts instead of automatically reading them from the resources folder of your project.
To do that it is necessary to create a KieFileSystem, a sort of virtual file system, and add all the resources contained in your project to it. Like all other Kie core components, you can obtain an instance of the KieFileSystem from the KieServices.
The kmodule.xml configuration file must be added to the file system. This is a mandatory step. Kie also provides a convenient fluent API, implemented by the KieModuleModel, to programmatically create this file.
To do this in practice it is necessary to create a KieModuleModel from the KieServices, configure it with the desired KieBases and KieSessions, convert it to XML and add the XML to the KieFileSystem. This process is shown by the following example:
This process is shown by the following example:
KieServices kieServices = KieServices.Factory.get();
KieModuleModel kieModuleModel = kieServices.newKieModuleModel();
KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel( "KBase1" )
.setDefault( true )
.setEqualsBehavior( EqualityBehaviorOption.EQUALITY )
.setEventProcessingMode( EventProcessingOption.STREAM );
KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel( "KSession1" )
.setDefault( true )
.setType( KieSessionModel.KieSessionType.STATEFUL )
.setClockType( ClockTypeOption.get("realtime") );
KieFileSystem kfs = kieServices.newKieFileSystem();
kfs.writeKModuleXML(kieModuleModel.toXML());
At this point it is also necessary to add to the KieFileSystem, through its fluent API, all the other Kie artifacts composing your project. These artifacts have to be added in the same position as in a corresponding usual Maven project.
KieFileSystem kfs = ...
kfs.write( "src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL )
.write( "src/main/resources/dtable.xls",
kieServices.getResources().newInputStreamResource( dtableFileStream ) );
This example shows that it is possible to add the Kie artifacts both as plain Strings and as Resources. In the latter case the Resources can be created by the KieResources factory, also provided by the KieServices. The KieResources provides many convenient factory methods to convert an InputStream, a URL, a File, or a String representing a path of your file system to a Resource that can be managed by the KieFileSystem.
Normally the type of a Resource can be inferred from the extension of the name used to add it to the KieFileSystem. However, it is also possible to not follow the Kie conventions about file extensions and to explicitly assign a specific ResourceType to a Resource, as shown below:
KieFileSystem kfs = ...
kfs.write( "src/main/resources/myDrl.txt",
kieServices.getResources().newInputStreamResource( drlStream )
.setResourceType(ResourceType.DRL) );
Add all the resources to the KieFileSystem and build it by passing the KieFileSystem to a KieBuilder.
When the contents of a KieFileSystem are successfully built, the resulting KieModule is automatically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules.
After this it is possible to create, through the KieServices, a new KieContainer for that KieModule using its ReleaseId. However, since in this case the KieFileSystem doesn’t contain any pom.xml file (it is possible to add one using the KieFileSystem.writePomXML method), Kie cannot determine the ReleaseId of the KieModule and assigns a default one to it. This default ReleaseId can be obtained from the KieRepository and used to identify the KieModule inside the KieRepository itself. The following example shows this whole process.
The following example shows this whole process.
KieServices kieServices = KieServices.Factory.get();
KieFileSystem kfs = ...
kieServices.newKieBuilder( kfs ).buildAll();
KieContainer kieContainer = kieServices.newKieContainer(kieServices.getRepository().getDefaultReleaseId());
At this point it is possible to get KieBases and create new KieSessions from this KieContainer exactly in the same way as in the case of a KieContainer created directly from the classpath.
It is a best practice to check the compilation results. The KieBuilder reports compilation results of 3 different severities: ERROR, WARNING and INFO. An ERROR indicates that the compilation of the project failed; in that case no KieModule is produced and nothing is added to the KieRepository. WARNING and INFO results can be ignored, but are available for inspection.
KieBuilder kieBuilder = kieServices.newKieBuilder( kfs ).buildAll();
assertEquals( 0, kieBuilder.getResults().getMessages( Message.Level.ERROR ).size() );
2.2.2.5. Changing the Default Build Result Severity
In some cases, it is possible to change the default severity of a type of build result. For instance, when a new rule with the same name as an existing rule is added to a package, the default behavior is to replace the old rule with the new rule and report it as an INFO. This is probably ideal for most use cases, but in some deployments the user might want to prevent the rule update and report it as an error.
Changing the default severity for a result type, configured like any other option in Drools, can be done by API calls, system properties or configuration files. As of this version, Drools supports configurable result severity for rule updates and function updates. To configure it using system properties or configuration files, the user has to use the following properties:
// sets the severity of rule updates
drools.kbuilder.severity.duplicateRule = <INFO|WARNING|ERROR>
// sets the severity of function updates
drools.kbuilder.severity.duplicateFunction = <INFO|WARNING|ERROR>
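For instance, a minimal sketch (using the property name from the listing above) that raises duplicate-rule reports to build errors before the KieBuilder runs; the assertion is only illustrative:
// Assumed usage: configure the severity via a system property before building
System.setProperty( "drools.kbuilder.severity.duplicateRule", "ERROR" );

KieBuilder kieBuilder = kieServices.newKieBuilder( kfs ).buildAll();
// A rule whose name duplicates an existing one now surfaces as an ERROR
// in the build results instead of an INFO
assertTrue( kieBuilder.getResults().hasMessages( Message.Level.ERROR ) );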
2.2.2.6. Building and running Drools in a fat jar
Many modules of Drools (e.g. drools-core, drools-compiler) have a file named kie.conf containing the names of the classes implementing the services provided by the corresponding module. When running Drools in a fat JAR, for example one created by the Maven Shade Plugin, those various kie.conf files need to be merged, otherwise the fat JAR will contain only one kie.conf from a single dependency, resulting in errors. You can merge resources in the Maven Shade Plugin using transformers, like this:
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/kie.conf</resource>
</transformer>
For instance this is required when running Drools in a Vert.x application. In this case the Maven Shade Plugin can be configured as follows:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.1.0</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<manifestEntries>
<Main-Class>io.vertx.core.Launcher</Main-Class>
<Main-Verticle>${main.verticle}</Main-Verticle>
</manifestEntries>
</transformer>
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/services/io.vertx.core.spi.VerticleFactory</resource>
</transformer>
<transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>META-INF/kie.conf</resource>
</transformer>
</transformers>
<artifactSet>
</artifactSet>
<outputFile>${project.build.directory}/${project.artifactId}-${project.version}-fat.jar</outputFile>
</configuration>
</execution>
</executions>
</plugin>
2.2.3. Deploying
2.2.3.1. KieBase
The KieBase is a repository of all the application’s knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. The KieBase can be obtained from the KieContainer containing the KieModule where the KieBase has been defined.
Sometimes, for instance in an OSGi environment, the KieBase needs to resolve types that are not in the default class loader. In this case it will be necessary to create a KieBaseConfiguration with an additional class loader and pass it to the KieContainer when creating a new KieBase from it.
KieServices kieServices = KieServices.Factory.get();
KieBaseConfiguration kbaseConf = kieServices.newKieBaseConfiguration( null, MyType.class.getClassLoader() );
KieBase kbase = kieContainer.newKieBase( kbaseConf );
2.2.3.2. KieSessions and KieBase Modifications
KieSessions will be discussed in more detail in section "Running". The KieBase creates and returns KieSession objects, and it may optionally keep references to those. When KieBase modifications occur those modifications are applied against the data in the sessions. This reference is a weak reference and it is also optional, which is controlled by a boolean flag.
2.2.3.3. KieScanner
The KieScanner allows continuous monitoring of your Maven repository to check whether a new release of a Kie project has been installed. A new release is deployed in the KieContainer wrapping that project. The use of the KieScanner requires kie-ci.jar to be on the classpath.
A KieScanner can be registered on a KieContainer as in the following example.
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "myartifact", "1.0-SNAPSHOT" );
KieContainer kContainer = kieServices.newKieContainer( releaseId );
KieScanner kScanner = kieServices.newKieScanner( kContainer );
// Start the KieScanner polling the Maven repository every 10 seconds
kScanner.start( 10000L );
In this example the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it. If the KieScanner finds, in the Maven repository, an updated version of the Kie project used by that KieContainer, it automatically downloads the new version and triggers an incremental build of the new project. At this point, existing KieBases and KieSessions under the control of the KieContainer will get automatically upgraded with it - specifically, those KieBases obtained with getKieBase() along with their related KieSessions, and any KieSession obtained directly with KieContainer.newKieSession() thus referencing the default KieBase. Additionally, from this moment on, all the new KieBases and KieSessions created from that KieContainer will use the new project version.
Please notice however that any existing KieBase which was obtained via newKieBase() before the KieScanner upgrade, and any of its related KieSessions, will not get automatically upgraded; this is because KieBases obtained via newKieBase() are not under the direct control of the KieContainer.
The KieScanner will only pick up changes to deployed jars if it is using a SNAPSHOT, version range, the LATEST, or the RELEASE setting. Fixed versions will not automatically update at runtime.
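A minimal sketch of the on-demand and shutdown operations mentioned above:
// Trigger a single scan immediately instead of waiting for the polling interval
kScanner.scanNow();
// Stop the background polling when the scanner is no longer needed
kScanner.stop();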
In case you don’t want to install a Maven repository, it is also possible to have a KieScanner that works by simply fetching updates from a folder of a plain file system. You can create such a KieScanner as simply as:
KieServices kieServices = KieServices.Factory.get();
KieScanner kScanner = kieServices.newKieScanner( kContainer, "/myrepo/kjars" );
where "/myrepo/kjars" will be the folder where the KieScanner
will look for kjar updates. The jar files placed in this folder
have to follow the maven convention and then have to be a name in the form {artifactId}-{versionId}.jar
2.2.3.4. Maven Versions and Dependencies
Maven supports a number of mechanisms to manage versioning and dependencies within applications. Modules can be published with specific version numbers, or they can use the SNAPSHOT suffix. Dependencies can specify version ranges to consume, or take advantage of the SNAPSHOT mechanism.
StackOverflow provides a very good description for this, which is reproduced below.
Since Maven 3.x, the metaversions RELEASE and LATEST are no longer supported, for the sake of reproducible builds.
See the POM Syntax section of the Maven book for more details.
Here’s an example illustrating the various options. In the Maven repository, com.foo:my-foo has the following metadata:
<metadata>
<groupId>com.foo</groupId>
<artifactId>my-foo</artifactId>
<version>2.0.0</version>
<versioning>
<release>1.1.1</release>
<versions>
<version>1.0</version>
<version>1.0.1</version>
<version>1.1</version>
<version>1.1.1</version>
<version>2.0.0</version>
</versions>
<lastUpdated>20090722140000</lastUpdated>
</versioning>
</metadata>
If a dependency on that artifact is required, you have the following options (other version ranges can be specified of course, just showing the relevant ones here): Declare an exact version (will always resolve to 1.0.1):
<version>[1.0.1]</version>
Declare an explicit version (will always resolve to 1.0.1 unless a collision occurs, when Maven will select a matching version):
<version>1.0.1</version>
Declare a version range for all 1.x (will currently resolve to 1.1.1):
<version>[1.0.0,2.0.0)</version>
Declare an open-ended version range (will resolve to 2.0.0):
<version>[1.0.0,)</version>
Declare the version as LATEST (will resolve to 2.0.0):
<version>LATEST</version>
Declare the version as RELEASE (will resolve to 1.1.1):
<version>RELEASE</version>
Note that by default your own deployments will update the "latest" entry in the Maven metadata, but to update the "release" entry, you need to activate the "release-profile" from the Maven super POM. You can do this with either "-Prelease-profile" or "-DperformRelease=true"
2.2.3.5. Settings.xml and Remote Repository Setup
The maven settings.xml is used to configure Maven execution. Detailed instructions can be found at the Maven website:
The settings.xml file can be located in 3 locations; the actual settings used are a merge of those 3 locations.
-
The Maven install:
$M2_HOME/conf/settings.xml
-
A user’s install:
${user.home}/.m2/settings.xml
-
Folder location specified by the system property kie.maven.settings.custom (a usage sketch follows this list)
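A minimal sketch of setting that property programmatically; the path is hypothetical and only illustrates pointing KIE at a custom settings location:
// Assumed usage: set the property before any Maven-based resolution (e.g. via kie-ci) happens
System.setProperty( "kie.maven.settings.custom", "/opt/kie/custom-settings.xml" );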
The settings.xml is used to specify the location of remote repositories. It is important that you activate the profile that specifies the remote repository, typically this can be done using "activeByDefault":
<profiles>
<profile>
<id>profile-1</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
...
</profile>
</profiles>
Maven provides detailed documentation on using multiple remote repositories:
2.2.4. Running
2.2.4.1. KieBase
The KieBase is a repository of all the application’s knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. The KieBase can be obtained from the KieContainer containing the KieModule where the KieBase has been defined.
KieBase kBase = kContainer.getKieBase();
2.2.4.2. KieSession
The KieSession stores and executes on the runtime data. It is created from the KieBase.
KieSession ksession = kbase.newKieSession();
2.2.4.3. KieRuntime
KieRuntime
The KieRuntime provides methods that are applicable to both rules and processes, such as setting globals and registering channels. ("Exit point" is an obsolete synonym for "channel".)
Globals are named objects that are made visible to the Drools engine, but in a way that is fundamentally different from the one for facts: changes in the object backing a global do not trigger reevaluation of rules. Still, globals are useful for providing static information, as an object offering services that are used in the RHS of a rule, or as a means to return objects from the Drools engine. When you use a global on the LHS of a rule, make sure it is immutable, or, at least, don’t expect changes to have any effect on the behavior of your rules.
A global must be declared in a rules file, and then it needs to be backed up with a Java object.
global java.util.List list
With the KIE base now aware of the global identifier and its type, it is now possible to call ksession.setGlobal()
with the global’s name and an object, for any session, to associate the object with the global.
Failure to declare the global type and identifier in DRL code will result in an exception being thrown from this call.
List list = new ArrayList();
ksession.setGlobal("list", list);
Make sure to set any global before it is used in the evaluation of a rule. Failure to do so results in a NullPointerException.
2.2.4.4. Event Model
The event package provides means to be notified of Drools engine events, including rules firing, objects being asserted, etc. This allows separation of logging and auditing activities from the main part of your application (and the rules).
The KieRuntimeEventManager interface is implemented by the KieRuntime, which provides two interfaces, RuleRuntimeEventManager and ProcessEventManager. We will only cover the RuleRuntimeEventManager here.
The RuleRuntimeEventManager allows listeners to be added and removed, so that events for the working memory and the agenda can be listened to.
The following code snippet shows how a simple agenda listener is declared and attached to a session. It will print matches after they have fired.
ksession.addEventListener( new DefaultAgendaEventListener() {
public void afterMatchFired(AfterMatchFiredEvent event) {
super.afterMatchFired( event );
System.out.println( event );
}
});
Drools also provides DebugRuleRuntimeEventListener and DebugAgendaEventListener, which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this:
ksession.addEventListener( new DebugRuleRuntimeEventListener() );
All emitted events implement the KieRuntimeEvent interface, which can be used to retrieve the actual KnowledgeRuntime the event originated from.
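As an illustration, a hypothetical working-memory listener following the same pattern as the agenda listener above; DefaultRuleRuntimeEventListener is the no-op base class provided by the API:
ksession.addEventListener( new DefaultRuleRuntimeEventListener() {
    public void objectInserted( ObjectInsertedEvent event ) {
        // Log every fact insertion together with the runtime it originated from
        System.out.println( event.getKieRuntime() + " inserted: " + event.getObject() );
    }
});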
The events currently supported are:
-
MatchCreatedEvent
-
MatchCancelledEvent
-
BeforeMatchFiredEvent
-
AfterMatchFiredEvent
-
AgendaGroupPushedEvent
-
AgendaGroupPoppedEvent
-
ObjectInsertEvent
-
ObjectDeletedEvent
-
ObjectUpdatedEvent
-
ProcessCompletedEvent
-
ProcessNodeLeftEvent
-
ProcessNodeTriggeredEvent
-
ProcessStartEvent
2.2.4.5. KieRuntimeLogger
The KieRuntimeLogger uses the comprehensive event system in Drools to create an audit log that can be used to log the execution of an application for later inspection, using tools such as the Eclipse audit viewer.
KieRuntimeLogger logger =
KieServices.Factory.get().getLoggers().newFileLogger(ksession, "logdir/mylogfile");
...
logger.close();
2.2.4.6. Commands and the CommandExecutor
KIE has the concept of stateful or stateless sessions. Stateful sessions have already been covered, which use the standard KieRuntime, and can be worked with iteratively over time. Stateless is a one-off execution of a KieRuntime with a provided data set. It may return some results, with the session being disposed at the end, prohibiting further iterative interactions. You can think of stateless as treating an engine like a function call with optional return results.
The foundation for this is the CommandExecutor interface, which both the stateful and stateless interfaces extend. This returns an ExecutionResults:
The CommandExecutor allows commands to be executed on those sessions, the only difference being that the StatelessKieSession executes fireAllRules() at the end before disposing the session. The commands can be created using the CommandExecutor; the Javadocs provide the full list of the allowed commands using the CommandExecutor.
setGlobal and getGlobal are two commands relevant to both Drools and jBPM.
Set Global calls setGlobal underneath.
The optional boolean indicates whether the command should return the global’s value as part of the ExecutionResults.
If true it uses the same name as the global name.
A String can be used instead of the boolean, if an alternative name is desired.
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newSetGlobal( "stilton", new Cheese( "stilton" ), true ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton" );
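A sketch of the alternative-name variant mentioned above; the result name "namedStilton" is hypothetical:
ExecutionResults bresults =
  ksession.execute( CommandFactory.newSetGlobal( "stilton", new Cheese( "stilton" ), "namedStilton" ) );
// The global is now exposed under the alternative result name
Cheese stilton = (Cheese) bresults.getValue( "namedStilton" );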
Allows an existing global to be returned. The second optional String argument allows for an alternative return name.
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newGetGlobal( "stilton" ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton" );
All the above examples execute single commands. The BatchExecution represents a composite command, created from a list of commands. It will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(…) call, which is quite powerful.
The StatelessKieSession will execute fireAllRules() automatically at the end. However the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.
Any command in the batch that has an out identifier set will add its results to the returned ExecutionResults instance. Let’s look at a simple example to see how this works. The example presented includes commands from both Drools and jBPM, for the sake of illustration. They are covered in more detail in the Drools and jBPM specific sections.
StatelessKieSession ksession = kbase.newStatelessKieSession();
List cmds = new ArrayList();
cmds.add( CommandFactory.newInsertObject( new Cheese( "stilton", 1), "stilton") );
cmds.add( CommandFactory.newStartProcess( "process cheeses" ) );
cmds.add( CommandFactory.newQuery( "cheeses" ) );
ExecutionResults bresults = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
Cheese stilton = ( Cheese ) bresults.getValue( "stilton" );
QueryResults qresults = ( QueryResults ) bresults.getValue( "cheeses" );
In the above example multiple commands are executed, two of which populate the ExecutionResults. The query command defaults to using the same identifier as the query name, but it can also be mapped to a different identifier.
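For example, a hypothetical variant of the batch above that exposes the "cheeses" query results under a different output identifier:
// Run the "cheeses" query but store its results under the identifier "cheesesOut"
cmds.add( CommandFactory.newQuery( "cheesesOut", "cheeses" ) );
...
QueryResults qresults = (QueryResults) bresults.getValue( "cheesesOut" );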
All commands support XML and JSON marshalling using XStream, as well as JAXB marshalling. This is covered in Drools commands.
2.2.4.7. StatelessKieSession
The StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on the decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertions and the method call fireAllRules() from Java code; the act of calling execute() is a single-shot method that will internally instantiate a KieSession, add all the user data and execute user commands, call fireAllRules(), and then call dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that’s required. The CommandExecutor and BatchExecution are talked about in detail in their own section.
Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn.
StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute( collection );
If this was done as a single Command it would be as follows:
ksession.execute( CommandFactory.newInsertElements( collection ) );
If you wanted to insert the collection itself, rather than the collection’s individual elements, then CommandFactory.newInsert(collection) would do the job.
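A one-line sketch of that variant:
// Inserts the collection object itself as a single fact (contrast with newInsertElements above)
ksession.execute( CommandFactory.newInsert( collection ) );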
Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults.
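As a hedged sketch, the XStream instance returned by BatchExecutionHelper can be used to produce the XML form of a batch; the variable names are assumptions:
// Assumed usage: serialize a BatchExecution command to XML with the XStream
// marshaller provided by BatchExecutionHelper
XStream xstream = BatchExecutionHelper.newXStreamMarshaller();
String xml = xstream.toXML( CommandFactory.newBatchExecution( cmds ) );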
StatelessKieSession supports globals, scoped in a number of ways.
We cover the non-command way first, as commands are scoped to a specific execution call.
Globals can be resolved in three ways.
-
The StatelessKieSession method getGlobals() returns a Globals instance which provides access to the session’s globals. These are used for all execution calls. Exercise caution regarding mutable globals because execution calls can be executing simultaneously in different threads.
Example 26. Session scoped global
StatelessKieSession ksession = kbase.newStatelessKieSession();
// Set a global hbnSession, that can be used for DB interactions in the rules.
ksession.setGlobal( "hbnSession", hibernateSession );
// Execute while being able to resolve the "hbnSession" identifier.
ksession.execute( collection );
-
Using a delegate is another way of global resolution. Assigning a value to a global (with setGlobal(String, Object)) results in the value being stored in an internal collection mapping identifiers to values. Identifiers in this internal collection will have priority over any supplied delegate. Only if an identifier cannot be found in this internal collection, the delegate global (if any) will be used.
-
The third way of resolving globals is to have execution scoped globals. Here, a Command to set a global is passed to the CommandExecutor.
The CommandExecutor interface also offers the ability to export data via "out" parameters.
Inserted facts, globals and query results can all be returned.
// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );
// Execute the list
ExecutionResults results =
ksession.execute( CommandFactory.newBatchExecution( cmds ) );
// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );
2.2.4.8. Marshalling
The KieMarshallers are used to marshal and unmarshal KieSessions. An instance of the KieMarshallers can be retrieved from the KieServices.
A simple example is shown below:
// ksession is the KieSession
// kbase is the KieBase
ByteArrayOutputStream baos = new ByteArrayOutputStream();
Marshaller marshaller = KieServices.Factory.get().getMarshallers().newMarshaller( kbase );
marshaller.marshall( baos, ksession );
baos.close();
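A minimal sketch of the reverse operation, restoring a session from those bytes; the variable names are assumptions:
// Restore the KieSession from the previously marshalled bytes
ByteArrayInputStream bais = new ByteArrayInputStream( baos.toByteArray() );
KieSession restoredSession = marshaller.unmarshall( bais );
bais.close();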
However, with marshalling, you will need more flexibility when dealing with referenced user data. To achieve this use the ObjectMarshallingStrategy interface. Two implementations are provided, but users can implement their own. The two supplied strategies are IdentityMarshallingStrategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as shown in the example above, and it just calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy creates an integer id for each user object and stores them in a Map, while the id is written to the stream. When unmarshalling it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that if you use the IdentityMarshallingStrategy, it is stateful for the life of the Marshaller instance and will create ids and keep references to all objects that it attempts to marshal.
Below is the code to use an Identity Marshalling Strategy.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategy oms = kMarshallers.newIdentityMarshallingStrategy();
Marshaller marshaller =
kMarshallers.newMarshaller( kbase, new ObjectMarshallingStrategy[]{ oms } );
marshaller.marshall( baos, ksession );
baos.close();
In most cases, a single strategy is insufficient. For added flexibility, the ObjectMarshallingStrategyAcceptor interface can be used. This Marshaller has a chain of strategies, and while reading or writing a user object it iterates the strategies asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor. This allows strings and wild cards to be used to match class names.
The default is "*.*", so in the above example the Identity Marshalling Strategy is used which has a default "*.*" acceptor.
Assuming that we want to serialize all classes except for one given package, where we will use identity lookup, we could do the following:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategyAcceptor identityAcceptor =
kMarshallers.newClassFilterAcceptor( new String[] { "org.domain.pkg1.*" } );
ObjectMarshallingStrategy identityStrategy =
kMarshallers.newIdentityMarshallingStrategy( identityAcceptor );
ObjectMarshallingStrategy sms = kMarshallers.newSerializeMarshallingStrategy();
Marshaller marshaller =
kMarshallers.newMarshaller( kbase,
new ObjectMarshallingStrategy[]{ identityStrategy, sms } );
marshaller.marshall( baos, ksession );
baos.close();
Note that the acceptance checking order is in the natural order of the supplied elements.
2.2.4.9. Persistence and Transactions
Long-term, out-of-the-box persistence with the Java Persistence API (JPA) is possible with Drools. It is necessary to have some implementation of the Java Transaction API (JTA) installed. For development purposes the Bitronix Transaction Manager is suggested, as it’s simple to set up and works embedded, but for production use JBoss Transactions is recommended.
KieServices kieServices = KieServices.Factory.get();
Environment env = kieServices.newEnvironment();
env.set( EnvironmentName.ENTITY_MANAGER_FACTORY,
Persistence.createEntityManagerFactory( "emf-name" ) );
env.set( EnvironmentName.TRANSACTION_MANAGER,
TransactionManagerServices.getTransactionManager() );
// KieSessionConfiguration may be null, and a default will be used
KieSession ksession =
kieServices.getStoreServices().newKieSession( kbase, null, env );
int sessionId = ksession.getId();
UserTransaction ut =
(UserTransaction) new InitialContext().lookup( "java:comp/UserTransaction" );
ut.begin();
ksession.insert( data1 );
ksession.insert( data2 );
ksession.startProcess( "process1" );
ut.commit();
To use JPA, the Environment must be set with both the EntityManagerFactory and the TransactionManager. If a rollback occurs the ksession state is also rolled back, hence it is possible to continue to use it after a rollback.
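A minimal sketch of that behaviour; data3 and data4 are hypothetical facts:
ut.begin();
ksession.insert( data3 );
ut.rollback();            // the session state reverts to the last commit; data3 is discarded

// The same session remains usable in a new transaction
ut.begin();
ksession.insert( data4 );
ut.commit();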
To load a previously persisted KieSession you’ll need the id, as shown below:
KieSession ksession =
kieServices.getStoreServices().loadKieSession( sessionId, kbase, null, env );
To enable persistence several classes must be added to your persistence.xml, as in the example below:
<persistence-unit name="org.drools.persistence.jpa" transaction-type="JTA">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>jdbc/BitronixJTADataSource</jta-data-source>
<class>org.drools.persistence.info.SessionInfo</class>
<class>org.drools.persistence.info.WorkItemInfo</class>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.hbm2ddl.auto" value="update" />
<property name="hibernate.show_sql" value="true" />
<property name="hibernate.transaction.manager_lookup_class"
value="org.hibernate.transaction.BTMTransactionManagerLookup" />
</properties>
</persistence-unit>
The jdbc JTA data source would have to be configured first. Bitronix provides a number of ways of doing this, and its documentation should be consulted for details. For a quick start, here is the programmatic approach:
PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName( "jdbc/BitronixJTADataSource" );
ds.setClassName( "org.h2.jdbcx.JdbcDataSource" );
ds.setMaxPoolSize( 3 );
ds.setAllowLocalTransactions( true );
ds.getDriverProperties().put( "user", "sa" );
ds.getDriverProperties().put( "password", "sasa" );
ds.getDriverProperties().put( "URL", "jdbc:h2:mem:mydb" );
ds.init();
Bitronix also provides a simple embedded JNDI service, ideal for testing. To use it, add a jndi.properties file to your META-INF folder and add the following line to it:
java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory
2.2.5. Installation and Deployment Cheat Sheets
2.2.6. Build, Deploy and Utilize Examples
The best way to learn the new build system is by example. The source project "drools-examples-api" contains a number of examples, and can be found at GitHub:
Each example is described below; the order starts with the simplest (most of the options are defaulted) and works its way up to more complex use cases.
The Deploy use cases shown below all involve mvn install. Remote deployment of JARs in Maven is well covered in Maven literature.
Utilize refers to the initial act of loading the resources and providing access to the KIE runtimes, whereas Run refers to the act of interacting with those runtimes.
2.2.6.1. Default KieSession
-
Project: default-kiesession.
-
Summary: Empty kmodule.xml KieModule on the classpath that includes all resources in a single default KieBase. The example shows the retrieval of the default KieSession from the classpath.
An empty kmodule.xml will produce a single KieBase that includes all files found under resources path, be it DRL, BPMN2, XLS etc. That single KieBase is the default and also includes a single default KieSession. Default means they can be created without knowing their names.
<kmodule xmlns="http://www.drools.org/xsd/kmodule"> </kmodule>
mvn install
ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed onto the environment classpath. kContainer.newKieSession() creates the default KieSession. Notice that you no longer need to look up the KieBase in order to create the KieSession. The KieSession knows which KieBase it’s associated with, and uses that, which in this case is the default KieBase.
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession();
kSession.setGlobal("out", out);
kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();
2.2.6.2. Named KieSession
-
Project: named-kiesession.
-
Summary: kmodule.xml that has one named KieBase and one named KieSession. The example shows the retrieval of the named KieSession from the classpath.
kmodule.xml will produce a single named KieBase, 'kbase1' that includes all files found under resources path, be it DRL, BPMN2, XLS etc. KieSession 'ksession1' is associated with that KieBase and can be created by name.
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
<kbase name="kbase1">
<ksession name="ksession1"/>
</kbase>
</kmodule>
mvn install
ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed onto the environment classpath. This time the KieSession uses the name 'ksession1'. You do not need to look up the KieBase first, as it knows which KieBase 'ksession1' is associated with.
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession("ksession1");
kSession.setGlobal("out", out);
kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();
2.2.6.3. KieBase Inheritance
-
Project: kiebase-inclusion.
-
Summary: 'kmodule.xml' demonstrates that one KieBase can include the resources from another KieBase, from another KieModule. In this case it inherits the named KieBase from the 'named-kiesession' example. The included KieBase can be from the current KieModule or any other KieModule that is in the pom.xml dependency list.
kmodule.xml will produce a single named KieBase, 'kbase2' that includes all files found under resources path, be it DRL, BPMN2, XLS etc. Further it will include all the resources found from the KieBase 'kbase1', due to the use of the 'includes' attribute. KieSession 'ksession2' is associated with that KieBase and can be created by name.
<kbase name="kbase2" includes="kbase1">
<ksession name="ksession2"/>
</kbase>
This example requires that the previous example, 'named-kiesession', is built and installed to the local Maven repository first. Once installed it can be included as a dependency, using the standard Maven <dependencies> element.
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.drools</groupId>
<artifactId>drools-examples-api</artifactId>
<version>6.0.0</version>
</parent>
<artifactId>kiebase-inclusion</artifactId>
<name>Drools API examples - KieBase Inclusion</name>
<dependencies>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-compiler</artifactId>
</dependency>
<dependency>
<groupId>org.drools</groupId>
<artifactId>named-kiesession</artifactId>
<version>6.0.0</version>
</dependency>
</dependencies>
</project>
Once 'named-kiesession' is built and installed this example can be built and installed as normal. Again, the act of installing will force the unit tests to run, demonstrating the use case.
mvn install
ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed onto the environment classpath. This time the KieSession uses the name 'ksession2'. You do not need to look up the KieBase first, as it knows which KieBase 'ksession2' is associated with. Notice two rules fire this time, showing that KieBase 'kbase2' has included the resources from the dependency KieBase 'kbase1'.
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession("ksession2");
kSession.setGlobal("out", out);
kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();
kSession.insert(new Message("Dave", "Open the pod bay doors, HAL."));
kSession.fireAllRules();
2.2.6.4. Multiple KieBases
-
Project: multiple-kbases.
-
Summary: Demonstrates that the 'kmodule.xml' can contain any number of KieBase or KieSession declarations. Introduces the 'packages' attribute to select the folders for the resources to be included in the KieBase.
kmodule.xml produces 6 different named KieBases. 'kbase1' includes all resources from the KieModule. The other KieBases include resources from other selected folders, via the 'packages' attribute. Note the use of wildcard '*', to select this package and all packages below it.
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
<kbase name="kbase1">
<ksession name="ksession1"/>
</kbase>
<kbase name="kbase2" packages="org.some.pkg">
<ksession name="ksession2"/>
</kbase>
<kbase name="kbase3" includes="kbase2" packages="org.some.pkg2">
<ksession name="ksession3"/>
</kbase>
<kbase name="kbase4" packages="org.some.pkg, org.other.pkg">
<ksession name="ksession4"/>
</kbase>
<kbase name="kbase5" packages="org.*">
<ksession name="ksession5"/>
</kbase>
<kbase name="kbase6" packages="org.some.*">
<ksession name="ksession6"/>
</kbase>
</kmodule>
mvn install
Only part of the example is included below, as there is a test method per KieSession, but each one is a repetition of the other, with different list expectations.
@Test
public void testSimpleKieBase() {
List<Integer> list = useKieSession("ksession1");
// no packages imported means import everything
assertEquals(4, list.size());
assertTrue( list.containsAll( asList(0, 1, 2, 3) ) );
}
//.. other tests for ksession2 to ksession6 here
private List<Integer> useKieSession(String name) {
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
KieSession kSession = kContainer.newKieSession(name);
List<Integer> list = new ArrayList<Integer>();
kSession.setGlobal("list", list);
kSession.insert(1);
kSession.fireAllRules();
return list;
}
2.2.6.5. KieContainer from KieRepository
-
Project: kcontainer-from-repository
-
Summary: The project does not contain a kmodule.xml, nor does the pom.xml have any dependencies for other KieModules. Instead the Java code demonstrates the loading of a dynamic KieModule from a Maven repository.
The pom.xml must include kie-ci as a dependency, to ensure Maven is available at runtime. As this uses Maven under the hood you can also use the standard Maven settings.xml file.
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.drools</groupId>
<artifactId>drools-examples-api</artifactId>
<version>6.0.0</version>
</parent>
<artifactId>kiecontainer-from-kierepo</artifactId>
<name>Drools API examples - KieContainer from KieRepo</name>
<dependencies>
<dependency>
<groupId>org.kie</groupId>
<artifactId>kie-ci</artifactId>
</dependency>
</dependencies>
</project>
mvn install
In the previous examples the classpath KieContainer was used. This example creates a dynamic KieContainer as specified by the ReleaseId. The ReleaseId uses Maven conventions for group id, artifact id and version. It also obeys LATEST and SNAPSHOT for versions.
KieServices ks = KieServices.Factory.get();
// Install example1 in the local Maven repo before to do this
KieContainer kContainer = ks.newKieContainer(ks.newReleaseId("org.drools", "named-kiesession", "6.0.0-SNAPSHOT"));
KieSession kSession = kContainer.newKieSession("ksession1");
kSession.setGlobal("out", out);
Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?");
kSession.insert(msg1);
kSession.fireAllRules();
2.2.6.6. Default KieSession from File
-
Project: default-kiesession-from-file
-
Summary: Dynamic KieModules can also be loaded from any Resource location. The loaded KieModule provides default KieBase and KieSession definitions.
No kmodule.xml file exists. The project 'default-kiesession' must be built first, so that the resulting JAR, in the target folder, can be referenced as a File.
mvn install
Any KieModule can be loaded from a Resource location and added to the KieRepository. Once deployed in the KieRepository it can be resolved via its ReleaseId. Note that neither Maven nor kie-ci is needed here. It will not set up a transitive dependency parent classloader.
KieServices ks = KieServices.Factory.get();
KieRepository kr = ks.getRepository();
KieModule kModule = kr.addKieModule(ks.getResources().newFileSystemResource(getFile("default-kiesession")));
KieContainer kContainer = ks.newKieContainer(kModule.getReleaseId());
KieSession kSession = kContainer.newKieSession();
kSession.setGlobal("out", out);
Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?");
kSession.insert(msg1);
kSession.fireAllRules();
2.2.6.7. Named KieSession from File
-
Project: named-kiesession-from-file
-
Summary: Dynamic KieModules can also be loaded from any Resource location. The loaded KieModule provides named KieBase and KieSession definitions.
No kmodule.xml file exists. The project 'named-kiesession' must be built first, so that the resulting JAR, in the target folder, can be referenced as a File.
mvn install
Any KieModule can be loaded from a Resource location and added to the KieRepository. Once in the KieRepository it can be resolved via its ReleaseId. Note that neither Maven nor kie-ci is needed here. It will not set up a transitive dependency parent classloader.
KieServices ks = KieServices.Factory.get();
KieRepository kr = ks.getRepository();
KieModule kModule = kr.addKieModule(ks.getResources().newFileSystemResource(getFile("named-kiesession")));
KieContainer kContainer = ks.newKieContainer(kModule.getReleaseId());
KieSession kSession = kContainer.newKieSession("ksession1");
kSession.setGlobal("out", out);
Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?");
kSession.insert(msg1);
kSession.fireAllRules();
2.2.6.8. KieModule with Dependent KieModule
-
Project: kie-module-from-multiple-files
-
Summary: Programmatically provide the list of dependent KieModules, without using Maven to resolve anything.
No kmodule.xml file exists. The projects 'named-kiesession' and 'kiebase-inclusion' must be built first, so that the resulting JARs, in the target folders, can be referenced as Files.
mvn install
Creates two resources. One is for the main KieModule, 'ex1Res', the other is for the dependency, 'ex2Res'. Even though kie-ci is not present and thus Maven is not available to resolve the dependencies, this shows how you can manually specify the dependent KieModules, via the vararg.
KieServices ks = KieServices.Factory.get();
KieRepository kr = ks.getRepository();
Resource ex1Res = ks.getResources().newFileSystemResource(getFile("kiebase-inclusion"));
Resource ex2Res = ks.getResources().newFileSystemResource(getFile("named-kiesession"));
KieModule kModule = kr.addKieModule(ex1Res, ex2Res);
KieContainer kContainer = ks.newKieContainer(kModule.getReleaseId());
KieSession kSession = kContainer.newKieSession("ksession2");
kSession.setGlobal("out", out);
Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?");
kSession.insert(msg1);
kSession.fireAllRules();
Object msg2 = createMessage(kContainer, "Dave", "Open the pod bay doors, HAL.");
kSession.insert(msg2);
kSession.fireAllRules();
2.2.6.9. Programmatically build a Simple KieModule with Defaults
-
Project: kiemodulemodel-example
-
Summary: Programmatically build a KieModule from just a single file. The POM and models are all defaulted. This is the quickest out of the box approach, but it should not be added to a Maven repository.
mvn install
This programmatically builds a KieModule. It populates the model that represents the ReleaseId and kmodule.xml, and it adds the relevant resources. A pom.xml is generated from the ReleaseId.
KieServices ks = KieServices.Factory.get();
KieRepository kr = ks.getRepository();
KieFileSystem kfs = ks.newKieFileSystem();
kfs.write("src/main/resources/org/kie/example5/HAL5.drl", getRule());
KieBuilder kb = ks.newKieBuilder(kfs);
kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built.
if (kb.getResults().hasMessages(Level.ERROR)) {
throw new RuntimeException("Build Errors:\n" + kb.getResults().toString());
}
KieContainer kContainer = ks.newKieContainer(kr.getDefaultReleaseId());
KieSession kSession = kContainer.newKieSession();
kSession.setGlobal("out", out);
kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();
2.2.6.10. Programmatically build a KieModule using Meta Models
-
Project: kiemodulemodel-example
-
Summary: Programmatically build a KieModule by creating its kmodule.xml meta model resources.
mvn install
This programmatically builds a KieModule. It populates the model that represents the ReleaseId and kmodule.xml, as well as adding the relevant resources. A pom.xml is generated from the ReleaseId.
KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
Resource ex1Res = ks.getResources().newFileSystemResource(getFile("named-kiesession"));
Resource ex2Res = ks.getResources().newFileSystemResource(getFile("kiebase-inclusion"));
ReleaseId rid = ks.newReleaseId("org.drools", "kiemodulemodel-example", "6.0.0-SNAPSHOT");
kfs.generateAndWritePomXML(rid);
KieModuleModel kModuleModel = ks.newKieModuleModel();
kModuleModel.newKieBaseModel("kiemodulemodel")
.addInclude("kiebase1")
.addInclude("kiebase2")
.newKieSessionModel("ksession6");
kfs.writeKModuleXML(kModuleModel.toXML());
kfs.write("src/main/resources/kiemodulemodel/HAL6.drl", getRule());
KieBuilder kb = ks.newKieBuilder(kfs);
kb.setDependencies(ex1Res, ex2Res);
kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built.
if (kb.getResults().hasMessages(Level.ERROR)) {
throw new RuntimeException("Build Errors:\n" + kb.getResults().toString());
}
KieContainer kContainer = ks.newKieContainer(rid);
KieSession kSession = kContainer.newKieSession("ksession6");
kSession.setGlobal("out", out);
Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?");
kSession.insert(msg1);
kSession.fireAllRules();
Object msg2 = createMessage(kContainer, "Dave", "Open the pod bay doors, HAL.");
kSession.insert(msg2);
kSession.fireAllRules();
Object msg3 = createMessage(kContainer, "Dave", "What's the problem?");
kSession.insert(msg3);
kSession.fireAllRules();
2.2.7. Using a KIE scanner to monitor and update KIE containers
The KIE scanner in Drools monitors your Maven repository for new SNAPSHOT versions of your Drools project and then deploys the latest version of the project to a specified KIE container. You can use a KIE scanner in a development environment to maintain your Drools project deployments more efficiently as new versions become available.
For production environments, do not use a KIE scanner or |
-
The kie-ci.jar file is available on the class path of your Drools project.
-
In the relevant .java class in your project, register and start the KIE scanner as shown in the following example code:
Registering and starting a KIE scanner for a KIE container
import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.builder.KieScanner;
...
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices
  .newReleaseId("com.sample", "my-app", "1.0-SNAPSHOT");
KieContainer kContainer = kieServices.newKieContainer(releaseId);
KieScanner kScanner = kieServices.newKieScanner(kContainer);

// Start KIE scanner for polling the Maven repository every 10 seconds (10000 ms)
kScanner.start(10000L);
In this example, the KIE scanner is configured to run with a fixed time interval. The minimum KIE scanner polling interval is 1 millisecond (ms) and the maximum polling interval is the maximum value of the data type long. A polling interval of 0 or less results in a java.lang.IllegalArgumentException: pollingInterval must be positive error. You can also configure the KIE scanner to run on demand by invoking the scanNow() method.
The project group ID, artifact ID, and version (GAV) settings in the example are defined as com.sample:my-app:1.0-SNAPSHOT. The project version must contain the -SNAPSHOT suffix to enable the KIE scanner to retrieve the latest build of the specified artifact version. If you change the snapshot project version number, such as increasing to 1.0.1-SNAPSHOT, then you must also update the version in the GAV definition in your KIE scanner configuration. The KIE scanner does not retrieve updates for projects with static versions, such as com.sample:my-app:1.0.
-
In the settings.xml file of your Maven repository, set the updatePolicy configuration to always to enable the KIE scanner to function properly:
<profile>
  <id>guvnor-m2-repo</id>
  <repositories>
    <repository>
      <id>guvnor-m2-repo</id>
      <name>BA Repository</name>
      <url>http://localhost:8080/business-central/maven2/</url>
      <layout>default</layout>
      <releases>
        <enabled>true</enabled>
        <updatePolicy>always</updatePolicy>
      </releases>
      <snapshots>
        <enabled>true</enabled>
        <updatePolicy>always</updatePolicy>
      </snapshots>
    </repository>
  </repositories>
</profile>
After the KIE scanner starts polling, if the KIE scanner detects an updated version of the SNAPSHOT project in the specified KIE container, the KIE scanner automatically downloads the new project version and triggers an incremental build of the new project. From that moment, all of the new KieBase and KieSession objects that were created from the KIE container use the new project version.
For information about starting or stopping a KIE scanner using KIE Server APIs, see KIE Server and KIE container commands in Drools.
2.3. Security
2.3.1. Security Manager
The KIE engine is a platform for the modelling and execution of business behavior, using a multitude of declarative abstractions and metaphors, like rules, processes, decision tables and so on.
Many times, the authoring of these metaphors is done by third-party groups, be it a different group inside the same company, a group from a partner company, or even anonymous third parties on the internet.
Rules and Processes are designed to execute arbitrary code in order to do their job, but in such cases it might be necessary to constrain what they can do. For instance, it is unlikely a rule should be allowed to create a classloader (which could open the system to an attack) and it certainly should not be allowed to make a call to System.exit().
The Java Platform provides a very comprehensive and well defined security framework that allows users to define policies for what a system can do. The KIE platform leverages that framework and allows application developers to define a specific policy to be applied to any execution of user provided code, be it in rules, processes, work item handlers and so on.
2.3.1.1. How to define a KIE Policy
Rules and processes can run with very restricted permissions, but the engine itself needs to perform many complex operations in order to work. Examples are: it needs to create classloaders, read system properties, access the file system, etc.
Once a security manager is installed, though, it will apply restrictions to all the code executing in the JVM according to the defined policy. For that reason, KIE allows the user to define two different policy files: one for the engine itself and one for the assets deployed into and executed by the engine.
One easy way to set up the environment is to give the engine itself a very permissive policy, while providing a constrained policy for rules and processes.
Policy files follow the standard policy file syntax as described in the Java documentation. For more details, see:
A permissive policy file for the engine can look like the following:
grant {
permission java.security.AllPermission;
}
An example security policy for rules could be:
grant {
permission java.util.PropertyPermission "*", "read";
permission java.lang.RuntimePermission "accessDeclaredMembers";
}
Please note that depending on what the rules and processes are supposed to do, many more permissions might need to be granted, like accessing files in the filesystem, databases, etc.
In order to use these policy files, all that is necessary is to execute the application with these files as parameters to the JVM. Three parameters are required:
Parameter | Meaning
---|---
-Djava.security.manager | Enables the security manager
-Djava.security.policy=<jvm_policy_file> | Defines the global policy file to be applied to the whole application, including the engine
-Dkie.security.policy=<kie_policy_file> | Defines the policy file to be applied to rules and processes
For instance:
java -Djava.security.manager -Djava.security.policy=global.policy -Dkie.security.policy=rules.policy
foo.bar.MyApp
When executing the engine inside a container, use your container’s documentation to find out how to configure the Security Manager and how to define the global security policy.
Define the kie security policy as described above and set the |
Please note that unless a Security Manager is configured, the |
A Security Manager has a high performance impact in the JVM. Applications with strict performance requirements are strongly discouraged from using a Security Manager. An alternative is the use of other security procedures, like the auditing of rules/processes before testing and deployment, to prevent malicious code from being deployed to the environment.
Drools Runtime and Language
Drools is a powerful hybrid reasoning system that intelligently and efficiently processes rule data.
3. Drools engine
The Drools engine is the rules engine in Drools. The Drools engine stores, processes, and evaluates data to execute the business rules or decision models that you define. The basic function of the Drools engine is to match incoming data, or facts, to the conditions of rules and determine whether and how to execute the rules.
The Drools engine operates using the following basic components:
-
Rules: Business rules or DMN decisions that you define. All rules must contain at a minimum the conditions that trigger the rule and the actions that the rule dictates.
-
Facts: Data that enters or changes in the Drools engine that the Drools engine matches to rule conditions to execute applicable rules.
-
Production memory: Location where rules are stored in the Drools engine.
-
Working memory: Location where facts are stored in the Drools engine.
-
Agenda: Location where activated rules are registered and sorted (if applicable) in preparation for execution.
When a business user or an automated system adds or updates rule-related information in Drools, that information is inserted into the working memory of the Drools engine in the form of one or more facts. The Drools engine matches those facts to the conditions of the rules that are stored in the production memory to determine eligible rule executions. (This process of matching facts to rules is often referred to as pattern matching.) When rule conditions are met, the Drools engine activates and registers rules in the agenda, where the Drools engine then sorts prioritized or conflicting rules in preparation for execution.
The following diagram illustrates these basic components of the Drools engine:
These core concepts can help you to better understand other more advanced components, processes, and sub-processes of the Drools engine, and as a result, to design more effective business assets in Drools.
3.1. Phreak rule algorithm in the Drools engine
The Drools engine in Drools uses the Phreak algorithm for rule evaluation. Phreak evolved from the Rete algorithm, including the enhanced Rete algorithm ReteOO that was introduced in previous versions of Drools for object-oriented systems. Overall, Phreak is more scalable than Rete and ReteOO, and is faster in large systems.
While Rete is considered eager (immediate rule evaluation) and data oriented, Phreak is considered lazy (delayed rule evaluation) and goal oriented. The Rete algorithm performs many actions during the insert, update, and delete actions in order to find partial matches for all rules. This eagerness of the Rete algorithm during rule matching requires a lot of time before eventually executing rules, especially in large systems. With Phreak, this partial matching of rules is delayed deliberately to handle large amounts of data more efficiently.
The Phreak algorithm adds the following set of enhancements to previous Rete algorithms:
-
Three layers of contextual memory: Node, segment, and rule memory types
-
Rule-based, segment-based, and node-based linking
-
Lazy (delayed) rule evaluation
-
Stack-based evaluations with pause and resume
-
Isolated rule evaluation
-
Set-oriented propagations
3.1.1. Rule evaluation in Phreak
When the Drools engine starts, all rules are considered to be unlinked from pattern-matching data that can trigger the rules. At this stage, the Phreak algorithm in the Drools engine does not evaluate the rules. The insert, update, and delete actions are queued, and Phreak uses a heuristic, based on the rule most likely to result in execution, to calculate and select the next rule for evaluation. When all the required input values are populated for a rule, the rule is considered to be linked to the relevant pattern-matching data. Phreak then creates a goal that represents this rule and places the goal into a priority queue that is ordered by rule salience. Only the rule for which the goal was created is evaluated, and other potential rule evaluations are delayed. While individual rules are evaluated, node sharing is still achieved through the process of segmentation.
Unlike the tuple-oriented Rete, the Phreak propagation is collection oriented. For the rule that is being evaluated, the Drools engine accesses the first node and processes all queued insert, update, and delete actions. The results are added to a set, and the set is propagated to the child node. In the child node, all queued insert, update, and delete actions are processed, adding the results to the same set. The set is then propagated to the next child node and the same process repeats until it reaches the terminal node. This cycle creates a batch process effect that can provide performance advantages for certain rule constructs.
The linking and unlinking of rules happens through a layered bit-mask system, based on network segmentation. When the rule network is built, segments are created for rule network nodes that are shared by the same set of rules. A rule is composed of a path of segments. In case a rule does not share any node with any other rule, it becomes a single segment.
A bit-mask offset is assigned to each node in the segment. Another bit mask is assigned to each segment in the path of the rule according to these requirements:
-
If at least one input for a node exists, the node bit is set to the on state.
-
If each node in a segment has the bit set to the on state, the segment bit is also set to the on state.
-
If any node bit is set to the off state, the segment is also set to the off state.
-
If each segment in the path of the rule is set to the on state, the rule is considered linked, and a goal is created to schedule the rule for evaluation.
The same bit-mask technique is used to track modified nodes, segments, and rules. This tracking ability enables an already linked rule to be unscheduled from evaluation if it has been modified since the evaluation goal for it was created. As a result, no rules can ever evaluate partial matches.
This process of rule evaluation is possible in Phreak because, as opposed to a single unit of memory in Rete, Phreak has three layers of contextual memory with node, segment, and rule memory types. This layering enables much more contextual understanding during the evaluation of a rule.
The following examples illustrate how rules are organized and evaluated in this three-layered memory system in Phreak.
Example 1: A single rule (R1) with three patterns: A, B and C. The rule forms a single segment, with bits 1, 2, and 4 for the nodes. The single segment has a bit offset of 1.
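For illustration, rule R1 might look like the following in DRL, assuming three hypothetical fact types A, B, and C (these declarations are not part of the original example; only the three-pattern structure matters here):
declare A end
declare B end
declare C end

rule "R1"
when
    A()
    B()
    C()
then
    // Consequence omitted; only the pattern structure is relevant here.
end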
Example 2: Rule R2 is added and shares pattern A.
Pattern A is placed in its own segment, resulting in two segments for each rule. Those two segments form a path for their respective rules. The first segment is shared by both paths. When pattern A is linked, the segment becomes linked. The segment then iterates over each path that the segment is shared by, setting bit 1 to on. If patterns B and C are later turned on, the second segment for path R1 is linked, and this causes bit 2 to be turned on for R1. With bit 1 and bit 2 turned on for R1, the rule is now linked and a goal is created to schedule the rule for later evaluation and execution.
When a rule is evaluated, the segments enable the results of the matching to be shared. Each segment has a staging memory to queue all inserts, updates, and deletes for that segment. When R1 is evaluated, the rule processes pattern A, and this results in a set of tuples. The algorithm detects a segmentation split, creates peered tuples for each insert, update, and delete in the set, and adds them to the R2 staging memory. Those tuples are then merged with any existing staged tuples and are executed when R2 is eventually evaluated.
Example 3: Rules R3 and R4 are added and share patterns A and B.
Rules R3 and R4 have three segments and R1 has two segments. Patterns A and B are shared by R1, R3, and R4, while pattern D is shared by R3 and R4.
Example 4: A single rule (R1) with a subnetwork and no pattern sharing.
Subnetworks are formed when a Not
, Exists
, or Accumulate
node contains more than one element. In this example, the element B not( C )
forms the subnetwork. The element not( C )
is a single element that does not require a subnetwork and is therefore merged inside of the Not
node. The subnetwork uses a dedicated segment. Rule R1 still has a path of two segments and the subnetwork forms another inner path. When the subnetwork is linked, it is also linked in the outer segment.
Example 5: Rule R1 with a subnetwork that is shared by rule R2.
The subnetwork nodes in a rule can be shared by another rule that does not have a subnetwork. This sharing causes the subnetwork segment to be split into two segments.
Constrained Not
nodes and Accumulate
nodes can never unlink a segment, and are always considered to have their bits turned on.
The Phreak evaluation algorithm is stack based instead of method-recursion based. Rule evaluation can be paused and resumed at any time when a StackEntry
is used to represent the node currently being evaluated.
When a rule evaluation reaches a subnetwork, a StackEntry
object is created for the outer path segment and the subnetwork segment. The subnetwork segment is evaluated first, and when the set reaches the end of the subnetwork path, the segment is merged into a staging list for the outer node that the segment feeds into. The previous StackEntry
object is then resumed and can now process the results of the subnetwork. This process has the added benefit, especially for Accumulate
nodes, that all work is completed in a batch, before propagating to the child node.
The same stack system is used for efficient backward chaining. When a rule evaluation reaches a query node, the evaluation is paused and the query is added to the stack. The query is then evaluated to produce a result set, which is saved in a memory location for the resumed StackEntry
object to pick up and propagate to the child node. If the query itself called other queries, the process repeats, while the current query is paused and a new evaluation is set up for the current query node.
3.1.1.1. Rule evaluation with forward and backward chaining
The Drools engine in Drools is a hybrid reasoning system that uses both forward chaining and backward chaining to evaluate rules. A forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the Drools engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda.
In contrast, a backward-chaining rule system is a goal-driven system that starts with a conclusion that the Drools engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied.
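For illustration, backward chaining in DRL is typically written as a recursive query that the Drools engine evaluates on demand. The following is a minimal sketch, assuming a hypothetical Location fact type that is not part of this document's examples:
declare Location
    thing : String
    location : String
end

// X is contained in Y either directly, or transitively through
// some intermediate location Z that is itself contained in Y.
query isContainedIn( String x, String y )
    Location( x, y; )
    or
    ( Location( z, y; ) and isContainedIn( x, z; ) )
end

rule "Is the key in the office?"
when
    isContainedIn( "key", "office"; )
then
    System.out.println( "The key is in the office" );
end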
The following diagram illustrates how the Drools engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow:
3.1.2. Rule base configuration
Drools contains a RuleBaseConfiguration.java
object that you can use to configure exception handler settings, multithreaded execution, and sequential mode in the Drools engine.
For the rule base configuration options, see the Drools RuleBaseConfiguration.java page on GitHub.
The following rule base configuration options are available for the Drools engine:
- drools.consequenceExceptionHandler
-
When configured, this system property defines the class that manages the exceptions thrown by rule consequences. You can use this property to specify a custom exception handler for rule evaluation in the Drools engine.
Default value:
org.drools.core.runtime.rule.impl.DefaultConsequenceExceptionHandler
You can specify the custom exception handler using one of the following options:
-
Specify the exception handler in a system property:
drools.consequenceExceptionHandler=org.drools.core.runtime.rule.impl.MyCustomConsequenceExceptionHandler
-
Specify the exception handler while creating the KIE base programmatically:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration();
kieBaseConf.setOption(ConsequenceExceptionHandlerOption.get(MyCustomConsequenceExceptionHandler.class));
KieBase kieBase = kieContainer.newKieBase(kieBaseConf);
-
- drools.multithreadEvaluation
-
When enabled, this system property enables the Drools engine to evaluate rules in parallel by dividing the Phreak rule network into independent partitions. You can use this property to increase the speed of rule evaluation for specific rule bases.
Default value:
false
You can enable multithreaded evaluation using one of the following options:
-
Enable the multithreaded evaluation system property:
drools.multithreadEvaluation=true
-
Enable multithreaded evaluation while creating the KIE base programmatically:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration();
kieBaseConf.setOption(MultithreadEvaluationOption.YES);
KieBase kieBase = kieContainer.newKieBase(kieBaseConf);
Rules that use queries, salience, or agenda groups are currently not supported by the parallel Drools engine. If these rule elements are present in the KIE base, the compiler emits a warning and automatically switches back to single-threaded evaluation. However, in some cases, the Drools engine might not detect the unsupported rule elements and rules might be evaluated incorrectly. For example, the Drools engine might not detect when rules rely on implicit salience given by rule ordering inside the DRL file, resulting in incorrect evaluation due to the unsupported salience attribute.
-
- drools.sequential
-
When enabled, this system property enables sequential mode in the Drools engine. In sequential mode, the Drools engine evaluates rules one time in the order that they are listed in the Drools engine agenda, without regard to changes in the working memory. This means that the Drools engine ignores any insert, modify, or update statements in rules and executes rules in a single sequence. As a result, rule execution may be faster in sequential mode, but important updates may not be applied to your rules. You can use this property if you use stateless KIE sessions and you do not want the execution of rules to influence subsequent rules in the agenda. Sequential mode applies to stateless KIE sessions only.
Default value:
false
You can enable sequential mode using one of the following options:
-
Enable the sequential mode system property:
drools.sequential=true
-
Enable sequential mode while creating the KIE base programmatically:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration();
kieBaseConf.setOption(SequentialOption.YES);
KieBase kieBase = kieContainer.newKieBase(kieBaseConf);
-
3.1.3. Sequential mode in Phreak
Sequential mode is an advanced rule base configuration in the Drools engine, supported by Phreak, that enables the Drools engine to evaluate rules one time in the order that they are listed in the Drools engine agenda, without regard to changes in the working memory. In sequential mode, the Drools engine ignores any insert, modify, or update statements in rules and executes rules in a single sequence. As a result, rule execution may be faster in sequential mode, but important updates may not be applied to your rules.
Sequential mode applies to only stateless KIE sessions because stateful KIE sessions inherently use data from previously invoked KIE sessions. If you use a stateless KIE session and you want the execution of rules to influence subsequent rules in the agenda, then do not enable sequential mode. Sequential mode is disabled by default in the Drools engine.
To enable sequential mode, use one of the following options:
-
Set the system property drools.sequential to true.
-
Enable sequential mode while creating the KIE base programmatically:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration();
kieBaseConf.setOption(SequentialOption.YES);
KieBase kieBase = kieContainer.newKieBase(kieBaseConf);
To configure sequential mode to use a dynamic agenda, use one of the following options:
-
Set the system property drools.sequential.agenda to dynamic.
-
Set the sequential agenda option while creating the KIE base programmatically:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration();
kieBaseConf.setOption(SequentialAgendaOption.DYNAMIC);
KieBase kieBase = kieContainer.newKieBase(kieBaseConf);
When you enable sequential mode, the Drools engine evaluates rules in the following way:
-
Rules are ordered by salience and position in the rule set.
-
An element for each possible rule match is created. The element position indicates the execution order.
-
Node memory is disabled, with the exception of the right-input object memory.
-
The left-input adapter node propagation is disconnected and the object with the node is referenced in a Command object. The Command object is added to a list in the working memory for later execution.
-
All objects are asserted, and then the list of Command objects is checked and executed.
-
All matches that result from executing the list are added to elements based on the sequence number of the rule.
-
The elements that contain matches are executed in a sequence. If you set a maximum number of rule executions, the Drools engine activates no more than that number of rules in the agenda for execution.
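For illustration, the execution cap mentioned in the last item is exposed on the fireAllRules call. A minimal sketch using a stateful KieSession; the kieContainer and someFact names, and the limit of 10, are placeholders:
import org.kie.api.runtime.KieSession;

KieSession kieSession = kieContainer.newKieSession();
kieSession.insert(someFact);

// Fire at most 10 rule activations in this call.
int fired = kieSession.fireAllRules(10);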
In sequential mode, the LeftInputAdapterNode
node creates a Command
object and adds it to a list in the working memory of the Drools engine. This Command
object contains references to the LeftInputAdapterNode
node and the propagated object. These references stop any left-input propagations at insertion time so that the right-input propagation never needs to attempt to join the left inputs. The references also avoid the need for the left-input memory.
All nodes have their memory turned off, including the left-input tuple memory, but excluding the right-input object memory. After all the assertions are finished and the right-input memory of all the objects is populated, the Drools engine iterates over the list of LeftInputAdapterNode Command objects. The objects propagate down the network, attempting to join the right-input objects, but they are not retained in the left input.
The agenda with a priority queue to schedule the tuples is replaced by an element for each rule. The sequence number of the RuleTerminalNode
node indicates the element where to place the match. After all Command
objects have finished, the elements are checked and existing matches are executed. To improve performance, the first and the last populated cell in the elements are retained.
When the network is constructed, each RuleTerminalNode
node receives a sequence number based on its salience number and the order in which it was added to the network.
The right-input node memories are typically hash maps for fast object deletion. Because object deletions are not supported, Phreak uses an object list when the values of the object are not indexed. For a large number of objects, indexed hash maps provide a performance increase. If an object has only a few instances, Phreak uses an object list instead of an index.
3.2. KIE sessions
In Drools, a KIE session stores and executes runtime data. The KIE session is created from a KIE base or directly from a KIE container if you have defined the KIE session in the KIE module descriptor file (kmodule.xml
) for your project.
Example KIE session configuration in a kmodule.xml file:
<kmodule>
...
<kbase>
...
<ksession name="KSession2_1" type="stateless" default="true" clockType="realtime">
...
</kbase>
...
</kmodule>
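For illustration, a KIE session that is named in kmodule.xml, as in the example above, can typically be created directly from the KIE container by that name. A minimal sketch, assuming the classpath KIE container and the KSession2_1 stateless session declared above:
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.StatelessKieSession;

KieServices kieServices = KieServices.Factory.get();
KieContainer kieContainer = kieServices.getKieClasspathContainer();

// Create the stateless session declared as "KSession2_1" in kmodule.xml.
StatelessKieSession ksession = kieContainer.newStatelessKieSession("KSession2_1");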
A KIE base is a repository that you define in the KIE module descriptor file (kmodule.xml) for your project and contains all rules and other business assets in Drools, but does not contain any runtime data.
Example KIE base configuration in a kmodule.xml file:
<kmodule>
...
<kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1">
...
</kbase>
...
</kmodule>
A KIE session can be stateless or stateful. In a stateless KIE session, data from a previous invocation of the KIE session (the previous session state) is discarded between session invocations. In a stateful KIE session, that data is retained. The type of KIE session you use depends on your project requirements and how you want data from different asset invocations to be persisted.
3.2.1. Stateless KIE sessions
A stateless KIE session is a session that does not use inference to make iterative changes to facts over time. In a stateless KIE session, data from a previous invocation of the KIE session (the previous session state) is discarded between session invocations, whereas in a stateful KIE session, that data is retained. A stateless KIE session behaves similarly to a function in that the results that it produces are determined by the contents of the KIE base and by the data that is passed into the KIE session for execution at a specific point in time. The KIE session has no memory of any data that was passed into the KIE session previously.
Stateless KIE sessions are commonly used for the following use cases:
-
Validation, such as validating that a person is eligible for a mortgage
-
Calculation, such as computing a mortgage premium
-
Routing and filtering, such as sorting incoming emails into folders or sending incoming emails to a destination
For example, consider the following driver’s license data model and sample DRL rule:
public class Applicant {
private String name;
private int age;
private boolean valid;
// Getter and setter methods
}
package com.company.license
rule "Is of valid age"
when
$a : Applicant(age < 18)
then
$a.setValid(false);
end
The Is of valid age
rule disqualifies any applicant younger than 18 years old. When the Applicant
object is inserted into the Drools engine, the Drools engine evaluates the constraints for each rule and searches for a match. The "objectType"
constraint is always implied, after which any number of explicit field constraints are evaluated. The variable $a
is a binding variable that references the matched object in the rule consequence.
The dollar sign ($) is optional and helps to differentiate variable names from field names.
In this example, the sample rule and all other files in the ~/resources
folder of the Drools project are built with the following code:
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
This code compiles all the rule files found on the class path and adds the result of this compilation, a KieModule
object, in the KieContainer
.
Finally, the StatelessKieSession
object is instantiated from the KieContainer
and is executed against specified data:
StatelessKieSession ksession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant("Mr John Smith", 16);
assertTrue(applicant.isValid());
ksession.execute(applicant);
assertFalse(applicant.isValid());
In a stateless KIE session configuration, the execute()
call acts as a combination method that instantiates the KieSession
object, adds all the user data and executes user commands, calls fireAllRules()
, and then calls dispose()
. Therefore, with a stateless KIE session, you do not need to call fireAllRules()
or call dispose()
after session invocation as you do with a stateful KIE session.
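For comparison, a rough sketch of the stateful equivalent that execute() wraps for you (the applicant variable is reused from the example above):
KieSession statefulSession = kContainer.newKieSession();
try {
    statefulSession.insert(applicant);
    statefulSession.fireAllRules();
} finally {
    // Required for stateful sessions to avoid memory leaks.
    statefulSession.dispose();
}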
In this case, the specified applicant is under the age of 18, so the application is declined.
For a more complex use case, see the following example. This example uses a stateless KIE session and executes rules against an iterable list of objects, such as a collection.
public class Applicant {
private String name;
private int age;
// Getter and setter methods
}
public class Application {
private Date dateApplied;
private boolean valid;
// Getter and setter methods
}
package com.company.license
rule "Is of valid age"
when
Applicant(age < 18)
$a : Application()
then
$a.setValid(false);
end
rule "Application was made this year"
when
$a : Application(dateApplied > "01-jan-2009")
then
$a.setValid(false);
end
StatelessKieSession ksession = kbase.newStatelessKieSession();
Applicant applicant = new Applicant("Mr John Smith", 16);
Application application = new Application();
assertTrue(application.isValid());
ksession.execute(Arrays.asList(new Object[] { application, applicant })); (1)
assertFalse(application.isValid());
ksession.execute
(CommandFactory.newInsertIterable(new Object[] { application, applicant })); (2)
List<Command> cmds = new ArrayList<Command>(); (3)
cmds.add(CommandFactory.newInsert(new Person("Mr John Smith"), "mrSmith"));
cmds.add(CommandFactory.newInsert(new Person("Mr John Doe"), "mrDoe"));
BatchExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
assertEquals(new Person("Mr John Smith"), results.getValue("mrSmith"));
(1) Method for executing rules against an iterable collection of objects produced by the Arrays.asList() method. Every collection element is inserted before any matched rules are executed. The execute(Object object) and execute(Iterable objects) methods are wrappers around the execute(Command command) method that comes from the BatchExecutor interface.
(2) Execution of the iterable collection of objects using the CommandFactory interface.
(3) BatchExecutor and CommandFactory configurations for working with many different commands or result output identifiers. The CommandFactory interface supports other commands that you can use in the BatchExecutor, such as StartProcess, Query, and SetGlobal.
3.2.1.1. Global variables in stateless KIE sessions
The StatelessKieSession
object supports global variables (globals) that you can configure to be resolved as session-scoped globals, delegate globals, or execution-scoped globals.
-
Session-scoped globals: For session-scoped globals, you can use the getGlobals() method to return a Globals instance that provides access to the KIE session globals. These globals are used for all execution calls. Use caution with mutable globals because execution calls can be executing simultaneously in different threads.

Session-scoped global:

import org.kie.api.runtime.StatelessKieSession;

StatelessKieSession ksession = kbase.newStatelessKieSession();

// Set a global `myGlobal` that can be used in the rules.
ksession.setGlobal("myGlobal", "I am a global");

// Execute while resolving the `myGlobal` identifier.
ksession.execute(collection);
-
Delegate globals: For delegate globals, you can assign a value to a global (with setGlobal(String, Object)) so that the value is stored in an internal collection that maps identifiers to values. Identifiers in this internal collection have priority over any supplied delegate. If an identifier cannot be found in this internal collection, the delegate global (if any) is used.
-
Execution-scoped globals: For execution-scoped globals, you can use the Command object to set a global that is passed to the CommandExecutor interface for execution-specific global resolution.
The CommandExecutor
interface also enables you to export data using out identifiers for globals, inserted facts, and query results:
import org.kie.api.runtime.ExecutionResults;
// Set up a list of commands.
List cmds = new ArrayList();
cmds.add(CommandFactory.newSetGlobal("list1", new ArrayList(), true));
cmds.add(CommandFactory.newInsert(new Person("jon", 102), "person"));
cmds.add(CommandFactory.newQuery("Get People", "getPeople"));
// Execute the list.
ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
// Retrieve the `ArrayList`.
results.getValue("list1");
// Retrieve the inserted `Person` fact.
results.getValue("person");
// Retrieve the query as a `QueryResults` instance.
results.getValue("Get People");
3.2.2. Stateful KIE sessions
A stateful KIE session is a session that uses inference to make iterative changes to facts over time. In a stateful KIE session, data from a previous invocation of the KIE session (the previous session state) is retained between session invocations, whereas in a stateless KIE session, that data is discarded.
Ensure that you call the dispose() method after running a stateful KIE session so that no memory leaks occur between session invocations.
Stateful KIE sessions are commonly used for the following use cases:
-
Monitoring, such as monitoring a stock market and automating the buying process
-
Diagnostics, such as running fault-finding processes or medical diagnostic processes
-
Logistics, such as parcel tracking and delivery provisioning
-
Ensuring compliance, such as verifying the legality of market trades
For example, consider the following fire alarm data model and sample DRL rules:
public class Room {
private String name;
// Getter and setter methods
}
public class Sprinkler {
private Room room;
private boolean on;
// Getter and setter methods
}
public class Fire {
private Room room;
// Getter and setter methods
}
public class Alarm { }
rule "When there is a fire turn on the sprinkler"
when
Fire($room : room)
$sprinkler : Sprinkler(room == $room, on == false)
then
modify($sprinkler) { setOn(true) };
System.out.println("Turn on the sprinkler for room "+$room.getName());
end
rule "Raise the alarm when we have one or more fires"
when
exists Fire()
then
insert( new Alarm() );
System.out.println( "Raise the alarm" );
end
rule "Cancel the alarm when all the fires have gone"
when
not Fire()
$alarm : Alarm()
then
delete( $alarm );
System.out.println( "Cancel the alarm" );
end
rule "Status output when things are ok"
when
not Alarm()
not Sprinkler( on == true )
then
System.out.println( "Everything is ok" );
end
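The sample output later in this section also shows the sprinklers being turned off when the fires are gone. A rule along the following lines, consistent with the rules above, would produce that behavior (a sketch; it is not shown in the snippet above):
rule "When the fire is gone turn off the sprinkler"
when
    $room : Room()
    $sprinkler : Sprinkler( room == $room, on == true )
    not Fire( room == $room )
then
    modify($sprinkler) { setOn(false) };
    System.out.println("Turn off the sprinkler for room "+$room.getName());
end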
For the When there is a fire turn on the sprinkler
rule, when a fire occurs, the instances of the Fire
class are created for that room and inserted into the KIE session. The rule adds a constraint for the specific room
matched in the Fire
instance so that only the sprinkler for that room is checked. When this rule is executed, the sprinkler activates. The other sample rules determine when the alarm is activated or deactivated accordingly.
Whereas a stateless KIE session relies on standard Java syntax to modify a field, a stateful KIE session relies on the modify
statement in rules to notify the Drools engine of changes. The Drools engine then reasons over the changes and assesses their impact on subsequent rule executions. This process is part of the Drools engine's ability to use inference and truth maintenance and is essential in stateful KIE sessions.
In this example, the sample rules and all other files in the ~/resources
folder of the Drools project are built with the following code:
KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
This code compiles all the rule files found on the class path and adds the result of this compilation, a KieModule
object, in the KieContainer
.
Finally, the KieSession
object is instantiated from the KieContainer
and is executed against specified data:
KieSession ksession = kContainer.newKieSession();
String[] names = new String[]{"kitchen", "bedroom", "office", "livingroom"};
Map<String,Room> name2room = new HashMap<String,Room>();
for( String name: names ){
Room room = new Room( name );
name2room.put( name, room );
ksession.insert( room );
Sprinkler sprinkler = new Sprinkler( room );
ksession.insert( sprinkler );
}
ksession.fireAllRules();
> Everything is ok
With the data added, the Drools engine completes all pattern matching but no rules have been executed, so the configured verification message appears. As new data triggers rule conditions, the Drools engine executes rules to activate the alarm and later to cancel the alarm that has been activated:
Fire kitchenFire = new Fire( name2room.get( "kitchen" ) );
Fire officeFire = new Fire( name2room.get( "office" ) );
FactHandle kitchenFireHandle = ksession.insert( kitchenFire );
FactHandle officeFireHandle = ksession.insert( officeFire );
ksession.fireAllRules();
> Raise the alarm
> Turn on the sprinkler for room kitchen
> Turn on the sprinkler for room office
ksession.delete( kitchenFireHandle );
ksession.delete( officeFireHandle );
ksession.fireAllRules();
> Cancel the alarm
> Turn off the sprinkler for room office
> Turn off the sprinkler for room kitchen
> Everything is ok
In this case, a reference is kept for the returned FactHandle
object. A fact handle is an internal engine reference to the inserted instance and enables instances to be retracted or modified later.
As this example illustrates, the data and results from previous stateful KIE sessions (the activated alarm) affect the invocation of subsequent sessions (alarm cancellation).
3.3. Inference and truth maintenance in the Drools engine
The basic function of the Drools engine is to match data to business rules and determine whether and how to execute rules. To ensure that relevant data is applied to the appropriate rules, the Drools engine makes inferences based on existing knowledge and performs the actions based on the inferred information.
For example, the following DRL rule determines the age requirements for adults, such as in a bus pass policy:
rule "Infer Adult"
when
$p : Person(age >= 18)
then
insert(new IsAdult($p))
end
Based on this rule, the Drools engine infers whether a person is an adult or a child and performs the specified action (the then
consequence). Every person who is 18 years old or older has an instance of IsAdult
inserted for them in the working memory. This inferred relation of age and bus pass can then be invoked in any rule, such as in the following rule segment:
$p : Person()
IsAdult(person == $p)
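For these rules to compile, the model needs an IsAdult fact type that references the matched person. A minimal sketch (the field and accessor names are assumptions):
public class IsAdult {

    private Person person;

    public IsAdult(Person person) {
        this.person = person;
    }

    public Person getPerson() {
        return person;
    }
}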
In many cases, new data in a rule system is the result of other rule executions, and this new data can affect the execution of other rules. If the Drools engine asserts data as a result of executing a rule, the Drools engine uses truth maintenance to justify the assertion and enforce truthfulness when applying inferred information to other rules. Truth maintenance also helps to identify inconsistencies and to handle contradictions. For example, if two rules are executed and result in a contradictory action, the Drools engine chooses the action based on assumptions from previously calculated conclusions.
The Drools engine inserts facts using either stated or logical insertions:
-
Stated insertions: Defined with
insert()
. After stated insertions, facts are generally retracted explicitly. (The term insertion, when used generically, refers to stated insertion.) -
Logical insertions: Defined with
insertLogical()
. After logical insertions, the facts that were inserted are automatically retracted when the conditions that inserted the facts are no longer true. The facts are retracted when no condition supports the logical insertion. A fact that is logically inserted is considered to be justified by the Drools engine.
For example, the following sample DRL rules use stated fact insertion to determine the age requirements for issuing a child bus pass or an adult bus pass:
rule "Issue Child Bus Pass"
when
$p : Person(age < 18)
then
insert(new ChildBusPass($p));
end
rule "Issue Adult Bus Pass"
when
$p : Person(age >= 18)
then
insert(new AdultBusPass($p));
end
These rules are not easily maintained in the Drools engine as bus riders increase in age and move from child to adult bus pass. As an alternative, these rules can be separated into rules for bus rider age and rules for bus pass type using logical fact insertion. The logical insertion of the fact makes the fact dependent on the truth of the when
clause.
The following DRL rules use logical insertion to determine the age requirements for children and adults:
rule "Infer Child"
when
$p : Person(age < 18)
then
insertLogical(new IsChild($p))
end
rule "Infer Adult"
when
$p : Person(age >= 18)
then
insertLogical(new IsAdult($p))
end
For logical insertions, your fact objects must override the equals and hashCode methods from the java.lang.Object object according to the Java standard. Two objects are equal if their equals methods return true for each other and if their hashCode methods return the same values. For more information, see the Java API documentation for your Java version.
When the condition in the rule is false, the fact is automatically retracted. This behavior is helpful in this example because the two rules are mutually exclusive. In this example, if the person is younger than 18 years old, the rule logically inserts an IsChild
fact. After the person is 18 years old or older, the IsChild
fact is automatically retracted and the IsAdult
fact is inserted.
The following DRL rules then determine whether to issue a child bus pass or an adult bus pass and logically insert the ChildBusPass
and AdultBusPass
facts. This rule configuration is possible because the truth maintenance system in the Drools engine supports chaining of logical insertions for a cascading set of retracts.
rule "Issue Child Bus Pass"
when
$p : Person()
IsChild(person == $p)
then
insertLogical(new ChildBusPass($p));
end
rule "Issue Adult Bus Pass"
when
$p : Person()
IsAdult(person == $p)
then
insertLogical(new AdultBusPass($p));
end
When a person turns 18 years old, the IsChild
fact and the person’s ChildBusPass
fact are retracted. To this set of conditions, you can relate another rule that states that a person must return the child pass after turning 18 years old. When the Drools engine automatically retracts the ChildBusPass
object, the following rule is executed to send a request to the person:
rule "Return ChildBusPass Request"
when
$p : Person()
not(ChildBusPass(person == $p))
then
requestChildBusPass($p);
end
The following flowcharts illustrate the life cycle of stated and logical insertions:
The Drools engine uses a HashMap
and a counter to track how many times an object equality is stated (how many different instances are equal). When the Drools engine logically inserts an object during a rule execution, the Drools engine justifies the object by executing the rule. For each logical insertion, only one equal object can exist, and each subsequent equal logical insertion increases the justification counter for that logical insertion. A justification is removed when the conditions of the rule become untrue, and the counter is decreased accordingly. When no more justifications exist, the logical object is automatically retracted.
3.3.1. Government ID example
Now that we know what inference is and have seen a basic example, how does it facilitate good rule design and maintenance?
Consider a government ID department that is responsible for issuing ID cards when children become adults. They might have a decision table that includes logic like the following, which says that when a person living in London is 18 or over, issue the card:
RuleTable ID Card
|  | CONDITION | CONDITION | ACTION |
|  | p : Person |  |  |
|  | location | age >= $1 | issueIdCard($1) |
|  | Select Person | Select Adults | Issue ID Card |
| Issue ID Card to Adults | London | 18 | p |
However, the ID department does not set the policy on who counts as an adult; that is decided at central government level. If the central government were to change that age to 21, this would initiate a change management process. Someone would have to liaise with the ID department and make sure their systems are updated in time for the law going live.
This change management process and communication between departments is not ideal for an agile environment, and change becomes costly and error prone. The ID department is also managing more information than it needs to: its "monolithic" approach to rules management "leaks" information that is better placed elsewhere. The department does not care which explicit condition (such as "age >= 18") determines whether someone is an adult, only that they are an adult.
In contrast to this, let’s pursue an approach where we split (de-couple) the authoring responsibilities, so that both the central government and the ID department maintain their own rules.
It’s the central government’s job to determine who is an adult. If they change the law they just update their central repository with the new rules, which others use:
RuleTable Age Policy
|  | CONDITION | ACTION |
|  | p : Person |  |
|  | age >= $1 | insert($1) |
|  | Adult Age Policy | Add Adult Relation |
| Infer Adult | 18 | new IsAdult( p ) |
The IsAdult fact, as discussed previously, is inferred from the policy rules. It encapsulates the seemingly arbitrary piece of logic "age >= 18" and provides semantic abstractions for its meaning. Now if anyone uses the above rules, they no longer need to be aware of explicit information that determines whether someone is an adult or not. They can just use the inferred fact:
RuleTable ID Card
|  | CONDITION | CONDITION | ACTION |
|  | p : Person | isAdult |  |
|  | location | person == $1 | issueIdCard($1) |
|  | Select Person | Select Adults | Issue ID Card |
| Issue ID Card to Adults | London | p | p |
While the example is minimal and trivial, it illustrates some important points. We started with a monolithic and leaky approach to our knowledge engineering. We created a single decision table that had all possible information in it and that leaked information from central government that the ID department did not care about and did not want to manage.
We first de-coupled the knowledge process so each department was responsible for only what it needed to know. We then encapsulated this leaky knowledge using an inferred fact IsAdult. The use of the term IsAdult also gave a semantic abstraction to the previously arbitrary logic "age >= 18".
So a general rule of thumb when doing your knowledge engineering is:
-
Bad
-
Monolithic
-
Leaky
-
-
Good
-
De-couple knowledge responsibilities
-
Encapsulate knowledge
-
Provide semantic abstractions for those encapsulations
-
3.4. Execution control in the Drools engine
When new rule data enters the working memory of the Drools engine, rules may become fully matched and eligible for execution. A single working memory action can result in multiple eligible rule executions. When a rule is fully matched, the Drools engine creates an activation instance, referencing the rule and the matched facts, and adds the activation onto the Drools engine agenda. The agenda controls the execution order of these rule activations using a conflict resolution strategy.
After the first call of fireAllRules()
in the Java application, the Drools engine cycles repeatedly through two phases:
-
Agenda evaluation. In this phase, the Drools engine selects all rules that can be executed. If no executable rules exist, the execution cycle ends. If an executable rule is found, the Drools engine registers the activation in the agenda and then moves on to the working memory actions phase to perform rule consequence actions.
-
Working memory actions. In this phase, the Drools engine performs the rule consequence actions (the
then
portion of each rule) for all activated rules previously registered in the agenda. After all the consequence actions are complete or the main Java application process callsfireAllRules()
again, the Drools engine returns to the agenda evaluation phase to reassess rules.
When multiple rules exist on the agenda, the execution of one rule may cause another rule to be removed from the agenda. To avoid this, you can define how and when rules are executed in the Drools engine. Some common methods for defining rule execution order are by using rule salience, agenda groups, and activation groups for rules.
3.4.1. Salience for rules
Each rule has an integer salience
attribute that determines the order of execution. Rules with a higher salience value are given higher priority when ordered in the activation queue. The default salience value for rules is zero, but the salience can be negative or positive.
For example, the following sample DRL rules are listed in the Drools engine stack in the order shown:
rule "RuleA"
salience 95
when
$fact : MyFact( field1 == true )
then
System.out.println("Rule2 : " + $fact);
update($fact);
end
rule "RuleB"
salience 100
when
$fact : MyFact( field1 == false )
then
System.out.println("Rule1 : " + $fact);
$fact.setField1(true);
update($fact);
end
The RuleB
rule is listed second, but it has a higher salience value than the RuleA
rule and is therefore executed first.
3.4.2. Agenda groups for rules
An agenda group is a set of rules bound together by the same agenda-group
rule attribute. Agenda groups partition rules on the Drools engine agenda. At any one time, only one group has a focus that gives that group of rules priority for execution before rules in other agenda groups. You determine the focus with a setFocus()
call for the agenda group. You can also define rules with an auto-focus
attribute so that the next time the rule is activated, the focus is automatically given to the entire agenda group to which the rule is assigned.
Each time the setFocus()
call is made in a Java application, the Drools engine adds the specified agenda group to the top of the rule stack. The default agenda group "MAIN"
contains all rules that do not belong to a specified agenda group and is executed first in the stack unless another group has the focus.
For example, the following sample DRL rules belong to specified agenda groups and are listed in the Drools engine stack in the order shown:
rule "Increase balance for credits"
agenda-group "calculation"
when
ap : AccountPeriod()
acc : Account( $accountNo : accountNo )
CashFlow( type == CREDIT,
accountNo == $accountNo,
date >= ap.start && <= ap.end,
$amount : amount )
then
acc.balance += $amount;
end
rule "Print balance for AccountPeriod"
agenda-group "report"
when
ap : AccountPeriod()
acc : Account()
then
System.out.println( acc.accountNo +
" : " + acc.balance );
end
For this example, the rules in the "calculation" agenda group must always be executed first and the rules in the "report" agenda group must always be executed second. Any remaining rules in other agenda groups can then be executed. Because each setFocus() call pushes the specified group onto the top of the focus stack, the "report" and "calculation" groups receive the focus in that order, so that "calculation" ends up on top of the stack and its rules are executed first:
Agenda agenda = ksession.getAgenda();
agenda.getAgendaGroup( "report" ).setFocus();
agenda.getAgendaGroup( "calculation" ).setFocus();
ksession.fireAllRules();
You can also use the clear()
method to cancel all the activations generated by the rules belonging to a given agenda group before each has had a chance to be executed:
ksession.getAgenda().getAgendaGroup( "Group A" ).clear();
3.4.3. Activation groups for rules
An activation group is a set of rules bound together by the same activation-group
rule attribute. In this group, only one rule can be executed. After conditions are met for a rule in that group to be executed, all other pending rule executions from that activation group are removed from the agenda.
For example, the following sample DRL rules belong to the specified activation group and are listed in the Drools engine stack in the order shown:
rule "Print balance for AccountPeriod1"
activation-group "report"
when
ap : AccountPeriod1()
acc : Account()
then
System.out.println( acc.accountNo +
" : " + acc.balance );
end
rule "Print balance for AccountPeriod2"
activation-group "report"
when
ap : AccountPeriod2()
acc : Account()
then
System.out.println( acc.accountNo +
" : " + acc.balance );
end
For this example, if the first rule in the "report"
activation group is executed, the second rule in the group and all other executable rules on the agenda are removed from the agenda.
3.5. Configuring logging in the Drools engine
The Drools engine uses SLF4J (Simple Logging Facade for Java) for system logging. You can use one of the following logging utilities with the Drools engine to investigate Drools engine activity, such as for troubleshooting or data gathering:
-
Logback
-
Apache Commons Logging
-
Apache Log4j
-
java.util.logging
package
For the logging utility that you want to use, add the relevant dependency to your Maven project or save the relevant XML configuration file in the org.drools
package of your Drools distribution:
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>${logback.version}</version>
</dependency>
<configuration>
<logger name="org.drools" level="debug"/>
...
</configuration>
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<category name="org.drools">
<priority value="debug" />
</category>
...
</log4j:configuration>
If you are developing for an ultra light environment, use the slf4j-nop or slf4j-simple logger.
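For example, a minimal Maven dependency for the simple SLF4J logger might look like the following (the version property name is an assumption):
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>${slf4j.version}</version>
</dependency>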
3.6. Complex event processing (CEP)
In Drools, an event is a record of a significant change of state in the application domain at a point in time. Depending on how the domain is modeled, the change of state may be represented by a single event, multiple atomic events, or hierarchies of correlated events. From a complex event processing (CEP) perspective, an event is a type of fact or object that occurs at a specific point in time, and a business rule is a definition of how to react to the data from that fact or object. For example, in a stock broker application, a change in security prices, a change in ownership from seller to buyer, or a change in an account holder’s balance are all considered to be events because a change has occurred in the state of the application domain at a given time.
The Drools engine in Drools uses complex event processing (CEP) to detect and process multiple events within a collection of events, to uncover relationships that exist between events, and to infer new data from the events and their relationships.
CEP use cases share several requirements and goals with business rule use cases.
From a business perspective, business rule definitions are often defined based on the occurrence of scenarios triggered by events. In the following examples, events form the basis of business rules:
-
In an algorithmic trading application, a rule performs an action if the security price increases by X percent above the day opening price. The price increases are denoted by events on a stock trading application.
-
In a monitoring application, a rule performs an action if the temperature in the server room increases X degrees in Y minutes. The sensor readings are denoted by events.
From a technical perspective, business rule evaluation and CEP have the following key similarities:
-
Both business rule evaluation and CEP require seamless integration with the enterprise infrastructure and applications. This is particularly important with life-cycle management, auditing, and security.
-
Both business rule evaluation and CEP have functional requirements such as pattern matching, and non-functional requirements such as response time limits and query-rule explanations.
CEP scenarios have the following key characteristics:
-
Scenarios usually process large numbers of events, but only a small percentage of the events are relevant.
-
Events are usually immutable and represent a record of change in state.
-
Rules and queries run against events and must react to detected event patterns.
-
Related events usually have a strong temporal relationship.
-
Individual events are not prioritized. The CEP system prioritizes patterns of related events and the relationships between them.
-
Events usually need to be composed and aggregated.
Given these common CEP scenario characteristics, the CEP system in Drools supports the following features and functions to optimize event processing:
-
Event processing with proper semantics
-
Event detection, correlation, aggregation, and composition
-
Event stream processing
-
Temporal constraints to model the temporal relationships between events
-
Sliding windows of significant events
-
Session-scoped unified clock
-
Required volumes of events for CEP use cases
-
Reactive rules
-
Adapters for event input into the Drools engine (pipeline)
3.6.1. Events in complex event processing
In Drools, an event is a record of a significant change of state in the application domain at a point in time. Depending on how the domain is modeled, the change of state may be represented by a single event, multiple atomic events, or hierarchies of correlated events. From a complex event processing (CEP) perspective, an event is a type of fact or object that occurs at a specific point in time, and a business rule is a definition of how to react to the data from that fact or object. For example, in a stock broker application, a change in security prices, a change in ownership from seller to buyer, or a change in an account holder’s balance are all considered to be events because a change has occurred in the state of the application domain at a given time.
Events have the following key characteristics:
-
Are immutable: An event is a record of change that has occurred at some time in the past and cannot be changed.
The Drools engine does not enforce immutability on the Java objects that represent events. This behavior makes event data enrichment possible. Your application should be able to populate unpopulated event attributes, and these attributes are used by the Drools engine to enrich the event with inferred data. However, you should not change event attributes that have already been populated.
-
Have strong temporal constraints: Rules involving events usually require the correlation of multiple events that occur at different points in time relative to each other.
-
Have managed life cycles: Because events are immutable and have temporal constraints, they are usually only relevant for a specified period of time. This means that the Drools engine can automatically manage the life cycle of events.
-
Can use sliding windows: You can define sliding windows of time or length with events. A sliding time window is a specified period of time during which events can be processed. A sliding length window is a specified number of events that can be processed.
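For illustration, a sliding window constrains a pattern to recent events when the Drools engine runs in stream mode. A minimal DRL sketch, assuming a StockPoint event type like the one declared in the next section (the symbol, the 2-minute window, and the average calculation are arbitrary):
rule "Average ACME price over the last 2 minutes"
when
    $avg : Number() from accumulate(
        StockPoint( symbol == "ACME", $p : price ) over window:time( 2m ),
        average( $p ) )
then
    System.out.println( "Average ACME price over the last 2 minutes: " + $avg );
end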
3.6.2. Declaring facts as events
You can declare facts as events in your Java class or DRL rule file so that the Drools engine handles the facts as events during complex event processing. You can declare the facts as interval-based events or point-in-time events. Interval-based events have a duration time and persist in the working memory of the Drools engine until their duration time has lapsed. Point-in-time events have no duration and are essentially interval-based events with a duration of zero.
For the relevant fact type in your Java class or DRL rule file, enter the @role( event )
metadata tag and parameter. The @role
metadata tag accepts the following two values:
-
fact
: (Default) Declares the type as a regular fact -
event
: Declares the type as an event
For example, the following snippet declares that the StockPoint
fact type in a stock broker application must be handled as an event:
import some.package.StockPoint
declare StockPoint
@role( event )
end
If StockPoint
is a fact type declared directly in the DRL rule file instead of in a pre-existing class, you can declare the event in line in the DRL file:
declare StockPoint
@role( event )
datetime : java.util.Date
symbol : String
price : double
end
3.6.3. Event processing modes in the Drools engine
The Drools engine runs in either cloud mode or stream mode. In cloud mode, the Drools engine processes facts as facts with no temporal constraints, independent of time, and in no particular order. In stream mode, the Drools engine processes facts as events with strong temporal constraints, in real time or near real time. Stream mode uses synchronization to make event processing possible in Drools.
- Cloud mode
-
Cloud mode is the default operating mode of the Drools engine. In cloud mode, the Drools engine treats events as an unordered cloud. Events still have time stamps, but the Drools engine running in cloud mode cannot draw relevance from the time stamp because cloud mode ignores the present time. This mode uses the rule constraints to find the matching tuples to activate and execute rules.
Cloud mode does not impose any kind of additional requirements on facts. However, because the Drools engine in this mode has no concept of time, it cannot use temporal features such as sliding windows or automatic life-cycle management. In cloud mode, events must be explicitly retracted when they are no longer needed.
The following requirements are not imposed in cloud mode:
-
No clock synchronization because the Drools engine has no notion of time
-
No ordering of events, because the Drools engine processes events as an unordered cloud against which the Drools engine matches rules
You can specify cloud mode either by setting the system property in the relevant configuration files or by using the Java client API:
Set cloud mode using a system property:
drools.eventProcessingMode=cloud

Set cloud mode using the Java client API:
import org.kie.api.conf.EventProcessingOption;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices.Factory;

KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration();
config.setOption(EventProcessingOption.CLOUD);
-
- Stream mode
-
Stream mode enables the Drools engine to process events chronologically and in real time as they are inserted into the Drools engine. In stream mode, the Drools engine synchronizes streams of events (so that events in different streams can be processed in chronological order), implements sliding windows of time or length, and enables automatic life-cycle management.
The following requirements apply to stream mode:
-
Events in each stream must be ordered chronologically.
-
A session clock must be present to synchronize event streams.
Your application does not need to enforce the ordering of events between streams, but using event streams that have not been synchronized may cause unexpected results. You can specify stream mode either by setting the system property in the relevant configuration files or by using the Java client API:
Set stream mode using a system property:
drools.eventProcessingMode=stream

Set stream mode using the Java client API:
import org.kie.api.conf.EventProcessingOption;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices.Factory;

KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration();
config.setOption(EventProcessingOption.STREAM);
-
3.6.3.1. Negative patterns in Drools engine stream mode
A negative pattern is a pattern for conditions that are not met. For example, the following DRL rule activates a fire alarm if a fire is detected and the sprinkler is not activated:
rule "Sound the alarm"
when
$f : FireDetected()
not(SprinklerActivated())
then
// Sound the alarm.
end
In cloud mode, the Drools engine assumes all facts (regular facts and events) are known in advance and evaluates negative patterns immediately. In stream mode, the Drools engine can support temporal constraints on facts to wait for a set time before activating a rule.
The same example rule in stream mode activates the fire alarm as usual, but the Drools engine waits up to 10 seconds for a SprinklerActivated event before executing the rule:
rule "Sound the alarm"
when
$f : FireDetected()
not(SprinklerActivated(this after[0s,10s] $f))
then
// Sound the alarm.
end
The following modified fire alarm rule expects one Heartbeat
event to occur every 10 seconds. If the expected event does not occur, the rule is executed. This rule uses the same type of object in both the first pattern and in the negative pattern. The negative pattern has the temporal constraint to wait 0 to 10 seconds before executing and excludes the Heartbeat
event bound to $h
so that the rule can be executed. The bound event $h
must be explicitly excluded in order for the rule to be executed because the temporal constraint [0s, …]
does not inherently exclude that event from being matched again.
rule "Sound the alarm"
when
$h: Heartbeat() from entry-point "MonitoringStream"
not(Heartbeat(this != $h, this after[0s,10s] $h) from entry-point "MonitoringStream")
then
// Sound the alarm.
end
3.6.4. Metadata tags for events
The Drools engine uses the following metadata tags for events that are inserted into the working memory of the Drools engine. You can change the default metadata tag values in your Java class or DRL rule file as needed.
The examples in this section assume that the application domain model includes a VoiceCall fact class with attributes for the call date/time and the call duration.
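A minimal sketch of such a class is shown below; the callDateTime and callDuration field names are taken from the metadata examples that follow, while the remaining fields are assumptions:
public class VoiceCall {

    private String originNumber;      // assumed
    private String destinationNumber; // assumed
    private Date   callDateTime;      // referenced by @timestamp( callDateTime )
    private long   callDuration;      // in milliseconds, referenced by @duration( callDuration )

    // Constructors, getter, and setter methods
}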
- @role
-
This tag determines whether a given fact type is handled as a regular fact or an event.
Default parameter:
fact
Supported parameters: fact, event
@role( <fact_or_event> )

Example: Declare VoiceCall as an event type
declare VoiceCall
  @role( event )
end
- @timestamp
-
This tag is automatically assigned to every event in the Drools engine. By default, the time is provided by the session clock and assigned to the event when it is inserted into the working memory of the Drools engine. You can specify a custom time stamp attribute instead of the default time stamp added by the session clock.
Default parameter: The time added by the Drools engine session clock
Supported parameters: Session clock time or custom time stamp attribute
@timestamp( <attributeName> )
Example: Declare VoiceCall timestamp attribute
declare VoiceCall
  @role( event )
  @timestamp( callDateTime )
end
- @duration
-
This tag determines the duration time for events in the Drools engine. Events can be interval-based events or point-in-time events. Interval-based events have a duration time and persist in the working memory of the Drools engine until their duration time has lapsed. Point-in-time events have no duration and are essentially interval-based events with a duration of zero. By default, every event in the Drools engine has a duration of zero. You can specify a custom duration attribute instead of the default.
Default parameter: null (zero)
Supported parameters: Custom duration attribute
@duration( <attributeName> )
Example: Declare VoiceCall duration attribute
declare VoiceCall
  @role( event )
  @timestamp( callDateTime )
  @duration( callDuration )
end
- @expires
-
This tag determines the time duration before an event expires in the working memory of the Drools engine. By default, an event expires when the event can no longer match and activate any of the current rules. You can define an amount of time after which an event should expire. This tag is available only when the Drools engine is running in stream mode.
Default parameter: null (event expires after event can no longer match and activate rules)
Supported parameters: Custom timeOffset attribute in the format [#d][#h][#m][#s][[ms]]
@expires( <timeOffset> )
Example: Declare expiration offset for VoiceCall events
declare VoiceCall
  @role( event )
  @timestamp( callDateTime )
  @duration( callDuration )
  @expires( 1h35m )
end
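When the fact type is a plain Java class rather than a DRL declaration, the same metadata can be expressed with the annotations in the org.kie.api.definition.type package. A hedged sketch mirroring the DRL declarations above:
import java.util.Date;

import org.kie.api.definition.type.Duration;
import org.kie.api.definition.type.Expires;
import org.kie.api.definition.type.Role;
import org.kie.api.definition.type.Timestamp;

// Equivalent event declaration expressed directly on the Java fact class.
@Role(Role.Type.EVENT)
@Timestamp("callDateTime")
@Duration("callDuration")
@Expires("1h35m")
public class VoiceCall {

    private Date callDateTime;
    private long callDuration;

    public Date getCallDateTime() { return callDateTime; }
    public long getCallDuration() { return callDuration; }
    // Remaining members omitted for brevity.
}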
3.6.5. Temporal operators for events
In stream mode, the Drools engine supports the following temporal operators for events that are inserted into the working memory of the Drools engine. You can use these operators to define the temporal reasoning behavior of the events that you declare in your Java class or DRL rule file. Temporal operators are not supported when the Drools engine is running in cloud mode.
-
after
-
before
-
coincides
-
during
-
includes
-
finishes
-
finished by
-
meets
-
met by
-
overlaps
-
overlapped by
-
starts
-
started by
- after
-
This operator specifies if the current event occurs after the correlated event. This operator can also define an amount of time after which the current event can follow the correlated event, or a delimiting time range during which the current event can follow the correlated event.
For example, the following pattern matches if $eventA starts between 3 minutes and 30 seconds and 4 minutes after $eventB finishes. If $eventA starts earlier than 3 minutes and 30 seconds after $eventB finishes, or later than 4 minutes after $eventB finishes, then the pattern is not matched.
$eventA : EventA(this after[3m30s, 4m] $eventB)
You can also express this operator in the following way:
3m30s <= $eventA.startTimestamp - $eventB.endTimeStamp <= 4m
The after operator supports up to two parameter values:
-
If two values are defined, the interval starts on the first value (3 minutes and 30 seconds in the example) and ends on the second value (4 minutes in the example).
-
If only one value is defined, the interval starts on the provided value and runs indefinitely with no end time.
-
If no value is defined, the interval starts at 1 millisecond and runs indefinitely with no end time.
The after operator also supports negative time ranges:
$eventA : EventA(this after[-3m30s, -2m] $eventB)
If the first value is greater than the second value, the Drools engine automatically reverses them. For example, the following two patterns are interpreted by the Drools engine in the same way:
$eventA : EventA(this after[-3m30s, -2m] $eventB)
$eventA : EventA(this after[-2m, -3m30s] $eventB)
-
- before
-
This operator specifies if the current event occurs before the correlated event. This operator can also define an amount of time before which the current event can precede the correlated event, or a delimiting time range during which the current event can precede the correlated event.
For example, the following pattern matches if $eventA finishes between 3 minutes and 30 seconds and 4 minutes before $eventB starts. If $eventA finishes earlier than 3 minutes and 30 seconds before $eventB starts, or later than 4 minutes before $eventB starts, then the pattern is not matched.
$eventA : EventA(this before[3m30s, 4m] $eventB)
You can also express this operator in the following way:
3m30s <= $eventB.startTimestamp - $eventA.endTimeStamp <= 4m
The before operator supports up to two parameter values:
-
If two values are defined, the interval starts on the first value (3 minutes and 30 seconds in the example) and ends on the second value (4 minutes in the example).
-
If only one value is defined, the interval starts on the provided value and runs indefinitely with no end time.
-
If no value is defined, the interval starts at 1 millisecond and runs indefinitely with no end time.
The before operator also supports negative time ranges:
$eventA : EventA(this before[-3m30s, -2m] $eventB)
If the first value is greater than the second value, the Drools engine automatically reverses them. For example, the following two patterns are interpreted by the Drools engine in the same way:
$eventA : EventA(this before[-3m30s, -2m] $eventB)
$eventA : EventA(this before[-2m, -3m30s] $eventB)
-
- coincides
-
This operator specifies if the two events occur at the same time, with the same start and end times.
For example, the following pattern matches if both the start and end time stamps of $eventA and $eventB are identical:
$eventA : EventA(this coincides $eventB)
The coincides operator supports up to two parameter values for the distance between the event start and end times, if they are not identical:
-
If only one parameter is given, the parameter is used to set the threshold for both the start and end times of both events.
-
If two parameters are given, the first is used as a threshold for the start time and the second is used as a threshold for the end time.
The following pattern uses start and end time thresholds:
$eventA : EventA(this coincides[15s, 10s] $eventB)
The pattern matches if the following conditions are met:
abs($eventA.startTimestamp - $eventB.startTimestamp) <= 15s && abs($eventA.endTimestamp - $eventB.endTimestamp) <= 10s
The Drools engine does not support negative intervals for the coincides operator. If you use negative intervals, the Drools engine generates an error.
-
- during
-
This operator specifies if the current event occurs within the time frame of when the correlated event starts and ends. The current event must start after the correlated event starts and must end before the correlated event ends. (With the coincides operator, the start and end times are the same or nearly the same.)
For example, the following pattern matches if $eventA starts after $eventB starts and ends before $eventB ends:
$eventA : EventA(this during $eventB)
You can also express this operator in the following way:
$eventB.startTimestamp < $eventA.startTimestamp <= $eventA.endTimestamp < $eventB.endTimestamp
The during operator supports one, two, or four optional parameters:
-
If one value is defined, this value is the maximum distance between the start times of the two events and the maximum distance between the end times of the two events.
-
If two values are defined, these values are a threshold between which the current event start time and end time must occur in relation to the correlated event start and end times.
For example, if the values are 5s and 10s, the current event must start between 5 and 10 seconds after the correlated event starts and must end between 5 and 10 seconds before the correlated event ends.
-
If four values are defined, the first and second values are the minimum and maximum distances between the start times of the events, and the third and fourth values are the minimum and maximum distances between the end times of the two events.
-
- includes
-
This operator specifies if the correlated event occurs within the time frame of when the current event occurs. The correlated event must start after the current event starts and must end before the current event ends. (The behavior of this operator is the reverse of the during operator behavior.)
For example, the following pattern matches if $eventB starts after $eventA starts and ends before $eventA ends:
$eventA : EventA(this includes $eventB)
You can also express this operator in the following way:
$eventA.startTimestamp < $eventB.startTimestamp <= $eventB.endTimestamp < $eventA.endTimestamp
The includes operator supports one, two, or four optional parameters:
-
If one value is defined, this value is the maximum distance between the start times of the two events and the maximum distance between the end times of the two events.
-
If two values are defined, these values are a threshold between which the correlated event start time and end time must occur in relation to the current event start and end times.
For example, if the values are 5s and 10s, the correlated event must start between 5 and 10 seconds after the current event starts and must end between 5 and 10 seconds before the current event ends.
-
If four values are defined, the first and second values are the minimum and maximum distances between the start times of the events, and the third and fourth values are the minimum and maximum distances between the end times of the two events.
-
- finishes
-
This operator specifies if the current event starts after the correlated event but both events end at the same time.
For example, the following pattern matches if $eventA starts after $eventB starts and ends at the same time when $eventB ends:
$eventA : EventA(this finishes $eventB)
You can also express this operator in the following way:
$eventB.startTimestamp < $eventA.startTimestamp && $eventA.endTimestamp == $eventB.endTimestamp
The finishes operator supports one optional parameter that sets the maximum time allowed between the end times of the two events:
$eventA : EventA(this finishes[5s] $eventB)
This pattern matches if these conditions are met:
$eventB.startTimestamp < $eventA.startTimestamp && abs($eventA.endTimestamp - $eventB.endTimestamp) <= 5s
The Drools engine does not support negative intervals for the finishes operator. If you use negative intervals, the Drools engine generates an error.
- finished by
-
This operator specifies if the correlated event starts after the current event but both events end at the same time. (The behavior of this operator is the reverse of the finishes operator behavior.)
For example, the following pattern matches if $eventB starts after $eventA starts and ends at the same time when $eventA ends:
$eventA : EventA(this finishedby $eventB)
You can also express this operator in the following way:
$eventA.startTimestamp < $eventB.startTimestamp && $eventA.endTimestamp == $eventB.endTimestamp
The finished by operator supports one optional parameter that sets the maximum time allowed between the end times of the two events:
$eventA : EventA(this finishedby[5s] $eventB)
This pattern matches if these conditions are met:
$eventA.startTimestamp < $eventB.startTimestamp && abs($eventA.endTimestamp - $eventB.endTimestamp) <= 5s
The Drools engine does not support negative intervals for the finished by operator. If you use negative intervals, the Drools engine generates an error.
- meets
-
This operator specifies if the current event ends at the same time when the correlated event starts.
For example, the following pattern matches if $eventA ends at the same time when $eventB starts:
$eventA : EventA(this meets $eventB)
You can also express this operator in the following way:
abs($eventB.startTimestamp - $eventA.endTimestamp) == 0
The meets operator supports one optional parameter that sets the maximum time allowed between the end time of the current event and the start time of the correlated event:
$eventA : EventA(this meets[5s] $eventB)
This pattern matches if these conditions are met:
abs($eventB.startTimestamp - $eventA.endTimestamp) <= 5s
The Drools engine does not support negative intervals for the meets operator. If you use negative intervals, the Drools engine generates an error.
- met by
-
This operator specifies if the correlated event ends at the same time when the current event starts. (The behavior of this operator is the reverse of the meets operator behavior.)
For example, the following pattern matches if $eventB ends at the same time when $eventA starts:
$eventA : EventA(this metby $eventB)
You can also express this operator in the following way:
abs($eventA.startTimestamp - $eventB.endTimestamp) == 0
The met by operator supports one optional parameter that sets the maximum distance between the end time of the correlated event and the start time of the current event:
$eventA : EventA(this metby[5s] $eventB)
This pattern matches if these conditions are met:
abs($eventA.startTimestamp - $eventB.endTimestamp) <= 5s
The Drools engine does not support negative intervals for the met by operator. If you use negative intervals, the Drools engine generates an error.
- overlaps
-
This operator specifies if the current event starts before the correlated event starts and it ends during the time frame that the correlated event occurs. The current event must end between the start and end times of the correlated event.
For example, the following pattern matches if $eventA starts before $eventB starts and then ends while $eventB occurs, before $eventB ends:
$eventA : EventA(this overlaps $eventB)
The overlaps operator supports up to two parameters:
-
If one parameter is defined, the value is the maximum distance between the start time of the correlated event and the end time of the current event.
-
If two parameters are defined, the values are the minimum distance (first value) and the maximum distance (second value) between the start time of the correlated event and the end time of the current event.
-
- overlapped by
-
This operator specifies if the correlated event starts before the current event starts and it ends during the time frame that the current event occurs. The correlated event must end between the start and end times of the current event. (The behavior of this operator is the reverse of the overlaps operator behavior.)
For example, the following pattern matches if $eventB starts before $eventA starts and then ends while $eventA occurs, before $eventA ends:
$eventA : EventA(this overlappedby $eventB)
The overlapped by operator supports up to two parameters:
-
If one parameter is defined, the value is the maximum distance between the start time of the current event and the end time of the correlated event.
-
If two parameters are defined, the values are the minimum distance (first value) and the maximum distance (second value) between the start time of the current event and the end time of the correlated event.
-
- starts
-
This operator specifies if the two events start at the same time but the current event ends before the correlated event ends.
For example, the following pattern matches if $eventA and $eventB start at the same time, and $eventA ends before $eventB ends:
$eventA : EventA(this starts $eventB)
You can also express this operator in the following way:
$eventA.startTimestamp == $eventB.startTimestamp && $eventA.endTimestamp < $eventB.endTimestamp
The starts operator supports one optional parameter that sets the maximum distance between the start times of the two events:
$eventA : EventA(this starts[5s] $eventB)
This pattern matches if these conditions are met:
abs($eventA.startTimestamp - $eventB.startTimestamp) <= 5s && $eventA.endTimestamp < $eventB.endTimestamp
The Drools engine does not support negative intervals for the starts operator. If you use negative intervals, the Drools engine generates an error.
- started by
-
This operator specifies if the two events start at the same time but the correlated event ends before the current event ends. (The behavior of this operator is the reverse of the starts operator behavior.)
For example, the following pattern matches if $eventA and $eventB start at the same time, and $eventB ends before $eventA ends:
$eventA : EventA(this startedby $eventB)
You can also express this operator in the following way:
$eventA.startTimestamp == $eventB.startTimestamp && $eventA.endTimestamp > $eventB.endTimestamp
The started by operator supports one optional parameter that sets the maximum distance between the start times of the two events:
$eventA : EventA(this startedby[5s] $eventB)
This pattern matches if these conditions are met:
abs( $eventA.startTimestamp - $eventB.startTimestamp ) <= 5s && $eventA.endTimestamp > $eventB.endTimestamp
The Drools engine does not support negative intervals for the started by operator. If you use negative intervals, the Drools engine generates an error.
3.6.6. Session clock implementations in the Drools engine
During complex event processing, events in the Drools engine may have temporal constraints and therefore require a session clock that provides the current time. For example, if a rule needs to determine the average price of a given stock over the last 60 minutes, the Drools engine must be able to compare the stock price event time stamp with the current time in the session clock.
The Drools engine supports a real-time clock and a pseudo clock. You can use one or both clock types depending on the scenario:
-
Rules testing: Testing requires a controlled environment, and when the tests include rules with temporal constraints, you must be able to control the input rules and facts and the flow of time.
-
Regular execution: The Drools engine reacts to events in real time and therefore requires a real-time clock.
-
Special environments: Specific environments may have specific time control requirements. For example, clustered environments may require clock synchronization or Java Enterprise Edition (JEE) environments may require a clock provided by the application server.
-
Rules replay or simulation: In order to replay or simulate scenarios, the application must be able to control the flow of time.
Consider your environment requirements as you decide whether to use a real-time clock or pseudo clock in the Drools engine.
- Real-time clock
-
The real-time clock is the default clock implementation in the Drools engine and uses the system clock to determine the current time for time stamps. To configure the Drools engine to use the real-time clock, set the KIE session configuration parameter to realtime:
Configure real-time clock in KIE session
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;

KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption(ClockTypeOption.get("realtime"));
- Pseudo clock
-
The pseudo clock implementation in the Drools engine is helpful for testing temporal rules and it can be controlled by the application. To configure the Drools engine to use the pseudo clock, set the KIE session configuration parameter to pseudo:
Configure pseudo clock in KIE session
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;

KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption(ClockTypeOption.get("pseudo"));
You can also use additional configurations and fact handlers to control the pseudo clock:
Control pseudo clock behavior in KIE session
import java.util.concurrent.TimeUnit;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.KieSessionConfiguration;
import org.kie.api.runtime.conf.ClockTypeOption;
import org.kie.api.runtime.rule.FactHandle;
import org.drools.core.time.SessionPseudoClock;

KieSessionConfiguration conf = KieServices.Factory.get().newKieSessionConfiguration();
conf.setOption( ClockTypeOption.get("pseudo") );

KieSession session = kbase.newKieSession(conf, null);
SessionPseudoClock clock = session.getSessionClock();

// While inserting facts, advance the clock as necessary.
FactHandle handle1 = session.insert(tick1);
clock.advanceTime(10, TimeUnit.SECONDS);
FactHandle handle2 = session.insert(tick2);
clock.advanceTime(30, TimeUnit.SECONDS);
FactHandle handle3 = session.insert(tick3);
3.6.7. Event streams and entry points
The Drools engine can process high volumes of events in the form of event streams. In DRL rule declarations, a stream is also known as an entry point. When you declare an entry point in a DRL rule or Java application, the Drools engine, at compile time, identifies and creates the proper internal structures to use data from only that entry point to evaluate that rule.
Facts from one entry point, or stream, can join facts from any other entry point in addition to facts already in the working memory of the Drools engine. Facts always remain associated with the entry point through which they entered the Drools engine. Facts of the same type can enter the Drools engine through several entry points, but facts that enter the Drools engine through entry point A can never match a pattern from entry point B.
Event streams have the following characteristics:
-
Events in the stream are ordered by time stamp. The time stamps may have different semantics for different streams, but they are always ordered internally.
-
Event streams usually have a high volume of events.
-
Atomic events in streams are usually not useful individually, only collectively in a stream.
-
Event streams can be homogeneous and contain a single type of event, or heterogeneous and contain events of different types.
3.6.7.1. Declaring entry points for rule data
You can declare an entry point (event stream) for events so that the Drools engine uses data from only that entry point to evaluate the rules. You can declare an entry point either implicitly by referencing it in DRL rules or explicitly in your Java application.
Use one of the following methods to declare the entry point:
-
In the DRL rule file, specify
from entry-point "<name>"
for the inserted fact:
Authorize withdrawal rule with "ATM Stream" entry point
rule "Authorize withdrawal"
when
  WithdrawRequest($ai : accountId, $am : amount) from entry-point "ATM Stream"
  CheckingAccount(accountId == $ai, balance > $am)
then
  // Authorize withdrawal.
end
Apply fee rule with "Branch Stream" entry point
rule "Apply fee on withdraws on branches"
when
  WithdrawRequest($ai : accountId, processed == true) from entry-point "Branch Stream"
  CheckingAccount(accountId == $ai)
then
  // Apply a $2 fee on the account.
end
Both example DRL rules from a banking application match the event WithdrawRequest with the fact CheckingAccount, but from different entry points. At run time, the Drools engine evaluates the Authorize withdrawal rule using data from only the "ATM Stream" entry point, and evaluates the Apply fee rule using data from only the "Branch Stream" entry point. Any events inserted into the "ATM Stream" can never match patterns for the "Apply fee" rule, and any events inserted into the "Branch Stream" can never match patterns for the "Authorize withdrawal" rule.
-
In the Java application code, use the
getEntryPoint()
method to specify and obtain anEntryPoint
object and insert facts into that entry point accordingly:
Java application code with EntryPoint object and inserted facts
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.EntryPoint;

// Create your KIE base and KIE session as usual.
KieSession session = ...

// Create a reference to the entry point.
EntryPoint atmStream = session.getEntryPoint("ATM Stream");

// Start inserting your facts into the entry point.
atmStream.insert(aWithdrawRequest);
Any DRL rules that specify
from entry-point "ATM Stream"
are then evaluated based on the data in this entry point only.
3.6.8. Sliding windows of time or length
In stream mode, the Drools engine can process events from a specified sliding window of time or length. A sliding time window is a specified period of time during which events can be processed. A sliding length window is a specified number of events that can be processed. When you declare a sliding window in a DRL rule or Java application, the Drools engine, at compile time, identifies and creates the proper internal structures to use data from only that sliding window to evaluate that rule.
For example, the following DRL rule snippets instruct the Drools engine to process only the stock points from the last 2 minutes (sliding time window) or to process only the last 10 stock points (sliding length window):
StockPoint() over window:time(2m)
StockPoint() over window:length(10)
3.6.8.1. Declaring sliding windows for rule data
You can declare a sliding window of time (flow of time) or length (number of occurrences) for events so that the Drools engine uses data from only that window to evaluate the rules.
In the DRL rule file, specify over window:<time_or_length>(<value>)
for the inserted fact.
For example, the following two DRL rules activate a fire alarm based on an average temperature. However, the first rule uses a sliding time window to calculate the average over the last 10 minutes while the second rule uses a sliding length window to calculate the average over the last one hundred temperature readings.
rule "Sound the alarm if temperature rises above threshold"
when
TemperatureThreshold($max : max)
Number(doubleValue > $max) from accumulate(
SensorReading($temp : temperature) over window:time(10m),
average($temp))
then
// Sound the alarm.
end
rule "Sound the alarm if temperature rises above threshold"
when
TemperatureThreshold($max : max)
Number(doubleValue > $max) from accumulate(
SensorReading($temp : temperature) over window:length(100),
average($temp))
then
// Sound the alarm.
end
The Drools engine discards any SensorReading
events that are more than 10 minutes old or that are not part of the last one hundred readings, and continues recalculating the average as the minutes or readings "slide" forward in real time.
The Drools engine does not automatically remove outdated events from the KIE session because other rules without sliding window declarations might depend on those events. The Drools engine stores events in the KIE session until the events expire either by explicit rule declarations or by implicit reasoning within the Drools engine based on inferred data in the KIE base.
3.6.9. Memory management for events
In stream mode, the Drools engine uses automatic memory management to maintain events that are stored in KIE sessions. The Drools engine can retract from a KIE session any events that no longer match any rule due to their temporal constraints and release any resources held by the retracted events.
The Drools engine uses either explicit or inferred expiration to retract outdated events:
-
Explicit expiration: The Drools engine removes events that are explicitly set to expire in rules that declare the
@expires
tag:
DRL rule snippet with explicit expiration
declare StockPoint
  @expires( 30m )
end
This example rule sets any
StockPoint
events to expire after 30 minutes and to be removed from the KIE session if no other rules use the events. -
Inferred expiration: The Drools engine can calculate the expiration offset for a given event implicitly by analyzing the temporal constraints in the rules:
DRL rule with temporal constraints
rule "Correlate orders"
when
  $bo : BuyOrder($id : id)
  $ae : AckOrder(id == $id, this after[0,10s] $bo)
then
  // Perform an action.
end
For this example rule, the Drools engine automatically calculates that whenever a
BuyOrder
event occurs, the Drools engine needs to store the event for up to 10 seconds and wait for the matchingAckOrder
event. After 10 seconds, the Drools engine infers the expiration and removes the event from the KIE session. AnAckOrder
event can only match an existingBuyOrder
event, so the Drools engine infers the expiration if no match occurs and removes the event immediately.
The Drools engine analyzes the entire KIE base to find the offset for every event type and to ensure that no other rules use the events that are pending removal. Whenever an implicit expiration clashes with an explicit expiration value, the Drools engine uses the greater time frame of the two to store the event longer.
4. Running
This section extends the KIE Running section, which should be read first, with specifics for the Drools run time.
4.1. KieRuntime
4.1.1. EntryPoint
The EntryPoint
provides the methods around inserting, updating and deleting facts.
The term "entry point" is related to the fact that we have multiple partitions in a Working Memory and you can choose which one you are inserting into.
The use of multiple entry points is more common in event processing use cases, but they can be used by pure rule applications as well.
The KieRuntime
interface provides the main interaction with the Drools engine.
It is available in rule consequences and process actions.
In this manual the focus is on the methods and interfaces related to rules, and the methods pertaining to processes will be ignored for now.
But you’ll notice that the KieRuntime
inherits methods from both the WorkingMemory
and the ProcessRuntime
, thereby providing a unified API to work with processes and rules.
When working with rules, three interfaces form the KieRuntime
: EntryPoint
, WorkingMemory
and the KieRuntime
itself.
4.1.1.1. Insert
In order for a fact to be evaluated against the rules in a KieBase
, it has to be inserted into the session.
This is done by calling the method insert(yourObject)
.
When a fact is inserted into the session, some of its properties might be immediately evaluated (eager evaluation) and some might be deferred for later evaluation (lazy evaluation). The exact behaviour depends on the Drools engine algorithm being used.
Expert systems typically use the term assert or assertion to refer to facts made available to the system.
However, due to "assert" being a keyword in most languages, we have decided to use the insert keyword instead.
When an Object is inserted it returns a FactHandle
.
This FactHandle
is the token used to represent your inserted object within the WorkingMemory
.
It is also used for interactions with the WorkingMemory
when you wish to delete or modify an object.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );
As mentioned in the KieBase section, a Working Memory may operate in two assertion modes: either equality or identity. Identity is the default.
Identity means that the Working Memory uses an IdentityHashMap
to store all asserted objects.
New instance assertions always result in the return of a new FactHandle
, but if an instance is asserted again then it returns the original fact handle, i.e., it ignores repeated insertions for the same object.
Equality means that the Working Memory uses a HashMap
to store all asserted objects.
An object instance assertion will only return a new FactHandle
if the inserted object is not equal (according to its equals()/hashCode()
methods) to an already existing fact.
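Equality mode is opted into through the KIE base configuration. A minimal sketch of switching a KIE base to equality mode, assuming the KIE base is created programmatically from the classpath container:
import org.kie.api.KieBase;
import org.kie.api.KieBaseConfiguration;
import org.kie.api.KieServices;
import org.kie.api.conf.EqualityBehaviorOption;

KieServices ks = KieServices.Factory.get();

// Switch from the default identity-based assertion mode to equality mode.
KieBaseConfiguration kieBaseConf = ks.newKieBaseConfiguration();
kieBaseConf.setOption( EqualityBehaviorOption.EQUALITY );

KieBase kbase = ks.getKieClasspathContainer().newKieBase( kieBaseConf );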
4.1.1.2. Delete
In order to remove a fact from the session, the method delete()
is used.
When a fact is deleted, any matches that are active and depend on that fact will be cancelled.
Note that it is possible to have rules that depend on the nonexistence of a fact, in which case deleting a fact may cause a rule to activate.
(See the not
and exists
keywords).
Expert systems typically use the term retract or retraction to refer to the operation of removing facts from the Working Memory.
Drools prefers the keyword delete.
Retraction may be done using the FactHandle
that was returned by the insert call.
On the right hand side of a rule the delete
statement is used, which works with a simple object reference.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );
....
ksession.delete( stiltonHandle );
4.1.1.3. Update
The Drools engine must be notified of modified facts, so that they can be reprocessed.
You must use the update()
method to notify the WorkingMemory
of changed objects for those objects that are not able to notify the WorkingMemory
themselves.
Notice that update()
always takes the modified object as a second parameter, which allows you to specify new instances for immutable objects.
On the right hand side of a rule the modify
statement is recommended, as it makes the changes and notifies the Drools engine in a single statement.
Alternatively, after changing a fact object’s field values through calls of setter methods you must invoke update
immediately, even before changing another fact, or you will cause problems with the indexing within the Drools engine.
The modify statement avoids this problem.
Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = workingMemory.insert( stilton );
...
stilton.setPrice( 100 );
workingMemory.update( stiltonHandle, stilton );
4.1.2. RuleRuntime
The RuleRuntime provides access to the Agenda, permits query executions, and lets you access named Entry Points.
4.1.2.1. Query
Queries are used to retrieve fact sets based on patterns, as they are used in rules.
Patterns may make use of optional parameters.
Queries can be defined in the KIE base, from where they are called up to return the matching results.
While iterating over the result collection, any identifier bound in the query can be used to access the corresponding fact or fact field by calling the get
method with the binding variable’s name as its argument.
If the binding refers to a fact object, its FactHandle can be retrieved by calling getFactHandle
, again with the variable’s name as the parameter.
QueryResults results =
ksession.getQueryResults( "my query", new Object[] { "string" } );
for ( QueryResultsRow row : results ) {
System.out.println( row.get( "varName" ) );
}
4.1.2.2. Live Queries
Invoking queries and processing the results by iterating over the returned set is not a good way to monitor changes over time.
To alleviate this, Drools provides Live Queries, which have a listener attached instead of returning an iterable result set.
These live queries stay open by creating a view and publishing change events for the contents of this view.
To activate, you start your query with parameters and listen to changes in the resulting view.
The dispose
method terminates the query and discontinues this reactive scenario.
final List updated = new ArrayList();
final List removed = new ArrayList();
final List added = new ArrayList();
ViewChangedEventListener listener = new ViewChangedEventListener() {
public void rowUpdated(Row row) {
updated.add( row.get( "$price" ) );
}
public void rowRemoved(Row row) {
removed.add( row.get( "$price" ) );
}
public void rowAdded(Row row) {
added.add( row.get( "$price" ) );
}
};
// Open the LiveQuery
LiveQuery query = ksession.openLiveQuery( "cheeses",
new Object[] { "cheddar", "stilton" },
listener );
...
...
query.dispose(); // Call dispose() to terminate the live query.
A Drools blog article contains an example of Glazed Lists integration for live queries.
4.1.3. StatefulRuleSession
The StatefulRuleSession
is inherited by the KieSession
and provides the rule related methods that are relevant from outside of the Drools engine.
4.1.3.1. Agenda Filters
AgendaFilter objects are optional implementations of the filter interface which are used to allow or deny the firing of a match. What you filter on is entirely up to the implementation. Drools 4.0 used to supply some out-of-the-box filters, which have not been exposed in the Drools 5.0 knowledge-api, but they are simple to implement and the Drools 4.0 code base can be referred to.
To use a filter specify it while calling fireAllRules()
.
The following example permits only rules ending in the string "Test"
.
All others will be filtered out.
ksession.fireAllRules( new RuleNameEndsWithAgendaFilter( "Test" ) );
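Because the AgendaFilter interface consists of a single accept method, writing your own filter is straightforward. A minimal sketch (the class name and the prefix used are illustrative, not part of the Drools API):
import org.kie.api.runtime.rule.AgendaFilter;
import org.kie.api.runtime.rule.Match;

// Permits only matches whose rule name starts with the given prefix; all others are denied.
public class RuleNamePrefixAgendaFilter implements AgendaFilter {

    private final String prefix;

    public RuleNamePrefixAgendaFilter( String prefix ) {
        this.prefix = prefix;
    }

    @Override
    public boolean accept( Match match ) {
        return match.getRule().getName().startsWith( prefix );
    }
}
Such a filter is then passed to fireAllRules() in the same way as the built-in filter above, for example ksession.fireAllRules( new RuleNamePrefixAgendaFilter( "Audit" ) ).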
4.2. Event Model
The event package provides means to be notified of Drools engine events, including rules firing, objects being asserted, etc. This allows you, for instance, to separate logging and auditing activities from the main part of your application (and the rules).
The WorkingMemoryEventManager
allows for listeners to be added and removed, so that events for the working memory and the agenda can be listened to.
The following code snippet shows how a simple agenda listener is declared and attached to a session. It will print matches after they have fired.
ksession.addEventListener( new DefaultAgendaEventListener() {
public void afterMatchFired(AfterMatchFiredEvent event) {
super.afterMatchFired( event );
System.out.println( event );
}
});
Drools also provides DebugRuleRuntimeEventListener
and DebugAgendaEventListener
which implement each method with a debug print statement.
To print all Working Memory events, you add a listener like this:
ksession.addEventListener( new DebugRuleRuntimeEventListener() );
The events currently supported are:
-
MatchCreatedEvent
-
MatchCancelledEvent
-
BeforeMatchFiredEvent
-
AfterMatchFiredEvent
-
AgendaGroupPushedEvent
-
AgendaGroupPoppedEvent
-
ObjectInsertEvent
-
ObjectDeletedEvent
-
ObjectUpdatedEvent
-
ProcessCompletedEvent
-
ProcessNodeLeftEvent
-
ProcessNodeTriggeredEvent
-
ProcessStartEvent
4.3. Rule Execution Modes
Drools provides two modes for rule execution - passive and active.
As a general guideline, Passive Mode is most suitable for Drools engine applications which need to explicitly control when the Drools engine shall evaluate and fire the rules, or for CEP applications making use of the Pseudo Clock. Active Mode is most effective for Drools engine applications which delegate control of when rules are evaluated and fired to the Drools engine, or for typical CEP applications making use of the Real Time Clock.
4.3.1. Passive Mode
In Passive Mode, the user is responsible not only for working memory operations, such as insert(), but also for deciding when the rules are to evaluate the data and fire the resulting rule instantiations, by calling fireAllRules().
An example outline of Drools code for a CEP application making use of Passive Mode:
KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption( ClockTypeOption.get("pseudo") );
KieSession session = kbase.newKieSession( config, null );
SessionPseudoClock clock = session.getSessionClock();
session.insert( tick1 );
session.fireAllRules();
clock.advanceTime(1, TimeUnit.SECONDS);
session.insert( tick2 );
session.fireAllRules();
clock.advanceTime(1, TimeUnit.SECONDS);
session.insert( tick3 );
session.fireAllRules();
session.dispose();
4.3.2. Active Mode
Drools offers a fireUntilHalt() method that starts the Drools engine in Active Mode, which is asynchronous in behavior: rules are continually evaluated and fired until a halt() call is made.
This is especially useful for CEP scenarios that require what is commonly known as "active queries".
Please note that calling fireUntilHalt() blocks the current thread, while the Drools engine starts and continues running asynchronously until halt() is called on the KieSession. It is therefore suggested to call fireUntilHalt() from a dedicated thread, so the current thread does not get blocked indefinitely; this also enables the current thread to call halt() at a later stage, as shown in the examples below.
An example outline of Drools code for a CEP application making use of Active Mode:
KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration();
config.setOption( ClockTypeOption.get("realtime") );
KieSession session = kbase.newKieSession( config, null );
new Thread( new Runnable() {
@Override
public void run() {
session.fireUntilHalt();
}
} ).start();
session.insert( tick1 );
... Thread.sleep( 1000L ); ...
session.insert( tick2 );
... Thread.sleep( 1000L ); ...
session.insert( tick3 );
session.halt();
session.dispose();
Generally, it is not recommended to mix fireAllRules() and fireUntilHalt(), especially from different threads. However, the Drools engine is able to handle such situations safely, thanks to its internal state machine. If fireAllRules() is running and a call to fireUntilHalt() is made, the Drools engine will wait until fireAllRules() is finished and then start fireUntilHalt(). However, if fireUntilHalt() is running and fireAllRules() is called, the latter is ignored and will just return directly. For more details about thread safety and the internal state machine, see the section "Improved multi-threading behaviour".
4.3.2.1. Performing KieSession operations atomically when in Active Mode
When in Active Mode, the Drools engine is in control of when the rules are evaluated and fired; therefore it is important that operations on the KieSession are performed in a thread-safe manner. Additionally, from a client-side perspective, there might be a need for more than one operation to be called on the KieSession in between rule evaluations, while having the Drools engine treat them as a single atomic operation: for example, inserting more than one fact at a given time, but having the Drools engine wait until all the inserts are done before evaluating the rules again.
Drools offers a submit()
method to group and perform operations on the KieSession as a thread-safe atomic action, while in Active Mode.
An example outline of Drools code to perform KieSession operations atomically when in Active Mode:
KieSession session = ...;
new Thread( new Runnable() {
@Override
public void run() {
session.fireUntilHalt();
}
} ).start();
final FactHandle fh = session.insert( fact_a );
... Thread.sleep( 1000L ); ...
session.submit( new KieSession.AtomicAction() {
@Override
public void execute( KieSession kieSession ) {
fact_a.setField("value");
kieSession.update( fh, fact_a );
kieSession.insert( fact_1 );
kieSession.insert( fact_2 );
kieSession.insert( fact_3 );
}
} );
... Thread.sleep( 1000L ); ...
session.insert( fact_z );
session.halt();
session.dispose();
As a variation of the previous example, the fact handle can also be retrieved from the KieSession:
...
session.insert( fact_a );
... Thread.sleep( 1000L ); ...
session.submit( new KieSession.AtomicAction() {
@Override
public void execute( KieSession kieSession ) {
final FactHandle fh = kieSession.getFactHandle( fact_a );
fact_a.setField("value");
kieSession.update( fh, fact_a );
kieSession.insert( fact_1 );
kieSession.insert( fact_2 );
kieSession.insert( fact_3 );
}
} );
...
4.4. Propagation modes
The introduction of PHREAK as the default algorithm for the Drools engine made rule evaluation lazy. This new lazy behavior enabled a significant performance boost but, in some very specific cases, breaks the semantics of a few Drools features.
More precisely, in some circumstances it is necessary to propagate the insertion of a new fact into the session immediately. For instance, Drools allows a query to be executed in pull-only (or passive) mode by prepending a '?' symbol to its invocation, as in the following example:
query Q (Integer i)
String( this == i.toString() )
end
rule R when
$i : Integer()
?Q( $i; )
then
System.out.println( $i );
end
In this case, since the query is passive, it shouldn’t react to the insertion of a String matching the join condition in the query itself. In other words this sequence of commands
KieSession ksession = ...
ksession.insert(1);
ksession.insert("1");
ksession.fireAllRules();
shouldn’t cause rule R to fire, because the String satisfying the query condition has been inserted after the Integer and the passive query shouldn’t react to this insertion. Conversely, the rule should fire if the insertion sequence is inverted, because the insertion of the Integer, at a time when the passive query can already be satisfied by an existing String, will trigger it.
Unfortunately the lazy nature of PHREAK doesn’t allow the Drools engine to make any distinction regarding the insertion sequence of the two facts, so the rule will fire in both cases. In circumstances like this it is necessary to evaluate the rule eagerly as done by the old RETEOO-based engine.
In other cases it is required that the propagation is eager, meaning that it is not immediate, but has to happen before the Drools engine/agenda starts its scheduled evaluations. For instance, this is necessary when a rule has the no-loop or the lock-on-active attribute; when this happens, this propagation mode is automatically enforced by the Drools engine.
To cover these use cases, and in all other situations where an immediate or eager rule evaluation is required, it is possible to declaratively specify so by annotating the rule itself with @Propagation(Propagation.Type), where Propagation.Type is an enumeration with 3 possible values:
-
IMMEDIATE means that the propagation is performed immediately.
-
EAGER means that the propagation is performed lazily but eagerly evaluated before scheduled evaluations.
-
LAZY means that the propagation is totally lazy; this is the default PHREAK behaviour
This means that the following DRL:
query Q (Integer i)
String( this == i.toString() )
end
rule R @Propagation(IMMEDIATE) when
$i : Integer()
?Q( $i; )
then
System.out.println( $i );
end
will make rule R fire if and only if the Integer is inserted after the String, thus behaving in accordance with the semantics of the passive query.
4.5. Commands and the CommandExecutor
The CommandFactory
allows for commands to be executed on both stateful and stateless KIE sessions, the only difference being that the stateless KIE session executes fireAllRules()
at the end before disposing the session.
The currently supported commands are:
-
FireAllRules
-
GetGlobal
-
SetGlobal
-
InsertObject
-
InsertElements
-
Query
-
StartProcess
-
BatchExecution
InsertObject will insert a single object, with an optional "out" identifier. InsertElements
will iterate an Iterable, inserting each of the elements.
What this means is that a Stateless KIE session is no longer limited to just inserting objects, it can now start processes or execute queries, and do this in any order.
StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults =
ksession.execute( CommandFactory.newInsert( new Cheese( "stilton" ), "stilton_id" ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton_id" );
The execute method always returns an ExecutionResults
instance, which allows access to any command results if they specify an out identifier such as the "stilton_id" above.
StatelessKieSession ksession = kbase.newStatelessKieSession();
Command cmd = CommandFactory.newInsertElements( Arrays.asList( new Object[] {
new Cheese( "stilton" ),
new Cheese( "brie" ),
new Cheese( "cheddar" ),
}));
ExecutionResults bresults = ksession.execute( cmd );
The execute method only allows for a single command.
That’s where BatchExecution
comes in, which represents a composite command, created from a list of commands.
Now, execute will iterate over the list and execute each command in turn.
This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(…)
call, which is quite powerful.
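As an illustration, such a composite command might be assembled with CommandFactory.newBatchExecution(). This is a hedged sketch that reuses the Cheese fact and the parameterless "cheeses" query from the surrounding examples; the process ID passed to newStartProcess() is purely illustrative, and import packages may vary between Drools versions:
import java.util.ArrayList;
import java.util.List;

import org.kie.api.command.Command;
import org.kie.api.runtime.ExecutionResults;
import org.kie.internal.command.CommandFactory;

List<Command> commands = new ArrayList<>();
commands.add( CommandFactory.newInsert( new Cheese( "stilton" ), "stilton_id" ) );
commands.add( CommandFactory.newStartProcess( "org.example.cheeseProcess" ) );   // illustrative process ID
commands.add( CommandFactory.newFireAllRules() );
commands.add( CommandFactory.newQuery( "cheeses", "cheeses" ) );

// Execute the whole batch in a single call; results are keyed by the out identifiers.
ExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( commands ) );
Cheese stilton = (Cheese) results.getValue( "stilton_id" );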
As mentioned previously, the StatelessKieSession will execute fireAllRules()
automatically at the end.
However the keen-eyed reader probably has already noticed the FireAllRules
command and wondered how that works with a StatelessKieSession.
The FireAllRules
command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.
A custom XStream marshaller can be used with the Drools Pipeline to achieve XML scripting, which is perfect for services.
Here are two simple XML samples, one for the BatchExecution and one for the ExecutionResults
.
<batch-execution>
<insert out-identifier='outStilton'>
<org.drools.compiler.Cheese>
<type>stilton</type>
<price>25</price>
<oldPrice>0</oldPrice>
</org.drools.compiler.Cheese>
</insert>
</batch-execution>
<execution-results>
<result identifier='outStilton'>
<org.drools.compiler.Cheese>
<type>stilton</type>
<oldPrice>25</oldPrice>
<price>30</price>
</org.drools.compiler.Cheese>
</result>
</execution-results>
Spring and Camel, covered in the integrations book, facilitate declarative services.
<batch-execution>
<insert out-identifier="stilton">
<org.drools.compiler.Cheese>
<type>stilton</type>
<price>1</price>
<oldPrice>0</oldPrice>
</org.drools.compiler.Cheese>
</insert>
<query out-identifier='cheeses2' name='cheesesWithParams'>
<string>stilton</string>
<string>cheddar</string>
</query>
</batch-execution>
The CommandExecutor
returns an ExecutionResults
, and this is handled by the pipeline code snippet as well.
A similar output for the <batch-execution> XML sample above would be:
<execution-results>
<result identifier="stilton">
<org.drools.compiler.Cheese>
<type>stilton</type>
<price>2</price>
</org.drools.compiler.Cheese>
</result>
<result identifier='cheeses2'>
<query-results>
<identifiers>
<identifier>cheese</identifier>
</identifiers>
<row>
<org.drools.compiler.Cheese>
<type>cheddar</type>
<price>2</price>
<oldPrice>0</oldPrice>
</org.drools.compiler.Cheese>
</row>
<row>
<org.drools.compiler.Cheese>
<type>cheddar</type>
<price>1</price>
<oldPrice>0</oldPrice>
</org.drools.compiler.Cheese>
</row>
</query-results>
</result>
</execution-results>
The BatchExecutionHelper
provides a configured XStream instance to support the marshalling of Batch Executions, where the resulting XML can be used as a message format, as shown above.
Configured converters only exist for the commands supported via the Command Factory.
The user may add other converters for their user objects.
This is very useful for scripting stateless or stateful KIE sessions, especially when services are involved.
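A hedged sketch of how that XStream instance might be used to accept a <batch-execution> message and return an <execution-results> payload; ksession is the StatelessKieSession from the earlier examples, and the exact package of BatchExecutionHelper may differ between versions:
import com.thoughtworks.xstream.XStream;

import org.kie.api.command.BatchExecutionCommand;
import org.kie.api.runtime.ExecutionResults;
import org.kie.internal.runtime.helper.BatchExecutionHelper;

// Obtain the XStream instance pre-configured with converters for the supported commands.
XStream xstream = BatchExecutionHelper.newXStreamMarshaller();

// An incoming <batch-execution> message, for example read from a web service request.
String inXml =
    "<batch-execution>"
  + "  <insert out-identifier='outStilton'>"
  + "    <org.drools.compiler.Cheese><type>stilton</type><price>25</price><oldPrice>0</oldPrice></org.drools.compiler.Cheese>"
  + "  </insert>"
  + "</batch-execution>";

// Unmarshal the XML into a command, execute it, and marshal the results back to XML.
BatchExecutionCommand batch = (BatchExecutionCommand) xstream.fromXML( inXml );
ExecutionResults results = ksession.execute( batch );
String outXml = xstream.toXML( results );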
There is currently no XML schema to support schema validation.
The basic format is outlined here, and the drools-pipeline module has an illustrative unit test, XStreamBatchExecutionTest.
The root element is <batch-execution> and it can contain zero or more command elements.
<batch-execution>
...
</batch-execution>
This contains a list of elements that represent commands; the supported commands are limited to those provided by the Command Factory. The most basic of these is the <insert> element, which inserts objects. The content of the insert element is the user object, as dictated by XStream.
<batch-execution>
<insert>
...<!-- any user object -->
</insert>
</batch-execution>
The insert element supports an "out-identifier" attribute, indicating that the inserted object should also be returned as part of the result payload.
<batch-execution>
<insert out-identifier='userVar'>
...
</insert>
</batch-execution>
It’s also possible to insert a collection of objects using the <insert-elements> element.
This command does not support an out-identifier.
The org.domain.UserClass
is just an illustrative user object that XStream would serialize.
<batch-execution>
<insert-elements>
<org.domain.UserClass>
...
</org.domain.UserClass>
<org.domain.UserClass>
...
</org.domain.UserClass>
<org.domain.UserClass>
...
</org.domain.UserClass>
</insert-elements>
</batch-execution>
While the out-identifier attribute is useful in returning specific instances as a result payload, we often wish to run actual queries.
Both parameter and parameterless queries are supported.
The name
attribute is the name of the query to be called, and the out-identifier
is the identifier to be used for the query results in the <execution-results>
payload.
<batch-execution>
<query out-identifier='cheeses' name='cheeses'/>
<query out-identifier='cheeses2' name='cheesesWithParams'>
<string>stilton</string>
<string>cheddar</string>
</query>
</batch-execution>
4.6. KieSessions pool
In high volume use cases, KieSessions get created and disposed with a very high frequency. In general this operation is not extremely time consuming, but when repeated millions of times it can become a bottleneck and also requires a huge GC effort.
It is possible to greatly alleviate this problem by using a pool of KieSessions. To obtain such a pool from a KieContainer, it is enough to invoke on it the method
KieContainerSessionsPool KieContainer.newKieSessionsPool(int initialSize)
where initialSize is the number of KieSessions that will be initially created in the pool. However, if required by the running application, the number of KieSessions in the pool will dynamically grow beyond that value. At this point you can create KieSessions from that pool as you would normally do from a KieContainer:
KieContainerSessionsPool pool = kContainer.newKieSessionsPool(10);
KieSession kSession = pool.newKieSession();
Now you can use the KieSession as normal, and when you call dispose() on it, instead of being destroyed, it just gets reset and pushed back into the pool. Note that using this pool also affects the case when you have one or more StatelessKieSessions and you keep reusing them with multiple calls to the execute() method. In fact, a StatelessKieSession created directly from a KieContainer will internally create a new KieSession for each execute() invocation.
Conversely, if you create the StatelessKieSession from the pool, it internally uses the KieSessions provided by the pool itself. In other words, even if you asked the pool for a StatelessKieSession, what is actually pooled are the KieSessions that it wraps.
Once you’re done with the pool, it is required that you call the shutdown() method on it to avoid memory leaks. Alternatively, calling dispose() on the whole KieContainer will also automatically shut down any pools created from it.
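A brief sketch pulling these points together; it assumes the pool also exposes newStatelessKieSession(), and reuses the kContainer and Cheese types from the surrounding examples:
import org.kie.api.runtime.KieContainerSessionsPool;
import org.kie.api.runtime.StatelessKieSession;

// Create a pool holding 10 KieSessions.
KieContainerSessionsPool pool = kContainer.newKieSessionsPool(10);

// A stateless session obtained from the pool reuses the pooled KieSessions
// for each execute() call instead of creating a new one every time.
StatelessKieSession stateless = pool.newStatelessKieSession();
stateless.execute( new Cheese( "stilton" ) );

// Shut the pool down when it is no longer needed to avoid memory leaks.
pool.shutdown();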
4.7. Rule units
You can use rule units to partition a rule set into smaller units, bind different data sources to those units, and then execute the individual unit. A rule unit consists of data sources, global variables, and rules.
You can define a rule unit by implementing the RuleUnit
interface as shown in the following example:
package org.mypackage.myunit;
public static class AdultUnit implements RuleUnit {
private int adultAge;
private DataSource<Person> persons;
public AdultUnit( ) { }
public AdultUnit( DataSource<Person> persons, int age ) {
this.persons = persons;
this.adultAge = age;
}
// A DataSource of Persons in this rule unit
public DataSource<Person> getPersons() {
return persons;
}
// A global variable valid in this rule unit
public int getAdultAge() {
return adultAge;
}
// --- life cycle methods
@Override
public void onStart() {
System.out.println("AdultUnit started.");
}
@Override
public void onEnd() {
System.out.println("AdultUnit ended.");
}
}
In this example, persons
is a source of facts of type Person
. It represents the part of the working memory related to the specific entry point used when the rule unit is evaluated. The adultAge
global variable is accessible from all the rules belonging to this rule unit. The last two methods are part of the rule unit life cycle and are invoked by the Drools engine.
The lifecycle of a rule unit can be monitored by overriding the following methods:
Method | Invoked when |
---|---|
onStart() | the Drools engine starts evaluating the unit |
onEnd() | the evaluation of this unit terminates |
onSuspend() | the execution of the unit is suspended (only for runUntilHalt) |
onResume() | the execution of the unit is resumed (only for runUntilHalt) |
onYield(RuleUnit other) | the consequence of a rule in a given rule unit triggers the execution of a different unit |
All of these methods have an empty default implementation inside the RuleUnit
interface, so their implementation is optional. At this point, it is possible to add one or more rules to this rule unit. By default, all the rules in a DRL file are automatically associated with a rule unit following a naming convention based on the name of the DRL file itself. If the DRL file is in the same package and has the same name as a class implementing the RuleUnit
interface, then all of the rules in that DRL file will implicitly belong to that unit.
For example, all the rules in the DRL file named AdultUnit.drl
in the package org.mypackage.myunit
are automatically part of the rule unit org.mypackage.myunit.AdultUnit
.
You can avoid using this naming convention and explicitly declare the unit to which the rules in a DRL file belong by using the new unit
keyword. The unit declaration must always immediately follow the package declaration and contain the name of the class in that package of which the rules in the DRL file are part.
package org.mypackage.myunit
unit AdultUnit
rule Adult when
$p : Person(age >= adultAge) from persons
then
System.out.println($p.getName() + " is adult and greater than " + adultAge);
end
You cannot mix rules with and without a rule unit in the same KIE base.
Additionally, you can rewrite the same pattern in a more convenient way using the OOPath notation introduced in Drools 6, as shown in the following example:
package org.mypackage.myunit
unit AdultUnit
rule Adult when
$p : /persons[age >= adultAge]
then
System.out.println($p.getName() + " is adult and greater than " + adultAge);
end
In this example, the persons matched by the left-hand side of the rule are retrieved from the DataSource
contained in the rule unit class with
the same name. The adultAge
variable is used in both the left-hand side and right-hand side of the rule in the same way that a global variable is defined at the DRL level. The persons
DataSource acts as a specific entry point, feeding the working memory.
The easiest way to create a DataSource
is using a fixed set of data, as shown in the following example:
DataSource<Person> persons = DataSource.create( new Person( "Mario", 42 ),
new Person( "Marilena", 44 ),
new Person( "Sofia", 4 ) );
To execute one or more rule units defined in a given KIE base, create a new RuleUnitExecutor
and bind it to the KIE base:
KieBase kbase = kieContainer.getKieBase();
RuleUnitExecutor executor = RuleUnitExecutor.create().bind( kbase );
At this point, create the AdultUnit
by passing the persons DataSource
to it and running it on the RuleUnitExecutor
:
RuleUnit adultUnit = new AdultUnit(persons, 18);
executor.run( adultUnit );
This produces the following output:
org.mypackage.myunit.AdultUnit started.
Marilena is adult and greater than 18
Mario is adult and greater than 18
org.mypackage.myunit.AdultUnit ended.
Instead of explicitly creating the rule unit instance, you can pass to the executor the rule unit class that you want to run and let the executor create an instance of it. You can then set the DataSource
and other variables before running it. In order to do so, ensure that you have previously registered those variables on the executor so that the following code produces exactly the same result as the former example:
executor.bindVariable( "persons", persons );
.bindVariable( "adultAge", 18 );
executor.run( AdultUnit.class );
The name passed to the RuleUnitExecutor.bindVariable()
method is used at run time to bind the variable to the field of the rule unit class with the same name. For example, in the previous example the RuleUnitExecutor
inserts into the new rule unit the data source bound to the name "persons"
and the value 18
bound to the name "adultAge"
, assigning them to the fields with the corresponding names inside the AdultUnit
class.
You can override this default and explicitly define a logical binding name for each field of the rule unit class using the @UnitVar
annotation. For example, the field binding in the following class can be redefined with alternative names:
package org.mypackage.myunit;
public static class AdultUnit implements RuleUnit {
@UnitVar("minAge")
private int adultAge = 18;
@UnitVar("data")
private DataSource<Person> persons;
}
You can then bind the variables to the executor using those alternative names and run the unit:
executor.bindVariable( "data", persons );
.bindVariable( "minAge", 18 );
executor.run( AdultUnit.class );
You can execute a rule unit in passive mode as shown in the previous example (equivalent to invoking fireAllRules
on an entire KIE session)
or in active mode using runUntilHalt
(equivalent to the KIE session fireUntilHalt
).
Like fireUntilHalt
, runUntilHalt
is blocking and therefore has to be issued on a separate thread:
new Thread( () -> executor.runUntilHalt( adultUnit ) ).start();
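The following is a minimal sketch of the active-mode pattern. It assumes the executor exposes a halt() method analogous to KieSession.halt(); check the RuleUnitExecutor API of your Drools version.
Thread engineThread = new Thread( () -> executor.runUntilHalt( adultUnit ) );
engineThread.start();

// Facts inserted from other threads are evaluated as they arrive
persons.insert( new Person( "Luca", 35 ) );

// ... when no more work is expected, stop the executor
// (assumption: halt() mirrors KieSession.halt())
executor.halt();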
4.7.1. Data sources
A DataSource
is a source of the data processed by a given rule unit. A rule unit can have zero or more data sources, and
each DataSource declared inside a rule unit corresponds to a different entry point into the rule unit executor. A DataSource
can be shared by different units, but in that case there will be many different entry points, one for each unit, through which
the same objects are inserted.
In other words, the DataSource
represents the entry point of the rule unit, so it is possible to insert a new fact into it:
Person mario = new Person( "Mario", 42 );
FactHandle marioFh = persons.insert( mario );
Modify the fact, optionally specifying the set of properties that have been modified in order to leverage property reactivity:
mario.setAge( 43 );
persons.update( marioFh, mario, "age" );
or delete it:
persons.delete( marioFh );
4.7.2. Imperatively running and declaratively guarding a RuleUnit
As anticipated, you can define multiple rule units in the same KIE base and these units can work in a coordinated way by invoking or guarding the execution of each other. To demonstrate this, suppose you have the following two DRL files, each containing a rule belonging to a distinct rule unit.
package org.mypackage.myunit
unit AdultUnit
rule Adult when
Person(age >= 18, $name : name) from persons
then
System.out.println($name + " is adult");
end
package org.mypackage.myunit
unit NotAdultUnit
rule NotAdult when
$p : Person(age < 18, $name : name) from persons
then
System.out.println($name + " is NOT adult");
modify($p) { setAge(18); }
drools.run( AdultUnit.class );
end
Also suppose you have a RuleUnitExecutor
created from the KieBase
built out of these rules and a DataSource
of Persons
bound to it.
RuleUnitExecutor executor = RuleUnitExecutor.create().bind( kbase );
DataSource<Person> persons = executor.newDataSource( "persons",
new Person( "Mario", 42 ),
new Person( "Marilena", 44 ),
new Person( "Sofia", 4 ) );
Note that in this case we are creating the DataSource
directly out of the RuleUnitExecutor
and binding it to the
"persons" variable in a single statement.
At this point, executing the NotAdultUnit unit produces the following output:
Sofia is NOT adult
Mario is adult
Marilena is adult
Sofia is adult
In fact the NotAdult rule finds a match when evaluating the person "Sofia", who has an age lower than 18. It then modifies
her age to 18 and, with the statement drools.run( AdultUnit.class )
, triggers the execution of the other unit, which has a
rule that can now fire for all three persons in the DataSource
. This means that the drools.run()
statement inside a
consequence is the way to imperatively interrupt the execution of a rule unit and cede control to a different rule unit.
Conversely, the drools.guard()
statement allows you to declaratively schedule the execution of another rule unit when the
condition in the LHS of the rule containing that statement is met. More precisely, using this mechanism a rule in a given
rule unit acts as a guard for a different unit. This means that, when the Drools engine produces at least one match for the LHS
of the guarding rule, the guarded RuleUnit is considered active. Of course a RuleUnit can have more than one guarding rule.
Let’s see how this works with another practical example. Suppose you have a simple BoxOffice
class:
public class BoxOffice {
private boolean open;
public BoxOffice( boolean open ) {
this.open = open;
}
public boolean isOpen() {
return open;
}
public void setOpen( boolean open ) {
this.open = open;
}
}
and a BoxOfficeUnit
with a data source of box offices.
public class BoxOfficeUnit implements RuleUnit {
private DataSource<BoxOffice> boxOffices;
public DataSource<BoxOffice> getBoxOffices() {
return boxOffices;
}
}
We now introduce the requirement to keep selling tickets for the event as long as there is at least one open box office.
To achieve this, let’s define a second unit with a DataSource
of persons and a second one of tickets.
public class TicketIssuerUnit implements RuleUnit {
private DataSource<Person> persons;
private DataSource<AdultTicket> tickets;
private List<String> results;
public TicketIssuerUnit() { }
public TicketIssuerUnit( DataSource<Person> persons, DataSource<AdultTicket> tickets ) {
this.persons = persons;
this.tickets = tickets;
}
public DataSource<Person> getPersons() {
return persons;
}
public DataSource<AdultTicket> getTickets() {
return tickets;
}
public List<String> getResults() {
return results;
}
}
Then we can define a rule in the BoxOfficeUnit that guards this second unit.
package org.mypackage.myunit;
unit BoxOfficeUnit;
rule BoxOfficeIsOpen when
$box: /boxOffices[ open ]
then
drools.guard( TicketIssuerUnit.class );
end
In this way we achieve what was anticipated: when running the BoxOfficeUnit, at some point it will also evaluate the rules in the TicketIssuerUnit defined as
package org.mypackage.myunit;
unit TicketIssuerUnit;
rule IssueAdultTicket when
$p: /persons[ age >= 18 ]
then
tickets.insert(new AdultTicket($p));
end
rule RegisterAdultTicket when
$t: /tickets
then
results.add( $t.getPerson().getName() );
end
which is guarded by the BoxOfficeIsOpen rule for as long as there exists at least one set of facts satisfying the LHS pattern of that rule. In other words, the existence of at least one open box office keeps the guarding rule, and in turn its guarded unit, active, as shown in the following use case.
DataSource<Person> persons = executor.newDataSource( "persons" );
DataSource<BoxOffice> boxOffices = executor.newDataSource( "boxOffices" );
DataSource<AdultTicket> tickets = executor.newDataSource( "tickets" );
List<String> list = new ArrayList<>();
executor.bindVariable( "results", list );
// two open box offices
BoxOffice office1 = new BoxOffice(true);
FactHandle officeFH1 = boxOffices.insert( office1 );
BoxOffice office2 = new BoxOffice(true);
FactHandle officeFH2 = boxOffices.insert( office2 );
persons.insert(new Person("Mario", 40));
// fire BoxOfficeIsOpen -> run TicketIssuerUnit -> fire RegisterAdultTicket
executor.run(BoxOfficeUnit.class);
assertEquals( 1, list.size() );
assertEquals( "Mario", list.get(0) );
list.clear();
persons.insert(new Person("Matteo", 30));
executor.run(BoxOfficeUnit.class); // fire RegisterAdultTicket
assertEquals( 1, list.size() );
assertEquals( "Matteo", list.get(0) );
list.clear();
// close one box office, the other is still open
office1.setOpen(false);
boxOffices.update(officeFH1, office1);
persons.insert(new Person("Mark", 35));
executor.run(BoxOfficeUnit.class);
assertEquals( 1, list.size() );
assertEquals( "Mark", list.get(0) );
list.clear();
// all box offices are now closed
office2.setOpen(false);
boxOffices.update(officeFH2, office2); // guarding rule no longer true
persons.insert(new Person("Edson", 35));
executor.run(BoxOfficeUnit.class); // no fire
assertEquals( 0, list.size() );
4.7.3. RuleUnit identity
Since a rule can guard multiple rule units and at the same time a unit can be guarded and then activated by multiple rules,
it is necessary to clearly define what the identity of a given unit is. By default the identity of a unit is simply the
rule unit class. This is encoded in the getUnitIdentity()
default method of the RuleUnit
interface:
default Identity getUnitIdentity() {
return new Identity( getClass() );
}
and implies that each unit is treated as a singleton by the RuleUnitExecutor
. To demonstrate this, let’s suppose you have
a simple RuleUnit
class with only a DataSource
accepting any kind of object:
public class Unit0 implements RuleUnit {
private DataSource<Object> input;
public DataSource<Object> getInput() {
return input;
}
}
together with a rule belonging to this unit that guards another unit using two different conditions:
package org.mypackage.myunit
unit Unit0
rule GuardAgeCheck when
$i: /input#Integer
$s: /input#String
then
drools.guard( new AgeCheckUnit($i) );
drools.guard( new AgeCheckUnit($s.length()) );
end
This second RuleUnit
is intended to check the age of a set of persons. It has a DataSource
of the persons to check,
a minAge variable against which to perform the check, and a list where the results are accumulated:
public class AgeCheckUnit implements RuleUnit {
private final int minAge;
private DataSource<Person> persons;
private List<String> results;
public AgeCheckUnit( int minAge ) {
this.minAge = minAge;
}
public DataSource<Person> getPersons() {
return persons;
}
public int getMinAge() {
return minAge;
}
public List<String> getResults() {
return results;
}
}
while the corresponding rule, which actually performs the check on the persons in the DataSource
, is the following:
package org.mypackage.myunit
unit AgeCheckUnit
rule CheckAge when
$p : /persons[ age > minAge ]
then
results.add($p.getName() + ">" + minAge);
end
At this point we can create a RuleUnitExecutor
, bind it to the KIE base containing these two units and also create
the two DataSource
s to feed those units.
RuleUnitExecutor executor = RuleUnitExecutor.create().bind( kbase );
DataSource<Object> input = executor.newDataSource( "input" );
DataSource<Person> persons = executor.newDataSource( "persons",
new Person( "Mario", 42 ),
new Person( "Sofia", 4 ) );
List<String> results = new ArrayList<>();
executor.bindVariable( "results", results );
We are now ready to insert some objects into the input data source and execute the Unit0.
ds.insert("test");
ds.insert(3);
ds.insert(4);
executor.run(Unit0.class);
As the outcome of this execution, the results list will contain the following:
[Sofia>3, Mario>3]
As anticipated, the rule unit named AgeCheckUnit is seen as a singleton and therefore executed only once, in this case with minAge
equal to 3 (but this is not deterministic). The String "test" and the Integer 4 inserted into the input data source
could also have triggered a second execution with minAge
set to 4, but this does not happen because another unit with the same
identity has already been evaluated. To fix this problem it is enough to override the getUnitIdentity()
method in the
AgeCheckUnit
class to also include the minAge variable in its identity.
public class AgeCheckUnit implements RuleUnit {
...
@Override
public Identity getUnitIdentity() {
return new Identity(getClass(), minAge);
}
}
Having done so, the units with minAge 3 and 4 are considered two different units and both are evaluated, so rerunning the former example the result list will now contain:
[Mario>4, Sofia>3, Mario>3]
5. Rule Language Reference
5.1. Overview
Drools has a "native" rule language. This format is very light in terms of punctuation, and supports natural and domain specific languages via "expanders" that allow the language to morph to your problem domain. This chapter is mostly concerted with this native rule format. The diagrams used to present the syntax are known as "railroad" diagrams, and they are basically flow charts for the language terms. The technically very keen may also refer to DRL.g which is the Antlr3 grammar for the rule language. If you use Business Central, a lot of the rule structure is done for you with content assistance, for example, type "ru" and press ctrl+space, and it will build the rule structure for you.
5.1.1. A rule file
A rule file is typically a file with a .drl extension. In a DRL file you can have multiple rules, queries and functions, as well as some resource declarations like imports, globals and attributes that are assigned and used by your rules and queries. However, you are also able to spread your rules across multiple rule files (in that case, the extension .rule is suggested, but not required) - spreading rules across files can help with managing large numbers of rules. A DRL file is simply a text file.
The overall structure of a rule file is:
package package-name
imports
globals
functions
queries
rules
The order in which the elements are declared is not important, except for the package name that, if declared, must be the first element in the rules file. All elements are optional, so you will use only those you need. We will discuss each of them in the following sections.
5.1.2. What makes a rule
For the impatient, just as an early view, a rule has the following rough structure:
rule "name"
attributes
when
LHS
then
RHS
end
It’s really that simple. Mostly punctuation is not needed; even the double quotes for "name" are optional, as are newlines. Attributes are simple (always optional) hints to how the rule should behave. The LHS is the conditional part of the rule, which follows a certain syntax which is covered below. The RHS is basically a block that allows dialect specific semantic code to be executed.
It is important to note that white space is not important, except in the case of domain specific languages, where lines are processed one by one and spaces may be significant to the domain language.
5.2. Keywords
Drools 5 introduces the concept of hard and soft keywords.
Hard keywords are reserved, you cannot use any hard keyword when naming your domain objects, properties, methods, functions and other elements that are used in the rule text.
Here is the list of hard keywords that must be avoided as identifiers when writing rules:
-
true
-
false
-
null
Soft keywords are recognized only in their context, enabling you to use these words in any other place if you wish, although it is still recommended to avoid them where possible to prevent confusion. Here is a list of the soft keywords:
-
lock-on-active
-
date-effective
-
date-expires
-
no-loop
-
auto-focus
-
activation-group
-
agenda-group
-
ruleflow-group
-
entry-point
-
duration
-
package
-
import
-
dialect
-
salience
-
enabled
-
attributes
-
rule
-
extend
-
when
-
then
-
template
-
query
-
declare
-
function
-
global
-
eval
-
not
-
in
-
or
-
and
-
exists
-
forall
-
accumulate
-
collect
-
from
-
action
-
reverse
-
result
-
end
-
over
-
init
Of course, you can have these (hard and soft) words as part of a method name in camel case, like notSomething() or accumulateSomething() - there are no issues with that scenario.
Although the 3 hard keywords above are unlikely to be used in your existing domain models, if you absolutely need to use them as identifiers instead of keywords, the DRL language provides the ability to escape hard keywords in rule text. To escape a word, simply enclose it in grave accents, like this:
Holiday( `true` == "yes" ) // please note that Drools will resolve that reference to the method Holiday.isTrue()
5.3. Comments
Comments are sections of text that are ignored by the Drools engine. They are stripped out when they are encountered, except inside semantic code blocks, like the RHS of a rule.
5.3.1. Single line comment
To create single line comments, you can use '//'. The parser will ignore anything in the line after the comment symbol. Example:
rule "Testing Comments"
when
// this is a single line comment
eval( true ) // this is a comment in the same line of a pattern
then
// this is a comment inside a semantic code block
end
'#' for comments has been removed. |
5.3.2. Multi-line comment
Multi-line comments are used to comment blocks of text, both in and outside semantic code blocks. Example:
rule "Test Multi-line Comments"
when
/* this is a multi-line comment
in the left hand side of a rule */
eval( true )
then
/* and this is a multi-line comment
in the right hand side of a rule */
end
5.4. Error Messages
Drools 5 introduces standardized error messages. This standardization aims to help users find and resolve problems in an easier and faster way. In this section you will learn how to identify and interpret those error messages, and you will also receive some tips on how to solve the problems associated with them.
5.4.1. Message format
The standardization includes the error message format. To better explain this format, let’s break a typical message down into its blocks:
1st Block: This area identifies the error code.
2nd Block: Line and column information.
3rd Block: Some text describing the problem.
4th Block: This is the first context. Usually indicates the rule, function, template or query where the error occurred. This block is not mandatory.
5th Block: Identifies the pattern where the error occurred. This block is not mandatory.
5.4.2. Error Messages Description
5.4.2.1. 101: No viable alternative
Indicates the most common errors, where the parser came to a decision point but couldn’t identify an alternative. Here are some examples:
1: rule one
2: when
3: exists Foo()
4: exits Bar() // "exits"
5: then
6: end
The above example generates this message:
-
[ERR 101] Line 4:4 no viable alternative at input 'exits' in rule one
At first glance this seems to be valid syntax, but it is not (exits != exists). Let’s take a look at the next example:
1: package org.drools.examples;
2: rule
3: when
4: Object()
5: then
6: System.out.println("A RHS");
7: end
Now the above code generates this message:
-
[ERR 101] Line 3:2 no viable alternative at input 'WHEN'
This message means that the parser encountered the token WHEN (a keyword), but it’s in the wrong place, since the rule name is missing.
The error "no viable alternative" also occurs when you make a simple lexical mistake. Here is a sample of a lexical problem:
1: rule simple_rule
2: when
3: Student( name == "Andy )
4: then
5: end
The above code fails to close the quotes, and because of this the parser generates this error message:
-
[ERR 101] Line 0:-1 no viable alternative at input '<eof>' in rule simple_rule in pattern Student
Usually the Line and Column information are accurate, but in some cases (like unclosed quotes), the parser generates a 0:-1 position. In this case you should check whether you forgot to close quotes, apostrophes or parentheses. |
5.4.2.2. 102: Mismatched input
This error indicates that the parser was looking for a particular symbol that it didn’t find at the current input position. Here are some samples:
1: rule simple_rule
2: when
3: foo3 : Bar(
The above example generates this message:
-
[ERR 102] Line 0:-1 mismatched input '<eof>' expecting ')' in rule simple_rule in pattern Bar
To fix this problem, it is necessary to complete the rule statement.
Usually when you get a 0:-1 position, it means that the parser reached the end of the source. |
The following code generates more than one error message:
1: package org.drools.examples;
2:
3: rule "Avoid NPE on wrong syntax"
4: when
5: not( Cheese( ( type == "stilton", price == 10 ) || ( type == "brie", price == 15 ) ) from $cheeseList )
6: then
7: System.out.println("OK");
8: end
These are the errors associated with this source:
-
[ERR 102] Line 5:36 mismatched input ',' expecting ')' in rule "Avoid NPE on wrong syntax" in pattern Cheese
-
[ERR 101] Line 5:57 no viable alternative at input 'type' in rule "Avoid NPE on wrong syntax"
-
[ERR 102] Line 5:106 mismatched input ')' expecting 'then' in rule "Avoid NPE on wrong syntax"
Note that the second problem is related to the first. To fix it, just replace the commas (',') with the AND operator ('&&').
In some situations you can get more than one error message. Try to fix one by one, starting at the first one. Some error messages are generated merely as consequences of other errors. |
5.4.2.3. 103: Failed predicate
A validating semantic predicate evaluated to false. Usually these semantic predicates are used to identify soft keywords. This sample shows exactly this situation:
1: package nesting;
2: dialect "mvel"
3:
4: import org.drools.compiler.Person
5: import org.drools.compiler.Address
6:
7: fdsfdsfds
8:
9: rule "test something"
10: when
11: p: Person( name=="Michael" )
12: then
13: p.name = "other";
14: System.out.println(p.name);
15: end
With this sample, we get this error message:
-
[ERR 103] Line 7:0 rule 'rule_key' failed predicate: {(validateIdentifierKey(DroolsSoftKeywords.RULE))}? in rule
The fdsfdsfds text is invalid and the parser couldn’t identify it as the soft keyword rule
.
This error is very similar to 102: Mismatched input, but usually involves soft keywords. |
5.4.2.4. 104: Trailing semi-colon not allowed
This error is associated with the eval
clause, where its expression may not be terminated with a semicolon.
Check this example:
1: rule simple_rule
2: when
3: eval(abc();)
4: then
5: end
Due to the trailing semicolon within eval, we get this error message:
-
[ERR 104] Line 3:4 trailing semi-colon not allowed in rule simple_rule
This problem is simple to fix: just remove the semi-colon.
5.4.2.5. 105: Early Exit
The recognizer came to a subrule in the grammar that must match an alternative at least once, but the subrule did not match anything. Simply put: the parser has entered a branch from where there is no way out. This example illustrates it:
1: template test_error
2: aa s 11;
3: end
This is the message associated to the above sample:
-
[ERR 105] Line 2:2 required (…)+ loop did not match anything at input 'aa' in template test_error
To fix this problem it is necessary to remove the numeric value as it is neither a valid data type which might begin a new template slot nor a possible start for any other rule file construct.
5.4.3. Other Messages
Any other message means that something bad has happened, so please contact the development team.
5.5. Package
A package is a collection of rules and other related constructs, such as imports and globals. The package members are typically related to each other - perhaps HR rules, for instance. A package represents a namespace, which ideally is kept unique for a given grouping of rules. The package name itself is the namespace, and is not related to files or folders in any way.
It is possible to assemble rules from multiple rule sources, and have one top level package configuration that all the rules are kept under (when the rules are assembled). However, it is not possible to merge into the same package resources declared under different names. A single rule base may, however, contain multiple packages built on it. A common structure is to have all the rules for a package in the same file as the package declaration (so that it is entirely self-contained).
The following railroad diagram shows all the components that may make up a package.
Note that a package must have a namespace and be declared using standard Java conventions for package names; i.e., no spaces, unlike rule names which allow spaces.
In terms of the order of elements, they can appear in any order in the rule file, with the exception of the package
statement, which must be at the top of the file.
In all cases, the semicolons are optional.
Notice that any rule attribute (as described in the section Rule Attributes) may also be written at package level, superseding the attribute’s default value. The modified default may still be replaced by an attribute setting within a rule.
5.5.1. import
Import statements work like import statements in Java.
You need to specify the fully qualified paths and type names for any objects you want to use in the rules.
Drools automatically imports classes from the Java package of the same name, and also from the package java.lang
.
5.5.2. global
With global
you define global variables.
They are used to make application objects available to the rules.
Typically, they are used to provide data or services that the rules use, especially application services used in rule consequences, and to return data from the rules, like logs or values added in rule consequences, or for the rules to interact with the application, doing callbacks.
Globals are not inserted into the Working Memory, and therefore a global should never be used to establish conditions in rules except when it has a constant immutable value.
The Drools engine cannot be notified about value changes of globals and does not track their changes.
Incorrect use of globals in constraints may yield surprising results - surprising in a bad way.
If multiple packages declare globals with the same identifier they must be of the same type and all of them will reference the same global value.
In order to use globals you must:
-
Declare your global variable in your rules file and use it in rules. For example:
global java.util.List myGlobalList;
rule "Using a global"
when
    eval( true )
then
    myGlobalList.add( "Hello World" );
end
-
Set the global value on your working memory. It is a best practice to set all global values before asserting any fact to the working memory. Example:
List list = new ArrayList();
KieSession kieSession = kiebase.newKieSession();
kieSession.setGlobal( "myGlobalList", list );
Note that these are just named instances of objects that you pass in from your application to the working memory.
This means you can pass in any object you want: you could pass in a service locator, or perhaps a service itself.
With the new from
element it is now common to pass a Hibernate session as a global, to allow from
to pull data from a named Hibernate query.
One example may be an instance of an Email service. In your integration code that is calling the Drools engine, you obtain your emailService object, and then set it in the working memory. In the DRL, you declare that you have a global of type EmailService, and give it the name "email". Then in your rule consequences, you can use things like email.sendSMS(number, message).
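As a minimal sketch of the integration side (EmailService, sendSMS() and the customer fact are only illustrative names, not part of the Drools API):
EmailService emailService = new EmailService();   // hypothetical application service
KieSession ksession = kbase.newKieSession();
ksession.setGlobal( "email", emailService );      // the name must match the DRL global declaration
ksession.insert( customer );                      // a fact the rules reason over
ksession.fireAllRules();                          // consequences can now call email.sendSMS( number, message )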
Globals are not designed to share data between rules and they should never be used for that purpose. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory.
Care must be taken when changing data held by globals because the Drools engine is not aware of those changes, hence cannot react to them.
5.6. Function
Functions are a way to put semantic code in your rule source file, as opposed to in normal Java classes.
They can’t do anything more than what you can do with helper classes.
(In fact, the compiler generates the helper class for you behind the scenes.) The main advantage of using functions in a rule is that you can keep the logic all in one place, and you can change the functions as needed (which can be a good or a bad thing). Functions are most useful for invoking actions on the consequence (then
) part of a rule, especially if that particular action is used over and over again, perhaps with only differing parameters for each rule.
A typical function declaration looks like:
function String hello(String name) {
return "Hello "+name+"!";
}
Note that the function
keyword is used, even though it’s not really part of Java.
Parameters to the function are defined as for a method, and you don’t have to have parameters if they are not needed.
The return type is defined just like in a regular method.
Alternatively, you could use a static method in a helper class, e.g., Foo.hello()
.
Drools supports the use of function imports, so all you would need to do is:
import function my.package.Foo.hello
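For reference, a minimal sketch of a helper class that such a function import could point to ("my.package" is only the documentation’s placeholder; a real Java package cannot contain the reserved word "package"):
public class Foo {

    // Static method that becomes callable as hello( "Bob" ) once imported as a function
    public static String hello(String name) {
        return "Hello " + name + "!";
    }
}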
Irrespective of the way the function is defined or imported, you use a function by calling it by its name, in the consequence or inside a semantic code block. Example:
rule "using a static function"
when
eval( true )
then
System.out.println( hello( "Bob" ) );
end
5.7. Type Declaration
Type declarations have two main goals in the Drools engine: to allow the declaration of new types, and to allow the declaration of metadata for types.
-
Declaring new types: Drools works out of the box with plain Java objects as facts. Sometimes, however, users may want to define the model directly to the Drools engine, without worrying about creating models in a lower level language like Java. At other times, there is a domain model already built, but eventually the user wants or needs to complement this model with additional entities that are used mainly during the reasoning process.
-
Declaring metadata: facts may have meta information associated to them. Examples of meta information include any kind of data that is not represented by the fact attributes and is consistent among all instances of that fact type. This meta information may be queried at runtime by the Drools engine and used in the reasoning process.
5.7.1. Declaring New Types
To declare a new type, all you need to do is use the keyword declare
, followed by the list of fields, and the keyword end
.
A new fact must have a list of fields, otherwise the Drools engine will look for an existing fact class in the classpath and raise an error if not found.
declare Address
number : int
streetName : String
city : String
end
The previous example declares a new fact type called Address
.
This fact type will have three attributes: number
, streetName
and city
.
Each attribute has a type that can be any valid Java type, including any other class created by the user or even other fact types previously declared.
For instance, we may want to declare another fact type Person
:
declare Person
name : String
dateOfBirth : java.util.Date
address : Address
end
As we can see on the previous example, dateOfBirth
is of type java.util.Date
, from the Java API, while address
is of the previously defined fact type Address.
You may avoid having to write the fully qualified name of a class every time you write it by using the import
clause, as previously discussed.
import java.util.Date
declare Person
name : String
dateOfBirth : Date
address : Address
end
When you declare a new fact type, Drools will, at compile time, generate bytecode that implements a Java class representing the fact type. The generated Java class will be a one-to-one Java Bean mapping of the type definition. So, for the previous example, the generated Java class would be:
public class Person implements Serializable {
private String name;
private java.util.Date dateOfBirth;
private Address address;
// empty constructor
public Person() {...}
// constructor with all fields
public Person( String name, Date dateOfBirth, Address address ) {...}
// if keys are defined, constructor with keys
public Person( ...keys... ) {...}
// getters and setters
// equals/hashCode
// toString
}
Since the generated class is a simple Java class, it can be used transparently in the rules, like any other fact.
rule "Using a declared Type"
when
$p : Person( name == "Bob" )
then
// Insert Mark, who is Bob's mate.
Person mark = new Person();
mark.setName( "Mark" );
insert( mark );
end
5.7.1.1. Declaring enumerative types
DRL also supports the declaration of enumerative types. Such type declarations require the additional keyword enum, followed by a comma separated list of admissible values terminated by a semicolon.
rule "Using a declared Type"
when
$p : Person( name == "Bob" )
then
// Insert Mark, who is Bob's mate.
Person mark = new Person();
mark.setName( "Mark" );
insert( mark );
end
The compiler will generate a valid Java enum, with static methods valueOf() and values(), as well as instance methods ordinal(), compareTo() and name().
Complex enums are also partially supported, declaring the internal fields similarly to a regular type declaration. Notice that as of version 6.x, enum fields do NOT support other declared types or enums:
declare enum DaysOfWeek
SUN("Sunday"),MON("Monday"),TUE("Tuesday"),WED("Wednesday"),THU("Thursday"),FRI("Friday"),SAT("Saturday");
fullName : String
end
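As a rough sketch, the Java enum generated for the DaysOfWeek declaration above would be equivalent to the following (the exact generated code is an implementation detail):
public enum DaysOfWeek {
    SUN("Sunday"), MON("Monday"), TUE("Tuesday"), WED("Wednesday"),
    THU("Thursday"), FRI("Friday"), SAT("Saturday");

    private final String fullName;

    DaysOfWeek(String fullName) {
        this.fullName = fullName;
    }

    public String getFullName() {
        return fullName;
    }
}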
Enumerative types can then be used in rules:
rule "Using a declared Enum"
when
$p : Employee( dayOff == DaysOfWeek.MON )
then
...
end
5.7.2. Declaring Metadata
Metadata may be assigned to several different constructions in Drools: fact types, fact attributes and rules. Drools uses the at sign ('@') to introduce metadata, and it always uses the form:
@metadata_key( metadata_value )
The parenthesized metadata_value is optional.
For instance, if you want to declare a metadata attribute like author
, whose value is Bob, you could simply write:
@author( Bob )
Drools allows the declaration of any arbitrary metadata attribute, but some will have special meaning to the Drools engine, while others are simply available for querying at runtime. Drools allows the declaration of metadata both for fact types and for fact attributes. Any metadata that is declared before the attributes of a fact type is assigned to the fact type, while metadata declared after an attribute is assigned to that particular attribute.
import java.util.Date
declare Person
@author( Bob )
@dateOfCreation( 01-Feb-2009 )
name : String @key @maxLength( 30 )
dateOfBirth : Date
address : Address
end
In the previous example, there are two metadata items declared for the fact type (@author
and @dateOfCreation
) and two more defined for the name attribute (@key
and @maxLength
). Please note that the @key
metadata has no required value, and so the parentheses and the value were omitted.
5.7.2.1. Predefined class level annotations
Some annotations have predefined semantics that are interpreted by the Drools engine. The following is a list of some of these predefined annotations and their meaning.
@role( <fact | event> )
The @role annotation defines how the Drools engine should handle instances of that type: either as regular facts or as events. It accepts two possible values:
-
fact : this is the default, declares that the type is to be handled as a regular fact.
-
event : declares that the type is to be handled as an event.
The following example declares that the fact type StockTick in a stock broker application is to be handled as an event.
import some.package.StockTick
declare StockTick
@role( event )
end
The same applies to facts declared inline. If StockTick was a fact type declared in the DRL itself, instead of a previously existing class, the code would be:
declare StockTick
@role( event )
datetime : java.util.Date
symbol : String
price : double
end
@typesafe( <boolean> )
By default all type declarations are compiled with type safety enabled; @typesafe( false ) provides a means to override this behaviour by permitting a fall-back, to type unsafe evaluation where all constraints are generated as MVEL constraints and executed dynamically. This can be important when dealing with collections that do not have any generics or mixed type collections.
@timestamp( <attribute name> )
Every event has an associated timestamp assigned to it. By default, the timestamp for a given event is read from the Session Clock and assigned to the event at the time the event is inserted into the working memory. However, sometimes the event has the timestamp as one of its own attributes. In this case, the user may tell the Drools engine to use the timestamp from the event’s attribute instead of reading it from the Session Clock.
@timestamp( <attributeName> )
To tell the Drools engine what attribute to use as the source of the event’s timestamp, just list the attribute name as a parameter to the @timestamp tag.
declare VoiceCall
@role( event )
@timestamp( callDateTime )
end
@duration( <attribute name> )
Drools supports both event semantics: point-in-time events and interval-based events. A point-in-time event is represented as an interval-based event whose duration is zero. By default, all events have duration zero. The user may attribute a different duration for an event by declaring which attribute in the event type contains the duration of the event.
@duration( <attributeName> )
So, for our VoiceCall fact type, the declaration would be:
declare VoiceCall
@role( event )
@timestamp( callDateTime )
@duration( callDuration )
end
@expires( <time interval> )
This tag is only considered when running the Drools engine in STREAM mode. Additional discussion on the effects of using this tag is provided in the Memory Management section. It is included here for completeness. |
Events may be automatically expired after some time in the working memory. Typically this happens when, based on the existing rules in the KIE base, the event can no longer match and activate any rules. However, it is possible to explicitly define when an event should expire.
@expires( <timeOffset> )
The value of timeOffset is a temporal interval in the form:
[#d][#h][#m][#s][#[ms]]
Where [ ] means an optional parameter and # means a numeric value.
So, to declare that VoiceCall facts should expire 1 hour and 35 minutes after they are inserted into the working memory, the user would write:
declare VoiceCall
@role( event )
@timestamp( callDateTime )
@duration( callDuration )
@expires( 1h35m )
end
The @expires policy will take precedence and override the implicit expiration offset calculated from temporal constraints and sliding windows in the KIE base.
@propertyChangeSupport
Facts that implement support for property changes as defined in the JavaBean(tm) spec can now be annotated so that the Drools engine registers itself to listen for changes on fact properties. The boolean parameter that was used in the insert() method in the Drools 4 API is deprecated and does not exist in the drools-api module.
declare Person
@propertyChangeSupport
end
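A minimal sketch of what such a fact class might look like on the Java side, using the standard java.beans property-change machinery (class and field names are only illustrative):
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class Person {

    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private int age;

    public void addPropertyChangeListener(PropertyChangeListener listener) {
        pcs.addPropertyChangeListener(listener);
    }

    public void removePropertyChangeListener(PropertyChangeListener listener) {
        pcs.removePropertyChangeListener(listener);
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        int oldAge = this.age;
        this.age = age;
        pcs.firePropertyChange("age", oldAge, age); // notifies any registered listener, including the engine
    }
}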
@propertyReactive
Make this type property reactive. See Fine grained property change listeners section for details.
@serialVersionUID
To improve the compatibility of serialized KieSessions, it is now possible to specify the serialVersionUID of the classes generated from declared types through an annotation like the following:
declare MyClass
@serialVersionUID( 42 )
name : String
end
5.7.2.2. Predefined attribute level annotations
As noted before, Drools also supports annotations in type attributes. Here is a list of predefined attribute annotations.
@key
Declaring an attribute as a key attribute has 2 major effects on generated types:
-
The attribute will be used as a key identifier for the type, and as so, the generated class will implement the equals() and hashCode() methods taking the attribute into account when comparing instances of this type.
-
Drools will generate a constructor using all the key attributes as parameters.
For instance:
declare Person
firstName : String @key
lastName : String @key
age : int
end
For the previous example, Drools will generate equals() and hashCode() methods that will check the firstName and lastName attributes to determine if two instances of Person are equal to each other, but will not check the age attribute. It will also generate a constructor taking firstName and lastName as parameters, allowing one to create instances with a code like this:
Person person = new Person( "John", "Doe" );
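The following sketch shows the equals()/hashCode() semantics implied by the @key fields above; it is only illustrative, not the literal generated code:
public class Person {

    private String firstName;
    private String lastName;
    private int age;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Person other = (Person) o;
        // only the @key fields participate; age is ignored
        return java.util.Objects.equals(firstName, other.firstName)
            && java.util.Objects.equals(lastName, other.lastName);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(firstName, lastName);
    }
}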
@position
Patterns support positional arguments on type declarations.
Positional arguments are ones where you don’t need to specify the field name, as the position maps to a known named field. i.e. Person( name == "mark" ) can be rewritten as Person( "mark"; ). The semicolon ';' is important so that the Drools engine knows that everything before it is a positional argument. Otherwise it might be assumed to be a boolean expression, which is how it would be interpreted after the semicolon. You can mix positional and named arguments on a pattern by using the semicolon ';' to separate them. Any variables used in a positional argument that have not yet been bound will be bound to the field that maps to that position.
declare Cheese
name : String
shop : String
price : int
end
The default order is the declared order, but this can be overridden using @position
declare Cheese
name : String @position(1)
shop : String @position(2)
price : int @position(0)
end
The @Position annotation, in the org.drools.definition.type package, can be used to annotate original pojos on the classpath. Currently only fields on classes can be annotated. Inheritance of classes is supported, but not interfaces or methods yet.
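For example, a minimal sketch of an existing POJO annotated this way (using the package named above; getters and setters omitted):
import org.drools.definition.type.Position;

public class Cheese {

    @Position(0)
    private String name;

    @Position(1)
    private String shop;

    @Position(2)
    private int price;
}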
Example patterns, with two constraints and a binding. Remember the semicolon ';' is used to differentiate the positional section from the named argument section. Variables, literals and expressions using just literals are supported in positional arguments, but not expressions that use variables.
Cheese( "stilton", "Cheese Shop", p; )
Cheese( "stilton", "Cheese Shop"; p : price )
Cheese( "stilton"; shop == "Cheese Shop", p : price )
Cheese( name == "stilton"; shop == "Cheese Shop", p : price )
@Position is inherited when beans extend each other; while not recommended, two fields may have the same @position value, and not all consecutive values need be declared. If a @position is repeated, the conflict is solved using inheritance (fields in the superclass have the precedence) and the declaration order. If a @position value is missing, the first field without an explicit @position (if any) is selected to fill the gap. As always, conflicts are resolved by inheritance and declaration order.
declare Cheese
name : String
shop : String @position(2)
price : int @position(0)
end
declare SeasonedCheese extends Cheese
year : Date @position(0)
origin : String @position(6)
country : String
end
In the example, the field order would be: price (@position 0 in the superclass), year (@position 0 in the subclass), name (first field with no @position), shop (@position 2), country (second field without @position), origin.
5.7.3. Declaring Metadata for Existing Types
Drools allows the declaration of metadata attributes for existing types in the same way as when declaring metadata attributes for new fact types. The only difference is that there are no fields in that declaration.
For instance, if there is a class org.drools.examples.Person, and one wants to declare metadata for it, it’s possible to write the following code:
import org.drools.examples.Person
declare Person
@author( Bob )
@dateOfCreation( 01-Feb-2009 )
end
Instead of using the import, it is also possible to reference the class by its fully qualified name, but since the class will also be referenced in the rules, it is usually shorter to add the import and use the short class name everywhere.
declare org.drools.examples.Person
@author( Bob )
@dateOfCreation( 01-Feb-2009 )
end
5.7.4. Parametrized constructors for declared types
Constructors with parameters are generated for declared types.
Example: for a declared type like the following:
declare Person
firstName : String @key
lastName : String @key
age : int
end
The compiler will implicitly generate 3 constructors: one without parameters, one with the @key fields, and one with all fields.
Person() // parameterless constructor
Person( String firstName, String lastName )
Person( String firstName, String lastName, int age )
5.7.5. Non Typesafe Classes
@typesafe( <boolean>) has been added to type declarations. By default all type declarations are compiled with type safety enabled; @typesafe( false ) provides a means to override this behaviour by permitting a fall-back, to type unsafe evaluation where all constraints are generated as MVEL constraints and executed dynamically. This can be important when dealing with collections that do not have any generics or mixed type collections.
5.7.6. Accessing Declared Types from the Application Code
Declared types are usually used inside rules files, while Java models are used when sharing the model between rules and applications. However, sometimes the application may need to access and handle facts from the declared types, especially when the application is wrapping the Drools engine and providing higher level, domain specific user interfaces for rules management.
In such cases, the generated classes can be handled as usual with the Java Reflection API, but, as we know, that usually requires a lot of work for small results. Therefore, Drools provides a simplified API for the most common fact handling the application may want to do.
The first important thing to realize is that a declared fact will belong to the package where it was declared.
So, for instance, in the example below, Person
will belong to the org.drools.examples
package, and so the fully qualified name of the generated class will be org.drools.examples.Person
.
package org.drools.examples
import java.util.Date
declare Person
name : String
dateOfBirth : Date
address : Address
end
Declared types, as discussed previously, are generated at KIE base compilation time, i.e., the application will only have access to them at application run time. Therefore, these classes are not available for direct reference from the application.
Drools then provides an interface through which users can handle declared types from the application code: org.drools.definition.type.FactType
.
Through this interface, the user can instantiate, read and write fields in the declared fact types.
// get a reference to a KIE base with a declared type:
KieBase kbase = ...
// get the declared FactType
FactType personType = kbase.getFactType( "org.drools.examples",
"Person" );
// handle the type as necessary:
// create instances:
Object bob = personType.newInstance();
// set attributes values
personType.set( bob,
"name",
"Bob" );
personType.set( bob,
"age",
42 );
// insert fact into a session
KieSession ksession = ...
ksession.insert( bob );
ksession.fireAllRules();
// read attributes
String name = (String) personType.get( bob, "name" );
int age = (Integer) personType.get( bob, "age" );
The API also includes other helpful methods, like setting all the attributes at once, reading values from a Map, or reading all attributes at once, into a Map.
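A brief sketch of those helpers, assuming the setFromMap() and getAsMap() methods of the FactType interface:
// set several attributes at once from a Map
Map<String, Object> values = new HashMap<>();
values.put( "name", "Bob" );
values.put( "age", 42 );
personType.setFromMap( bob, values );

// read all attributes back into a Map
Map<String, Object> snapshot = personType.getAsMap( bob );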
Although the API is similar to Java reflection (yet much simpler to use), it does not use reflection underneath, relying on much more performant accessors implemented with generated bytecode.
5.7.7. Type Declaration 'extends'
Type declarations now support the 'extends' keyword for inheritance.
In order to extend a type declared in Java by a DRL declared subtype, repeat the supertype in a declare statement without any fields.
import org.people.Person
declare Person end
declare Student extends Person
school : String
end
declare LongTermStudent extends Student
years : int
course : String
end
5.8. Rule
A rule specifies that when a particular set of conditions occurs, as specified in the Left Hand Side (LHS), then do what is specified as a list of actions in the Right Hand Side (RHS). A common question from users is "Why use when instead of if?" "When" was chosen over "if" because "if" is normally part of a procedural execution flow, where, at a specific point in time, a condition is to be checked.
In contrast, "when" indicates that the condition evaluation is not tied to a specific evaluation sequence or point in time, but that it happens continually, at any time during the life time of the Drools engine; whenever the condition is met, the actions are executed.
A rule must have a name, unique within its rule package. If you define a rule twice in the same DRL it produces an error while loading. If you add a DRL that includes a rule name already in the package, it replaces the previous rule. If a rule name is to have spaces, then it will need to be enclosed in double quotes (it is best to always use double quotes).
Attributes - described below - are optional. They are best written one per line.
The LHS of the rule follows the when
keyword (ideally on a new line), similarly the RHS follows
the then
keyword (again, ideally on a newline). The rule is terminated by the keyword end
.
Rules cannot be nested.
rule "<name>"
<attribute>*
when
<conditional element>*
then
<action>*
end
rule "Approve if not rejected"
salience -100
agenda-group "approval"
when
not Rejection()
p : Policy(approved == false, policyState:status)
exists Driver(age > 25)
Process(status == policyState)
then
log("APPROVED: due to no objections.");
p.setApproved(true);
end
5.8.1. Rule Attributes
Rule attributes provide a declarative way to influence the behavior of the rule. Some are quite simple, while others are part of complex subsystems such as ruleflow. To get the most from Drools you should make sure you have a proper understanding of each attribute.
no-loop
-
default value:
false
type: Boolean
When a rule’s consequence modifies a fact it may cause the rule to activate again, causing an infinite loop. Setting no-loop to true will skip the creation of another Activation for the rule with the current set of facts.
ruleflow-group
-
default value: N/A
type: String
Ruleflow is a Drools feature that lets you exercise control over the firing of rules. Rules that are assembled by the same ruleflow-group identifier fire only when their group is active.
lock-on-active
-
default value:
false
type: Boolean
Whenever a ruleflow-group becomes active or an agenda-group receives the focus, any rule within that group that has lock-on-active set to true will not be activated any more; irrespective of the origin of the update, the activation of a matching rule is discarded. This is a stronger version of
no-loop
, because the change could now be caused not only by the rule itself. It’s ideal for calculation rules where you have a number of rules that modify a fact and you don’t want any rule re-matching and firing again. Only when the ruleflow-group is no longer active or the agenda-group loses the focus do those rules with lock-on-active set to true become eligible again for their activations to be placed onto the agenda.
salience
-
default value:
0
type: integer
Each rule has an integer salience attribute which defaults to zero and can be negative or positive. Salience is a form of priority where rules with higher salience values are given higher priority when ordered in the Activation queue.
Drools also supports dynamic salience where you can use an expression involving bound variables.
rule "Fire in rank order 1,2,.."
salience( -$rank )
when
Element( $rank : rank,... )
then
...
end
agenda-group
-
default value: MAIN
type: String
Agenda groups allow the user to partition the Agenda providing more execution control. Only rules in the agenda group that has acquired the focus are allowed to fire.
auto-focus
-
default value:
false
type: Boolean
When a rule is activated where the
auto-focus
value is true and the rule’s agenda group does not have focus yet, then it is given focus, allowing the rule to potentially fire.
activation-group
-
default value: N/A
type: String
Rules that belong to the same activation-group, identified by this attribute’s string value, will only fire exclusively. More precisely, the first rule in an activation-group to fire will cancel all pending activations of all rules in the group, i.e., stop them from firing.
Note: This used to be called Xor group, but technically it’s not quite an Xor. You may still hear people mention Xor group; just swap that term in your mind with activation-group.
dialect
-
default value: as specified by the package
type: String
possible values: "java" or "mvel"
The dialect specifies the language to be used for any code expressions in the LHS or the RHS code block. Currently two dialects are available, Java and MVEL. While the dialect can be specified at the package level, this attribute allows the package definition to be overridden for a rule.
date-effective
-
default value: N/A
type: String, containing a date and time definition
A rule can only activate if the current date and time is after the date-effective attribute.
date-expires
-
default value: N/A
type: String, containing a date and time definition
A rule cannot activate if the current date and time is after the date-expires attribute.
duration
-
default value: no default value
type: long
The duration dictates that the rule will fire after a specified duration, if it is still true.
enabled
-
default value:
true
type: boolean
If false, the rule is not fired even if the condition is met. Note that the rule is still evaluated so it could affect performance even if it’s false.
rule "my rule"
salience 42
agenda-group "number 1"
when ...
5.8.2. Timers and Calendars
Rules now support both interval and cron based timers, which replace the now deprecated duration attribute.
timer ( int: <initial delay> <repeat interval>? )
timer ( int: 30s )
timer ( int: 30s 5m )
timer ( cron: <cron expression> )
timer ( cron:* 0/15 * * * ? )
Interval (indicated by "int:") timers follow the semantics of java.util.Timer objects, with an initial delay and an optional repeat interval. Cron (indicated by "cron:") timers follow standard Unix cron expressions:
rule "Send SMS every 15 minutes"
timer (cron:* 0/15 * * * ?)
when
$a : Alarm( on == true )
then
channels[ "sms" ].insert( new Sms( $a.mobileNumber, "The alarm is still on" );
end
A rule controlled by a timer becomes active when it matches, and once for each individual match. Its consequence is executed repeatedly, according to the timer’s settings. This stops as soon as the condition doesn’t match any more.
Consequences are executed even after control returns from a call to fireUntilHalt. Moreover, the Drools engine remains reactive to any changes made to the Working Memory. For instance, removing a fact that was involved in triggering the timer rule’s execution causes the repeated execution to terminate, or inserting a fact so that some rule matches will cause that rule to fire. But the Drools engine is not continually active, only after a rule fires, for whatever reason. Thus, reactions to an insertion done asynchronously will not happen until the next execution of a timer-controlled rule. Disposing a session puts an end to all timer activity.
Conversely, when the Drools engine runs in passive mode (i.e. using fireAllRules instead of fireUntilHalt), by default it doesn’t fire consequences of timed rules unless fireAllRules is invoked again.
However it is possible to change this default behavior by configuring the KieSession with a TimedRuleExecutionOption
as shown in the following example.
KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( TimedRuleExecutionOption.YES );
KieSession ksession = kbase.newKieSession(ksconf, null);
It is also possible to have a finer grained control on the timed rules that have to be automatically executed.
To do this it is necessary to set a FILTERED
TimedRuleExecutionOption
that allows you to define a
callback to filter those rules, as done in the next example.
KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( new TimedRuleExecutionOption.FILTERED(new TimedRuleExecutionFilter() {
public boolean accept(Rule[] rules) {
return rules[0].getName().equals("MyRule");
}
}) );
As regards interval timers, it is also possible to define both the delay and the interval as an expression instead of a fixed value. To do that it is necessary to use an expression timer (indicated by "expr:") as in the following example:
declare Bean
delay : String = "30s"
period : long = 60000
end
rule "Expression timer"
timer( expr: $d, $p )
when
Bean( $d : delay, $p : period )
then
end
The expressions, $d and $p in this case, can use any variable defined in the pattern-matching part of the rule. They can be any String that can be parsed into a time duration, or any numeric value that will be internally converted into a long representing a duration expressed in milliseconds.
Both interval and expression timers can have 3 optional parameters named "start", "end" and "repeat-limit". When one or more of these parameters are used the first part of the timer definition must be followed by a semicolon ';' and the parameters have to be separated by a comma ',' as in the following example:
timer (int: 30s 10s; start=3-JAN-2010, end=5-JAN-2010)
The value of the start and end parameters can be a Date, a String representing a Date, a long or, more generally, any Number, which will be transformed into a Java Date by applying the following conversion:
new Date( ((Number) n).longValue() )
Conversely, the repeat-limit can only be an integer and it defines the maximum number of repetitions allowed by the timer. If both the end and the repeat-limit parameters are set, the timer stops when the first of the two is reached.
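For example, a timer that starts on a given date, fires every minute after an initial 30 second delay, and stops after at most five repetitions or at the end date (whichever comes first) could be written as follows (the dates are purely illustrative):
timer (int: 30s 1m; start="3-JAN-2010", end="5-JAN-2010", repeat-limit=5)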
Using the start parameter implies the definition of a phase for the timer, where the beginning of the phase is given by the start itself plus the optional delay. In other words, in this case the timed rule will be scheduled at times:
start + delay + n*period
for up to repeat-limit times and no later than the end timestamp (whichever comes first). For instance, a rule having the following interval timer
timer ( int: 30s 1m; start="3-JAN-2010" )
will be scheduled at the 30th second of every minute after midnight of 3-JAN-2010. This also means that if, for example, you turn the system on at midnight of 3-FEB-2010, it won't be scheduled immediately but will preserve the phase defined by the timer, and so it will be scheduled for the first time 30 seconds after midnight.
If for some reason the system is paused (e.g. the session is serialized and then deserialized after a while), the rule will be scheduled only once to recover from the missed activations (regardless of how many were missed) and subsequently it will be scheduled again in phase with the timer.
Calendars are used to control when rules can fire. The Calendar API is modelled on Quartz:
Calendar weekDayCal = QuartzHelper.quartzCalendarAdapter(org.quartz.Calendar quartzCal)
Calendars are registered with the KieSession
:
ksession.getCalendars().set( "weekday", weekDayCal );
They can be used in conjunction with normal rules and rules including timers. The rule attribute "calendars" may contain one or more comma-separated calendar names written as string literals.
rule "weekdays are high priority"
calendars "weekday"
timer (int:0 1h)
when
Alarm()
then
send( "priority high - we have an alarm" );
end
rule "weekend are low priority"
calendars "weekend"
timer (int:0 4h)
when
Alarm()
then
send( "priority low - we have an alarm" );
end
5.8.3. Left Hand Side (when) syntax
5.8.3.1. What is the Left Hand Side?
The Left Hand Side (LHS) is a common name for the conditional part of the rule. It consists of zero or more Conditional Elements. If the LHS is empty, it will be considered as a condition element that is always true and it will be activated once, when a new WorkingMemory session is created.
rule "no CEs"
when
// empty
then
... // actions (executed once)
end
// The above rule is internally rewritten as:
rule "eval(true)"
when
eval( true )
then
... // actions (executed once)
end
Conditional elements work on one or more patterns (which are described below). The most common conditional element is "and". Therefore it is implicit when you have multiple patterns in the LHS of a rule that are not connected in any way:
rule "2 unconnected patterns"
when
Pattern1()
Pattern2()
then
... // actions
end
// The above rule is internally rewritten as:
rule "2 and connected patterns"
when
Pattern1()
and Pattern2()
then
... // actions
end
An "and" cannot have a leading declaration binding (unlike, for example, "or"). This is because a declaration can only reference a single fact at a time, and when the "and" is satisfied it matches both facts, so which fact would the declaration bind to?
5.8.3.2. Pattern (conditional element)
What is a pattern?
A pattern element is the most important Conditional Element. It can potentially match on each fact that is inserted in the working memory.
A pattern consists of zero or more constraints and has an optional pattern binding. The railroad diagram below shows the syntax for this.
In its simplest form, with no constraints, a pattern matches against a fact of the given type.
In the following case the type is Person, which means that the pattern will match against all Person objects in the Working Memory:
Person()
The type need not be the actual class of some fact object. Patterns may refer to superclasses or even interfaces, thereby potentially matching facts from many different classes.
Object() // matches all objects in the working memory
Inside of the pattern parentheses is where all the action happens: it defines the constraints for that pattern. For example, with an age-related constraint:
Person( age == 100 )
For backwards compatibility reasons it's allowed to suffix patterns with the ; character, but it is not recommended to do that.
Pattern binding
For referring to the matched object, use a pattern binding variable such as $p
.
rule ...
when
$p : Person()
then
System.out.println( "Person " + $p );
end
The prefixed dollar symbol ($
) is just a convention; it can be useful in complex rules where it helps to easily differentiate between variables and fields, but it is not mandatory.
5.8.3.3. Constraint (part of a pattern)
What is a constraint?
A constraint is an expression that returns true
or false
.
This example has a constraint that states 5 is smaller than
6:
Person( 5 < 6 ) // just an example, as constraints like this would be useless in a real pattern
In essence, it’s a Java expression with some enhancements (such as property access) and a few differences (such as equals()
semantics for ==
). Let’s take a deeper look.
Property access on Java Beans (POJO’s)
Any bean property can be used directly.
A bean property is exposed using a standard Java bean getter: a method getMyProperty()
(or isMyProperty()
for a primitive boolean) which takes no arguments and returns something.
For example: the age property is written as age
in DRL instead of the getter getAge()
:
Person( age == 50 )
// this is the same as:
Person( getAge() == 50 )
Drools uses the standard JDK Introspector
class to do this mapping, so it follows the standard Java bean specification.
We recommend using property access (age) over using getters explicitly (getAge()) because of performance enhancements through field indexing.
Property accessors must not change the state of the object in a way that may affect the rules. Remember that the Drools engine effectively caches the results of its matching in between invocations to make it faster. Be especially careful with getters whose result changes over time, such as one computed from the current system clock.
To handle that latter case, insert a fact that wraps the current date into working memory and update that fact between calls to fireAllRules as needed.
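A minimal sketch of that approach, assuming hypothetical Today and Offer fact types: a fact wrapping the current date is inserted into working memory (and updated between calls to fireAllRules), so that rules constrain against it instead of calling a time-dependent getter.
declare Today
    date : java.util.Date
end

declare Offer
    expiryDate : java.util.Date
end

rule "Offer is still valid"
when
    Today( $now : date )          // the wrapped "current date" fact, updated externally
    Offer( expiryDate > $now )    // compare against it instead of reading the system clock
then
    // the offer has not expired yet
end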
The following fallback applies: if the getter of a property cannot be found, the compiler will resort to using the property name as a method name, without arguments:
Person( age == 50 ) // If Person.getAge() does not exist, this falls back to: Person( age() == 50 )
Nested property access is also supported:
Person( address.houseNumber == 50 )
// this is the same as:
Person( getAddress().getHouseNumber() == 50 )
Nested properties are also indexed.
In a stateful session, care should be taken when using nested accessors as the Working Memory is not aware of any of the nested values and does not know when they change.
Either consider them immutable while any of their parent references are inserted into the Working Memory, or, if you wish to modify a nested value, mark all of the outer facts as updated.
In the above example, when the houseNumber changes, both the Person and the Address facts should be marked as updated.
Java expression
You can use any Java expression that returns a boolean
as a constraint inside the parentheses of a pattern.
Java expressions can be mixed with other expression enhancements, such as property access:
Person( age == 50 )
It is possible to change the evaluation priority by using parentheses, as in any logic or mathematical expression:
Person( age > 100 && ( age % 10 == 0 ) )
It is possible to reuse Java methods:
Person( Math.round( weight / ( height * height ) ) < 25.0 )
As for property accessors, methods must not change the state of the object in a way that may affect the rules. Any method executed on a fact in the LHS should be a read only method.
For example, do not use a constraint like Person( incrementAndGetAge() == 10 ), which changes the fact's state as a side effect.
The state of a fact should not change between rule invocations (unless those facts are marked as updated to the working memory on every change); for example, a constraint such as Person( System.currentTimeMillis() % 1000 == 0 ) is problematic because its result changes on every evaluation.
Normal Java operator precedence applies, see the operator precedence list below.
All operators have normal Java semantics except for == and !=. The == operator uses null-safe equals() semantics instead of Java's identity ("same object") comparison, and != correspondingly uses null-safe !equals() semantics.
Type coercion is always attempted if the field and the value are of different types; exceptions will be thrown if a bad coercion is attempted. For instance, if "ten" is provided as a string in a numeric evaluator, an exception is thrown, whereas "10" would coerce to a numeric 10. Coercion is always in favor of the field type and not the value type:
Person( age == "10" ) // "10" is coerced to 10
Comma separated AND
The comma character (',') is used to separate constraint groups. It has implicit AND connective semantics.
// Person is at least 50 and weighs at least 80 kg
Person( age > 50, weight > 80 )
// Person is at least 50, weighs at least 80 kg and is taller than 2 meter.
Person( age > 50, weight > 80, height > 2 )
Although the comma and the && operator have the same semantics, the comma operator should be preferred at the top level constraint, as it makes constraints easier to read and the Drools engine will often be able to optimize them better.
The comma (,
) operator cannot be embedded in a composite constraint expression, such as parentheses:
Person( ( age > 50, weight > 80 ) || height > 2 ) // Do NOT do this: compile error
// Use this instead
Person( ( age > 50 && weight > 80 ) || height > 2 )
Binding variables
A property can be bound to a variable:
// 2 persons of the same age
Person( $firstAge : age ) // binding
Person( age == $firstAge ) // constraint expression
The prefixed dollar symbol ($
) is just a convention; it can be useful in complex rules where it helps to easily differentiate between variables and fields.
For backwards compatibility reasons, it's allowed (but not recommended) to mix a constraint binding and constraint expressions, as in:
Person( $age : age * 2 < 100 )
Bound variable restrictions using the operator ==
provide for very fast execution as they use hash indexing to improve performance.
Unification
Drools does not allow bindings to the same declaration. However, this is an important aspect of derivation query unification. While positional arguments are always processed with unification, a special unification symbol, ':=', was introduced for named arguments. The following "unifies" the age argument across two people.
Person( $age := age )
Person( $age := age)
In essence, unification declares a binding for the first occurrence and constrains subsequent occurrences to the same value of the bound field.
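As a sketch of why this matters for queries (the query name is hypothetical), the := symbol lets the same argument act either as an input, when the caller supplies a value, or as an output, when it is left unbound:
query peopleOfAge( int $age )
    Person( $age := age )
end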
Grouped accessors for nested objects
Often it happens that it is necessary to access multiple properties of a nested object as in the following example
Person( name == "mark", address.city == "london", address.country == "uk" )
These accessors to nested objects can be grouped with a '.(…)' syntax providing more readable rules as in
Person( name == "mark", address.( city == "london", country == "uk") )
Note the '.' prefix; this is necessary to differentiate the nested object constraints from a method call.
Inline casts and coercion
When dealing with nested objects, it is also quite common to need to cast to a subtype. This can be done via the # symbol, as in:
Person( name == "mark", address#LongAddress.country == "uk" )
This example casts Address to LongAddress, making its getters available. If the cast is not possible (instanceof returns false), the evaluation will be considered false. Fully qualified names are also supported:
Person( name == "mark", address#org.domain.LongAddress.country == "uk" )
It is possible to use multiple inline casts in the same expression:
Person( name == "mark", address#LongAddress.country#DetailedCountry.population > 10000000 )
Moreover, since the instanceof operator is also supported, if it is used we will infer its results for further uses of that field within that pattern:
Person( name == "mark", address instanceof LongAddress, address.country == "uk" )
Special literal support
Besides normal Java literals (including Java 5 enums), this literal is also supported:
Date literal
The date format dd-MMM-yyyy
is supported by default.
You can customize this by providing an alternative date format mask as the system property named drools.dateformat
. This system property can also contain a time format mask part (e.g. drools.dateformat="dd-MMM-yyyy HH:mm"
).
Another possibility to customize the date format is to change the language locale with drools.defaultlanguage
and drools.defaultcountry
system properties (e.g. locale of Thailand set as drools.defaultlanguage=th
and drools.defaultcountry=TH
).
Cheese( bestBefore < "27-Oct-2009" )
List and Map access
It’s possible to directly access a List
value by index:
// Same as childList(0).getAge() == 18
Person( childList[0].age == 18 )
It’s also possible to directly access a Map
value by key:
// Same as credentialMap.get("jsmith").isValid()
Person( credentialMap["jsmith"].valid )
Abbreviated combined relation condition
This allows you to place more than one restriction on a field using the restriction connectives &&
or ||
.
Grouping via parentheses is permitted, resulting in a recursive syntax pattern.
// Simple abbreviated combined relation condition using a single &&
Person( age > 30 && < 40 )
// Complex abbreviated combined relation using groupings
Person( age ( (> 30 && < 40) ||
(> 20 && < 25) ) )
// Mixing abbreviated combined relation with constraint connectives
Person( age > 30 && < 40 || location == "london" )
Special DRL operators
Coercion to the correct value for the evaluator and the field will be attempted.
The operators < <= > >=
These operators can be used on properties with natural ordering.
For example, for Date fields, <
means before, for String
fields, it means alphabetically lower.
Person( firstName < $otherFirstName )
Person( birthDate < $otherBirthDate )
Only applies on Comparable
properties.
Null-safe dereferencing operator
The !. operator allows dereferencing in a null-safe way. More in detail, the matching algorithm requires the value to the left of the !. operator to be not null in order to give a positive result for the pattern matching itself. In other words the pattern:
Person( $streetName : address!.street )
will be internally translated into:
Person( address != null, $streetName : address.street )
The operator matches
Matches a field against any valid Java Regular Expression. Typically that regexp is a string literal, but variables that resolve to a valid regexp are also allowed.
Cheese( type matches "(Buffalo)?\\S*Mozzarella" )
Like in Java, regular expressions written as string literals need to escape '\\'.
Only applies on String
properties.
Using matches
against a null
value always evaluates to false.
The operator not matches
The operator returns true if the String does not match the regular expression.
The same rules apply as for the matches
operator.
Example:
Cheese( type not matches "(Buffalo)?\\S*Mozzarella" )
Only applies on String
properties.
Using not matches
against a null
value always evaluates to true.
The operator contains
The operator contains
is used to check whether a field that is an array or a Collection contains the specified value.
CheeseCounter( cheeses contains "stilton" ) // contains with a String literal
CheeseCounter( cheeses contains $var ) // contains with a variable
Only applies on Collection
properties.
The operator contains
can also be used in place of String.contains()
constraints checks.
Cheese( name contains "tilto" )
Person( fullName contains "Jr" )
String( this contains "foo" )
The operator not contains
The operator not contains
is used to check whether a field that is an array or a Collection does not contain the specified value.
CheeseCounter( cheeses not contains "cheddar" ) // not contains with a String literal
CheeseCounter( cheeses not contains $var ) // not contains with a variable
Only applies on Collection
properties.
For backward compatibility, the excludes operator is supported as a synonym of not contains.
The operator not contains
can also be used in place of the logical negation of String.contains()
for constraints checks - i.e.: ! String.contains()
Cheese( name not contains "tilto" )
Person( fullName not contains "Jr" )
String( this not contains "foo" )
The operator memberOf
The operator memberOf
is used to check whether a field is a member of a collection or array; the collection must be a variable.
CheeseCounter( cheese memberOf $matureCheeses )
The operator not memberOf
The operator not memberOf
is used to check whether a field is not a member of a collection or array; the collection must be a variable.
CheeseCounter( cheese not memberOf $matureCheeses )
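Because the right-hand operand must be a variable, the collection is typically bound in an earlier pattern. A small sketch (the Cheesery fact type and its matureCheeses collection are hypothetical):
// bind the collection in a previous pattern ...
Cheesery( $matureCheeses : matureCheeses )
// ... then test membership against the bound variable
CheeseCounter( cheese memberOf $matureCheeses )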
The operator soundslike
This operator is similar to matches
, but it checks whether a word has almost the same sound (using English pronunciation) as the given value.
This is based on the Soundex algorithm (see http://en.wikipedia.org/wiki/Soundex
).
// match cheese "fubar" or "foobar"
Cheese( name soundslike 'foobar' )
The operator str
This operator str
is used to check whether a field that is a String
starts with or ends with a certain value.
It can also be used to check the length of the String.
Message( routingValue str[startsWith] "R1" )
Message( routingValue str[endsWith] "R2" )
Message( routingValue str[length] 17 )
The operators in
and not in
(compound value restriction)
The compound value restriction is used where there is more than one possible value to match.
Currently only the in
and not in
evaluators support this.
The second operand of this operator must be a comma-separated list of values, enclosed in parentheses.
Values may be given as variables, literals, return values or qualified identifiers.
Both evaluators are actually syntactic
sugar, internally rewritten as a list of multiple restrictions using the operators !=
and ==
.
Person( $cheese : favouriteCheese )
Cheese( type in ( "stilton", "cheddar", $cheese ) )
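As noted above, these operators are syntactic sugar; a rough sketch of how the constraint above (and its not in counterpart) is rewritten internally:
// Cheese( type in ( "stilton", "cheddar", $cheese ) ) is roughly equivalent to:
Cheese( type == "stilton" || type == "cheddar" || type == $cheese )
// Cheese( type not in ( "stilton", "cheddar", $cheese ) ) is roughly equivalent to:
Cheese( type != "stilton" && type != "cheddar" && type != $cheese )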
Inline eval operator (deprecated)
An inline eval constraint can use any valid dialect expression as long as it evaluates to a primitive boolean. The expression must be constant over time. Any previously bound variable, from the current or previous pattern, can be used; autovivification is also used to auto-create field binding variables. When an identifier is found that is not a current variable, the builder looks to see if the identifier is a field on the current object type; if it is, the field binding is auto-created as a variable of the same name. This is called autovivification of field variables inside of inline evals.
This example will find all male-female pairs where the male is 2 years older than the female; the variable age
is auto-created in the second pattern by the autovivification process.
Person( girlAge : age, sex == "F" )
Person( eval( age == girlAge + 2 ), sex == 'M' ) // eval() is actually obsolete in this example
Inline evals are effectively obsolete as their inner syntax is now directly supported. It's recommended not to use them; simply write the expression without wrapping eval() around it.
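For instance, the male/female pair example above can be written without eval simply as:
Person( girlAge : age, sex == "F" )
Person( age == girlAge + 2, sex == "M" )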
Operator precedence
The operators are evaluated in this precedence:
Operator type | Operators | Notes |
---|---|---|
(nested / null safe) property access | . !. | Not normal Java semantics |
List/Map access | [ ] | Not normal Java semantics |
constraint binding | : | Not normal Java semantics |
multiplicative | * / % | |
additive | + - | |
shift | << >> >>> | |
relational | < <= > >= instanceof | |
equality | == != | Does not use normal Java (not) same semantics: uses (not) equals semantics instead. |
non-short circuiting AND | & | |
non-short circuiting exclusive OR | ^ | |
non-short circuiting inclusive OR | '|' | |
logical AND | && | |
logical OR | '||' | |
ternary | ? : | |
Comma separated AND | , | Not normal Java semantics |
5.8.3.4. Positional Arguments
Patterns now support positional arguments on type declarations.
Positional arguments are ones where you don’t need to specify the field name, as the position maps to a known named field. i.e. Person( name == "mark" ) can be rewritten as Person( "mark"; ). The semicolon ';' is important so that the Drools engine knows that everything before it is a positional argument. Otherwise we might assume it was a boolean expression, which is how it could be interpreted after the semicolon. You can mix positional and named arguments on a pattern by using the semicolon ';' to separate them. Any variables used in a positional that have not yet been bound will be bound to the field that maps to that position.
declare Cheese
name : String
shop : String
price : int
end
Example patterns, with two constraints and a binding. Remember that the semicolon ';' is used to differentiate the positional section from the named argument section. Variables, literals and expressions using only literals are supported in positional arguments; expressions using variables are not. Positional arguments are always resolved using unification.
Cheese( "stilton", "Cheese Shop", p; )
Cheese( "stilton", "Cheese Shop"; p : price )
Cheese( "stilton"; shop == "Cheese Shop", p : price )
Cheese( name == "stilton"; shop == "Cheese Shop", p : price )
Positional arguments that are given a previously declared binding will constrain against that using unification; these are referred to as input arguments. If the binding does not yet exist, it will create the declaration binding it to the field represented by the position argument; these are referred to as output arguments.
5.8.3.5. Fine grained property change listeners
When you call modify() (see the modify statement section) on a given object it will trigger a re-evaluation of all patterns of the matching object type in the KIE base. This can lead to unwanted and useless evaluations and, in the worst cases, to infinite recursions. The only workaround used to be to split up your objects into smaller ones having a 1 to 1 relationship with the original object.
Property reactivity has been introduced to provide an easier and more consistent way to overcome this problem. It allows the pattern matching to react only to modifications of properties that are actually constrained or bound inside a given pattern. This helps with performance and recursion and avoids artificial object splitting.
This feature is enabled by default, but in case you need or want to deactivate it on a specific bean you can annotate it with @classReactive. This annotation works both on DRL type declarations:
declare Person
@classReactive
firstName : String
lastName : String
end
and on Java classes:
@ClassReactive
public static class Person {
private String firstName;
private String lastName;
}
By using this feature, for instance, if you have a rule like the following:
rule "Every person named Mario is a male" when
$person : Person( firstName == "Mario" )
then
modify ( $person ) { setMale( true ) }
end
you won’t have to add the no-loop attribute to it in order to avoid an infinite recursion because the Drools engine recognizes that the pattern matching is done on the 'firstName' property while the RHS of the rule modifies the 'male' one. Note that this feature does not work for update(), and this is one of the reasons why we promote modify() since it encapsulates the field changes within the statement. Moreover, on Java classes, you can also annotate any method to say that its invocation actually modifies other properties. For instance in the former Person class you could have a method like:
@Modifies( { "firstName", "lastName" } )
public void setName(String name) {
String[] names = name.split("\\s");
this.firstName = names[0];
this.lastName = names[1];
}
That means that if a rule has a RHS like the following:
modify($person) { setName("Mario Fusco") }
it will correctly recognize that the values of both properties 'firstName' and 'lastName' could have potentially been modified and act accordingly, not missing the re-evaluation of the patterns constrained on them. At the moment the usage of @Modifies is not allowed on fields but only on methods. This is coherent with the most common scenario where @Modifies will be used for methods that are not related to a class field, as in the Person.setName() in the former example. Also note that @Modifies is not transitive, meaning that if another method internally invokes Person.setName() it won't be enough to annotate it with @Modifies( { "name" } ); it is necessary to use @Modifies( { "firstName", "lastName" } ) on that method as well. Very likely @Modifies transitivity will be implemented in the next release.
Regarding nested accessors, the Drools engine will be notified only of top level field changes. In other words, a pattern match like:
Person ( address.city.name == "London" )
will be re-evaluated only for modifications of the 'address' property of a Person object. In the same way, the constraint analysis is currently strictly limited to what is inside a pattern. Another example may help to clarify this. An LHS like the following:
$p : Person( )
Car( owner == $p.name )
will not listen on modifications of the person’s name, while this one will do:
Person( $name : name )
Car( owner == $name )
To overcome this problem it is possible to annotate the pattern with @watch as follows:
$p : Person( ) @watch ( name )
Car( owner = $p.name )
Indeed, annotating a pattern with @watch allows you to modify the inferred set of properties for which that pattern will react. Note that the properties named in the @watch annotation are added to the ones automatically inferred, but it is also possible to explicitly exclude one or more of them by prepending their name with a !, and to make the pattern listen for all or none of the properties of the type used in the pattern with the wildcards * and !* respectively. So, for example, you can annotate a pattern in the LHS of a rule like:
// listens for changes on both firstName (inferred) and lastName
Person( firstName == $expectedFirstName ) @watch( lastName )
// listens for all the properties of the Person bean
Person( firstName == $expectedFirstName ) @watch( * )
// listens for changes on lastName and explicitly exclude firstName
Person( firstName == $expectedFirstName ) @watch( lastName, !firstName )
// listens for changes on all the properties except the age one
Person( firstName == $expectedFirstName ) @watch( *, !age )
Since it doesn't make sense to use this annotation on a pattern using a type annotated with @ClassReactive, the rule compiler will raise a compilation error if you try to do so. Duplicated usage of the same property in @watch (for example: @watch( firstName, !firstName )) also results in a compilation error. In a next release we will make the automatic detection of the properties to be listened to smarter by doing analysis even outside of the pattern.
It is also possible to enable this feature only on specific types of your model, or to completely disallow it, by using an option of the KnowledgeBuilderConfiguration. In particular this PropertySpecificOption can have one of the following 3 values:
- DISABLED => the feature is turned off and all the other related annotations are just ignored
- ALLOWED => types are not property reactive unless they are annotated with @PropertyReactive (which is the dual of @ClassReactive)
- ALWAYS => all types are property reactive. This is the default behavior
So, for example, to have a KnowledgeBuilder for which property reactivity is disabled by default:
KnowledgeBuilderConfiguration config = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
config.setOption(PropertySpecificOption.ALLOWED);
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(config);
In this last case it will be possible to reenable the property reactivity feature on a specific type by annotating it with @PropertyReactive.
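In that configuration, a sketch of how a single type could opt back in to property reactivity directly in DRL (the Person type is just an example):
declare Person
    @propertyReactive
    firstName : String
    lastName : String
end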
It is important to notice that property reactivity is automatically available only for modifications performed inside the consequence of a rule. Conversely, a programmatic update is unaware of which of the object's properties have been changed, so it is unable to use this feature.
To workaround this limitation it is possible to optionally specify in an update statement the names of the properties that have been changed in the modified object as in the following example:
Person me = new Person("me", 40);
FactHandle meHandle = ksession.insert( me );
me.setAge(41);
me.setAddress("California Avenue");
ksession.update( meHandle, me, "age", "address" );
5.8.3.6. Basic conditional elements
Conditional Element and
The Conditional Element "and"
is used to group other Conditional Elements into a logical conjunction.
Drools supports both prefix and
and infix and
.
Traditional infix and
is supported:
//infixAnd
Cheese( cheeseType : type ) and Person( favouriteCheese == cheeseType )
Explicit grouping with parentheses is also supported:
//infixAnd with grouping
( Cheese( cheeseType : type ) and
  ( Person( favouriteCheese == cheeseType ) or
    Person( favouriteCheese == cheeseType ) ) )
The symbol && (as an alternative to and) is deprecated, but it is still supported in the syntax for backwards compatibility.
Prefix and
is also supported:
(and Cheese( cheeseType : type )
Person( favouriteCheese == cheeseType ) )
The root element of the LHS is an implicit prefix and
and doesn’t need to be specified:
when
Cheese( cheeseType : type )
Person( favouriteCheese == cheeseType )
then
...
Conditional Element or
The Conditional Element or
is used to group other Conditional Elements into a logical disjunction.
Drools supports both prefix or
and infix or
.
Traditional infix or
is supported:
//infixOr
Cheese( cheeseType : type ) or Person( favouriteCheese == cheeseType )
Explicit grouping with parentheses is also supported:
//infixOr with grouping
( Cheese( cheeseType : type ) or
  ( Person( favouriteCheese == cheeseType ) and
    Person( favouriteCheese == cheeseType ) ) )
The symbol || (as an alternative to or) is deprecated, but it is still supported in the syntax for backwards compatibility.
Prefix or
is also supported:
(or Person( sex == "f", age > 60 )
Person( sex == "m", age > 65 )
)
The behavior of the Conditional Element or is different from the connective || used for constraints and restrictions in field constraints. The Drools engine has no direct understanding of or; instead, via a number of logic transformations, a rule with or is rewritten as a number of subrules, and each subrule can activate and fire like any normal rule.
The Conditional Element or
also allows for optional pattern binding.
This means that each resulting subrule will bind its pattern to the pattern binding.
Each pattern must be bound separately, using eponymous variables:
pensioner : ( Person( sex == "f", age > 60 ) or Person( sex == "m", age > 65 ) )
(or pensioner : Person( sex == "f", age > 60 )
pensioner : Person( sex == "m", age > 65 ) )
Since the conditional element or
results in multiple subrule generation, one for each possible logical outcome, the example above would result in the internal generation of two rules.
These two rules work independently within the Working Memory, which means both can match, activate and fire - there is no shortcutting.
The best way to think of the conditional element or
is as a shortcut for generating two or more similar rules.
When you think of it that way, it’s clear that for a single rule there could be multiple activations if two or more terms of the disjunction are true.
Conditional Element not
The CE not
is first order logic’s non-existential quantifier and checks for the non-existence of something in the Working Memory.
Think of "not" as meaning "there must be none of…".
The keyword not
may be followed by parentheses around the CEs that it applies to.
In the simplest case of a single pattern (like below) you may optionally omit the parentheses.
not Bus()
// Brackets are optional:
not Bus(color == "red")
// Brackets are optional:
not ( Bus(color == "red", number == 42) )
// "not" with nested infix and - two patterns,
// brackets are required:
not ( Bus(color == "red") and
Bus(color == "blue") )
Conditional Element exists
The CE exists
is first order logic’s existential quantifier and checks for the existence of something in the Working Memory.
Think of "exists" as meaning "there is at least one..". It is different from just having the pattern on its own, which is more like saying "for each one of…". If you use exists
with a pattern, the rule will only activate at most once, regardless of how much data there is in working memory that matches the condition inside of the exists
pattern.
Since only the existence matters, no bindings will be established.
The keyword exists
must be followed by parentheses around the CEs that it applies to.
In the simplest case of a single pattern (like below) you may omit the parentheses.
exists Bus()
exists Bus(color == "red")
// brackets are optional:
exists ( Bus(color == "red", number == 42) )
// "exists" with nested infix and,
// brackets are required:
exists ( Bus(color == "red") and
Bus(color == "blue") )
5.8.3.7. Advanced conditional elements
Conditional Element forall
The Conditional Element forall
completes the First Order Logic support in Drools.
The Conditional Element forall
evaluates to true when all facts that match the first pattern match all the remaining patterns.
Example:
rule "All English buses are red"
when
forall( $bus : Bus( type == 'english')
Bus( this == $bus, color == 'red' ) )
then
// all English buses are red
end
In the above rule, we "select" all Bus objects whose type is "english". Then, for each fact that matches this pattern we evaluate the following patterns and if they match, the forall CE will evaluate to true.
To state that all facts of a given type in the working memory must match a set of constraints, forall
can be written with a single pattern for simplicity.
Example:
rule "All Buses are Red"
when
forall( Bus( color == 'red' ) )
then
// all Bus facts are red
end
Another example shows multiple patterns inside the forall
:
rule "all employees have health and dental care programs"
when
forall( $emp : Employee()
HealthCare( employee == $emp )
DentalCare( employee == $emp )
)
then
// all employees have health and dental care
end
Forall can be nested inside other CEs.
For instance, forall
can be used inside a not
CE.
Note that only single patterns have optional parentheses, so that with a nested forall
parentheses must be used:
rule "not all employees have health and dental care"
when
not ( forall( $emp : Employee()
HealthCare( employee == $emp )
DentalCare( employee == $emp ) )
)
then
// not all employees have health and dental care
end
As a side note, forall( p1 p2 p3…)
is equivalent to writing:
not(p1 and not(and p2 p3...))
Also, it is important to note that forall
is a scope delimiter.
Therefore, it can use any previously bound variable, but no variable bound inside it will be available for use outside of it.
Conditional Element from
The Conditional Element from
enables users to specify an arbitrary source for data to be matched by LHS patterns.
This allows the Drools engine to reason over data not in the Working Memory.
The data source could be a sub-field on a bound variable or the results of a method call.
It is a powerful construction that allows out of the box integration with other application components and frameworks.
One common example is the integration with data retrieved on-demand from databases using hibernate named queries.
The expression used to define the object source is any expression that follows regular MVEL syntax. Therefore, it allows you to easily use object property navigation, execute method calls and access maps and collections elements.
Here is a simple example of reasoning and binding on another pattern sub-field:
rule "validate zipcode"
when
Person( $personAddress : address )
Address( zipcode == "23920W") from $personAddress
then
// zip code is ok
end
With all the flexibility from the new expressiveness in the Drools engine you can slice and dice this problem many ways. This is the same but shows how you can use a graph notation with the 'from':
rule "validate zipcode"
when
$p : Person( )
$a : Address( zipcode == "23920W") from $p.address
then
// zip code is ok
end
Previous examples were evaluations using a single pattern.
The CE from
also support object sources that return a collection of objects.
In that case, from
will iterate over all objects in the collection and try to match each of them individually.
For instance, if we want a rule that applies 10% discount to each item in an order, we could do:
rule "apply 10% discount to all items over US$ 100,00 in an order"
when
$order : Order()
$item : OrderItem( value > 100 ) from $order.items
then
// apply discount to $item
end
The above example will cause the rule to fire once for each item whose value is greater than 100 for each given order.
You must take caution, however, when using from
, especially in conjunction with the lock-on-active
rule attribute as it may produce unexpected results.
Consider the example provided earlier, but now slightly modified as follows:
rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
$p : Person( )
$a : Address( state == "NC") from $p.address
then
modify ($p) {} // Assign person to sales region 1 in a modify block
end
rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
$p : Person( )
$a : Address( city == "Raleigh") from $p.address
then
modify ($p) {} // Apply discount to person in a modify block
end
In the above example, persons in Raleigh, NC should be assigned to sales region 1 and receive a discount; i.e., you would expect both rules to activate and fire. Instead you will find that only the second rule fires.
If you were to turn on the audit log, you would also see that when the second rule fires, it deactivates the first rule.
Since the rule attribute lock-on-active
prevents a rule from creating new activations when a set of facts change, the first rule fails to reactivate.
Though the set of facts have not changed, the use of from
returns a new fact for all intents and purposes each time it is evaluated.
First, it’s important to review why you would use the above pattern.
You may have many rules across different rule-flow groups.
When rules modify working memory and other rules downstream of your RuleFlow (in different rule-flow groups) need to be reevaluated, the use of modify
is critical.
You don’t, however, want other rules in the same rule-flow group to place activations on one another recursively.
In this case, the no-loop
attribute is ineffective, as it would only prevent a rule from activating itself recursively.
Hence, you resort to lock-on-active
.
There are several ways to address this issue:
-
Avoid the use of
from
when you can assert all facts into working memory or use nested object references in your constraint expressions (shown below). -
Place the variable that is assigned and used in the modify block as the last condition in your LHS.
-
Avoid the use of
lock-on-active
when you can explicitly manage how rules within the same rule-flow group place activations on one another (explained below).
The preferred solution is to minimize use of from
when you can assert all your facts into working memory directly.
In the example above, both the Person and Address instance can be asserted into working memory.
In this case, because the graph is fairly simple, an even easier solution is to modify your rules as follows:
rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
$p : Person(address.state == "NC" )
then
modify ($p) {} // Assign person to sales region 1 in a modify block
end
rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
$p : Person(address.city == "Raleigh" )
then
modify ($p) {} //Apply discount to person in a modify block
end
Now, you will find that both rules fire as expected.
However, it is not always possible to access nested facts as above.
Consider an example where a Person holds one or more Addresses and you wish to use an existential quantifier to match people with at least one address that meets certain conditions.
In this case, you would have to resort to the use of from
to reason over the collection.
There are several ways to use from
to achieve this and not all of them exhibit an issue with the use of lock-on-active
.
For example, the following use of from
causes both rules to fire as expected:
rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
$p : Person($addresses : addresses)
exists (Address(state == "NC") from $addresses)
then
modify ($p) {} // Assign person to sales region 1 in a modify block
end
rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
$p : Person($addresses : addresses)
exists (Address(city == "Raleigh") from $addresses)
then
modify ($p) {} // Apply discount to person in a modify block
end
However, the following slightly different approach does exhibit the problem:
rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
$assessment : Assessment()
$p : Person()
$addresses : List() from $p.addresses
exists (Address( state == "NC") from $addresses)
then
modify ($assessment) {} // Modify assessment in a modify block
end
rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
$assessment : Assessment()
$p : Person()
$addresses : List() from $p.addresses
exists (Address( city == "Raleigh") from $addresses)
then
modify ($assessment) {} // Modify assessment in a modify block
end
In the above example, the $addresses variable is returned from the use of from
.
The example also introduces a new object, assessment, to highlight one possible solution in this case.
If the $assessment variable assigned in the condition (LHS) is moved to the last condition in each rule, both rules fire as expected.
Though the above examples demonstrate how to combine the use of from
with lock-on-active
where no loss of rule activations occurs, they carry the drawback of placing a dependency on the order of conditions on the LHS.
In addition, the solutions present greater complexity for the rule author in terms of keeping track of which conditions may create issues.
A better alternative is to assert more facts into working memory.
In this case, a person’s addresses may be asserted into working memory and the use of from
would not be necessary.
There are cases, however, where asserting all data into working memory is not practical and we need to find other solutions.
Another option is to reevaluate the need for lock-on-active
.
An alternative to lock-on-active
is to directly manage how rules within the same rule-flow group activate one another by including conditions in each rule that prevent rules from activating each other recursively when working memory is modified.
For example, in the case above where a discount is applied to citizens of Raleigh, a condition may be added to the rule that checks whether the discount has already been applied.
If so, the rule does not activate.
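A sketch of that alternative, assuming a hypothetical boolean discountApplied property on Person; the extra constraint prevents the rule from matching again once the discount has been recorded:
rule "Apply a discount to people in the city of Raleigh"
    ruleflow-group "test"
when
    $p : Person( address.city == "Raleigh", discountApplied == false )
then
    modify ($p) { setDiscountApplied( true ) } // apply and record the discount
end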
A pattern containing a from clause cannot be followed by another pattern starting with a parenthesis, as in the following example:
rule R when
    $l : List( )
    String() from $l
    (String() or Number())
then end
This is because in that case the DRL parser reads the from expression as "from $l (String() or Number())" and it is impossible to disambiguate this expression from a function call. The straightforward fix is to wrap the from clause in parentheses as well:
rule R when
    $l : List( )
    (String() from $l)
    (String() or Number())
then end
Conditional Element collect
The Conditional Element collect
allows rules to reason over a collection of objects obtained from the given source or from the working memory.
In First Order Logic terms this is the cardinality quantifier.
A simple example:
import java.util.ArrayList
rule "Raise priority if system has more than 3 pending alarms"
when
$system : System()
$alarms : ArrayList( size >= 3 )
from collect( Alarm( system == $system, status == 'pending' ) )
then
// Raise priority, because system $system has
// 3 or more alarms pending. The pending alarms
// are $alarms.
end
In the above example, the rule will look for all pending alarms in the working memory for each given system and group them in ArrayLists. If 3 or more alarms are found for a given system, the rule will fire.
The result pattern of collect
can be any concrete class that implements the java.util.Collection
interface and provides a default no-arg public constructor.
This means that you can use Java collections like ArrayList, LinkedList, HashSet, etc., or your own class, as long as it implements the java.util.Collection
interface and provide a default no-arg public constructor.
Both source and result patterns can be constrained as any other pattern.
Variables bound before the collect
CE are in the scope of both source and result patterns and therefore you can use them to constrain both your source and result patterns.
But note that collect
is a scope delimiter for bindings, so that any binding made inside of it is not available for use outside of it.
Collect accepts nested from
CEs.
The following example is a valid use of "collect":
import java.util.LinkedList;
rule "Send a message to all mothers"
when
$town : Town( name == 'Paris' )
$mothers : LinkedList()
from collect( Person( gender == 'F', children > 0 )
from $town.getPeople()
)
then
// send a message to all mothers
end
Conditional Element accumulate
The Conditional Element accumulate
is a more flexible and powerful form of collect
, in the sense that it can be used to do what collect
does and also achieve results that the CE collect
is not capable of achieving.
Accumulate allows a rule to iterate over a collection of objects, executing custom actions for each of the elements, and at the end, it returns a result object.
Accumulate supports both the use of pre-defined accumulate functions, or the use of inline custom code. Inline custom code should be avoided though, as it is harder for rule authors to maintain, and frequently leads to code duplication. Accumulate functions are easier to test and reuse.
The Accumulate CE also supports multiple different syntaxes. The preferred syntax is the top level accumulate, as noted below, but all other syntaxes are supported for backward compatibility.
Accumulate CE (preferred syntax)
The top level accumulate syntax is the most compact and flexible syntax. The simplified syntax is as follows:
accumulate( <source pattern>; <functions> [;<constraints>] )
For instance, a rule to calculate the minimum, maximum and average temperature reading for a given sensor and that raises an alarm if the minimum temperature is under 20C degrees and the average is over 70C degrees could be written in the following way, using Accumulate:
The DRL language defines "acc" as a synonym of "accumulate". The author might prefer to use "acc" as a less verbose keyword or the full keyword "accumulate" for legibility.
rule "Raise alarm"
when
$s : Sensor()
accumulate( Reading( sensor == $s, $temp : temperature );
$min : min( $temp ),
$max : max( $temp ),
$avg : average( $temp );
$min < 20, $avg > 70 )
then
// raise the alarm
end
In the above example, min, max and average are Accumulate Functions and will calculate the minimum, maximum and average temperature values over all the readings for each sensor.
Drools ships with several built-in accumulate functions, including:
-
average
-
min
-
max
-
count
-
sum
-
variance
-
standardDeviation
-
collectList
-
collectSet
These common functions accept any expression as input. For instance, if someone wants to calculate the average profit on all items of an order, a rule could be written using the average function:
rule "Average profit"
when
$order : Order()
accumulate( OrderItem( order == $order, $cost : cost, $price : price );
$avgProfit : average( 1 - $cost / $price ) )
then
// average profit for $order is $avgProfit
end
Accumulate Functions are all pluggable.
That means that if needed, custom, domain specific functions can easily be added to the Drools engine and rules can start to use them without any restrictions.
To implement a new Accumulate Function all one needs to do is to create a Java class that implements the org.kie.api.runtime.rule.AccumulateFunction
interface.
As an example of an Accumulate Function implementation, the following is the implementation of the average
function:
/**
* An implementation of an accumulator capable of calculating average values
*/
public class AverageAccumulateFunction implements org.kie.api.runtime.rule.AccumulateFunction<AverageAccumulateFunction.AverageData> {
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
}
public void writeExternal(ObjectOutput out) throws IOException {
}
public static class AverageData implements Externalizable {
public int count = 0;
public double total = 0;
public AverageData() {}
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
count = in.readInt();
total = in.readDouble();
}
public void writeExternal(ObjectOutput out) throws IOException {
out.writeInt(count);
out.writeDouble(total);
}
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#createContext()
*/
public AverageData createContext() {
return new AverageData();
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#init(java.io.Serializable)
*/
public void init(AverageData context) {
context.count = 0;
context.total = 0;
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#accumulate(java.io.Serializable, java.lang.Object)
*/
public void accumulate(AverageData context,
Object value) {
context.count++;
context.total += ((Number) value).doubleValue();
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#reverse(java.io.Serializable, java.lang.Object)
*/
public void reverse(AverageData context, Object value) {
context.count--;
context.total -= ((Number) value).doubleValue();
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#getResult(java.io.Serializable)
*/
public Object getResult(AverageData context) {
return new Double( context.count == 0 ? 0 : context.total / context.count );
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#supportsReverse()
*/
public boolean supportsReverse() {
return true;
}
/* (non-Javadoc)
* @see org.kie.api.runtime.rule.AccumulateFunction#getResultType()
*/
public Class< ? > getResultType() {
return Number.class;
}
}
The code for the function is very simple, as we could expect, as all the "dirty" integration work is done by the Drools engine. Finally, to use the function in the rules, the author can import it using the "import accumulate" statement:
import accumulate <class_name> <function_name>
For instance, if one implements the class some.package.VarianceFunction
function that implements the variance
function and wants to use it in the rules, he would do the following:
import accumulate some.package.VarianceFunction variance
rule "Calculate Variance"
when
accumulate( Test( $s : score ), $v : variance( $s ) )
then
// the variance of the test scores is $v
end
The built-in functions (sum, average, etc.) are imported automatically by the Drools engine. Only user-defined custom accumulate functions need to be explicitly imported.
For backward compatibility, Drools still supports the configuration of accumulate functions through configuration files and system properties, but this is a deprecated method. In order to configure the variance function from the previous example using the configuration file or system property, the user would set a property like this:
drools.accumulate.function.variance = some.package.VarianceFunction
Please note that "drools.accumulate.function." is a prefix that must always be used in the property name.
Alternate Syntax: single function with return type
The accumulate syntax evolved over time with the goal of becoming more compact and expressive. Nevertheless, Drools still supports previous syntaxes for backward compatibility purposes.
In case the rule is using a single accumulate function on a given accumulate, the author may add a pattern for the result object and use the "from" keyword to link it to the accumulate result. Example: a rule to apply a 10% discount on orders over $100 could be written in the following way:
rule "Apply 10% discount to orders over US$ 100,00"
when
$order : Order()
$total : Number( doubleValue > 100 )
from accumulate( OrderItem( order == $order, $value : value ),
sum( $value ) )
then
// apply discount to $order
end
In the above example, the accumulate element is using only one function (sum), and so the rule's author opted to explicitly write a pattern for the result type of the accumulate function (Number) and write the constraints inside it. There are no problems in using this syntax over the compact syntax presented before, except that it is a bit more verbose. Also note that it is not allowed to use both the return type and the functions binding in the same accumulate statement.
Compile-time checks are performed in order to ensure the pattern used with the "from
" keyword is assignable from the result of the accumulate function used.
With this syntax, the "from" keyword binds the result pattern to the single object returned by the accumulate function.
In the above example, "$total
" is bound to the result returned by the accumulate sum() function.
As another example however, if the result of the accumulate function is a collection, "from
" still binds to the single result and it does not iterate:
rule "Person names"
when
$x : Object() from accumulate(MyPerson( $val : name );
collectList( $val ) )
then
// $x is a List
end
The bound "$x : Object()
" is the List itself, returned by the collectList accumulate function used.
This is an important distinction to highlight, as the "from
" keyword can also be used separately of accumulate, to iterate over the elements of a collection:
rule "Iterate the numbers"
when
$xs : List()
$x : Integer() from $xs
then
// $x matches and binds to each Integer in the collection
end
While this syntax is still supported for backward compatibility purposes, for this and other reasons we encourage rule authors to use the preferred Accumulate CE syntax (described in the previous section) instead, so as to avoid potential pitfalls such as those described by these examples.
Accumulate with inline custom code
The use of accumulate with inline custom code is not a good practice for several reasons, including difficulties in maintaining and testing rules that use it, as well as the inability to reuse that code. Implementing your own accumulate functions is very simple and straightforward; they are easy to unit test and to use. This form of accumulate is supported for backward compatibility only.
Another possible syntax for the accumulate is to define inline custom code, instead of using accumulate functions. As noted in the previous warning, this is discouraged for the stated reasons.
The general syntax of the accumulate
CE with inline custom code is:
<result pattern> from accumulate( <source pattern>,
init( <init code> ),
action( <action code> ),
reverse( <reverse code> ),
result( <result expression> ) )
The meaning of each of the elements is the following:
-
<source pattern>: the source pattern is a regular pattern that the Drools engine will try to match against each of the source objects.
-
<init code>: this is a semantic block of code in the selected dialect that will be executed once for each tuple, before iterating over the source objects.
-
<action code>: this is a semantic block of code in the selected dialect that will be executed for each of the source objects.
-
<reverse code>: this is an optional semantic block of code in the selected dialect that if present will be executed for each source object that no longer matches the source pattern. The objective of this code block is to undo any calculation done in the <action code> block, so that the Drools engine can do decremental calculation when a source object is modified or deleted, hugely improving performance of these operations.
-
<result expression>: this is a semantic expression in the selected dialect that is executed after all source objects are iterated.
-
<result pattern>: this is a regular pattern that the Drools engine tries to match against the object returned from the <result expression>. If it matches, the accumulate conditional element evaluates to true and the Drools engine proceeds with the evaluation of the next CE in the rule. If it does not match, the accumulate CE evaluates to false and the Drools engine stops evaluating CEs for that rule.
It is easier to understand if we look at an example:
rule "Apply 10% discount to orders over US$ 100,00"
when
$order : Order()
$total : Number( doubleValue > 100 )
from accumulate( OrderItem( order == $order, $value : value ),
init( double total = 0; ),
action( total += $value; ),
reverse( total -= $value; ),
result( total ) )
then
// apply discount to $order
end
In the above example, for each Order
in the Working Memory, the Drools engine will execute the init
code initializing the total variable to zero.
Then it will iterate over all OrderItem
objects for that order, executing the action for each one (in the example, it will sum the value of all items into the total variable). After iterating over all OrderItem
objects, it will return the value corresponding to the result
expression (in the above example, the value of variable total
). Finally, the Drools engine will try to match the result with the Number
pattern, and if the double value is greater than 100, the rule will fire.
The example used Java as the semantic dialect, and as such, note that the usage of the semicolon as statement delimiter is mandatory in the init, action and reverse code blocks. The result is an expression and, as such, it does not admit ';'. If you use any other dialect, you must comply with that dialect’s specific syntax.
As mentioned before, the reverse code is optional, but it is strongly recommended that the user writes it in order to benefit from the improved performance on update and delete.
The accumulate
CE can be used to execute any action on source objects.
The following example instantiates and populates a custom object:
rule "Accumulate using custom objects"
when
$person : Person( $likes : likes )
$cheesery : Cheesery( totalAmount > 100 )
from accumulate( $cheese : Cheese( type == $likes ),
init( Cheesery cheesery = new Cheesery(); ),
action( cheesery.addCheese( $cheese ); ),
reverse( cheesery.removeCheese( $cheese ); ),
result( cheesery ) );
then
// do something
end
5.8.3.8. Conditional Element eval
The conditional element eval
is essentially a catch-all which allows any semantic code (that returns a primitive boolean) to be executed.
This code can refer to variables that were bound in the LHS of the rule, and functions in the rule package.
Overuse of eval reduces the declarativeness of your rules and can result in a poorly performing engine.
While eval
can be used anywhere in the patterns, the best practice is to add it as the last conditional element in the LHS of a rule.
Evals cannot be indexed and thus are not as efficient as Field Constraints. However, this makes them ideal for cases where functions return values that change over time, which is not allowed within Field Constraints.
For folks who are familiar with Drools 2.x lineage, the old Drools parameter and condition tags are equivalent to binding a variable to an appropriate type, and then using it in an eval node.
p1 : Parameter()
p2 : Parameter()
eval( p1.getList().containsKey( p2.getItem() ) )
p1 : Parameter()
p2 : Parameter()
// call function isValid in the LHS
eval( isValid( p1, p2 ) )
5.8.4. The Right Hand Side (then)
5.8.4.1. Usage
The Right Hand Side (RHS) is a common name for the consequence or action part of the rule; this part should contain a list of actions to be executed. It is bad practice to use imperative or conditional code in the RHS of a rule, as a rule should be atomic in nature - "when this, then do this", not "when this, maybe do this". The RHS part of a rule should also be kept small, thus keeping it declarative and readable. If you find you need imperative and/or conditional code in the RHS, then maybe you should be breaking that rule down into multiple rules. The main purpose of the RHS is to insert, delete or modify working memory data. To assist with that there are a few convenience methods you can use to modify working memory without having to first reference a working memory instance.
update(
object,
handle);
will tell the Drools engine that an object has changed (one that has been bound to something on the LHS) and rules may need to be reconsidered.
update(
object);
can also be used; here the Knowledge Helper will look up the facthandle for you, via an identity check, for the passed object.
(Note that if you provide Property Change Listeners to your Java beans that you are inserting into the Drools engine, you can avoid the need to call update()
when the object changes.) After a fact’s field values have changed you must call update before changing another fact, or you will cause problems with the indexing within the Drools engine.
The modify keyword avoids this problem.
insert(new
Something());
will place a new object of your creation into the Working Memory.
insertLogical(new
Something());
is similar to insert, but the object will be automatically deleted when there are no more facts to support the truth of the currently firing rule.
delete(
handle);
removes an object from Working Memory.
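The following is a minimal sketch pulling these convenience methods together, assuming hypothetical Order and EscalationTask fact types:
rule "Escalate large order"
when
    $o : Order( total > 1000, escalated == false )
then
    $o.setEscalated( true );
    update( $o );                        // tell the engine the bound fact has changed
    insert( new EscalationTask( $o ) );  // place a new fact into Working Memory
end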
These convenience methods are basically macros that provide short cuts to the KnowledgeHelper
instance that lets you access your Working Memory from rules files.
The predefined variable drools
of type KnowledgeHelper
lets you call several other useful methods.
(Refer to the KnowledgeHelper
interface documentation for more advanced operations).
-
The call
drools.halt()
terminates rule execution immediately. This is required for returning control to the point whence the current session was put to work with fireUntilHalt()
. -
Methods
insert(Object o)
,update(Object o)
anddelete(Object o)
can be called ondrools
as well, but due to their frequent use they can be called without the object reference. -
drools.getWorkingMemory()
returns theWorkingMemory
object. -
drools.setFocus( String s)
sets the focus to the specified agenda group. -
drools.getRule().getName()
, called from a rule’s RHS, returns the name of the rule. -
drools.getTuple()
returns the Tuple that matches the currently executing rule, anddrools.getActivation()
delivers the corresponding Activation. (These calls are useful for logging and debugging purposes.)
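For example (a minimal sketch, with a hypothetical Event fact type and a CleanUp agenda group), a consequence might combine some of these helpers:
rule "Route processed events to cleanup"
when
    $e : Event( processed == true )
then
    System.out.println( "Fired: " + drools.getRule().getName() );
    drools.setFocus( "CleanUp" );
end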
The full Knowledge Runtime API is exposed through another predefined variable, kcontext
, of type KieContext
.
Its method getKieRuntime()
delivers an object of type KieRuntime
, which, in turn, provides access to a wealth of methods, many of which are quite useful for coding RHS logic.
-
The call
kcontext.getKieRuntime().halt()
terminates rule execution immediately. -
The accessor
getAgenda()
returns a reference to this session’sAgenda
, which in turn provides access to the various rule groups: activation groups, agenda groups, and rule flow groups. A fairly common paradigm is the activation of some agenda group, which could be done with the lengthy call:
// give focus to the agenda group CleanUp
kcontext.getKieRuntime().getAgenda().getAgendaGroup( "CleanUp" ).setFocus();
(You can achieve the same using
drools.setFocus( "CleanUp" )
.) -
To run a query, you call
getQueryResults(String query)
, whereupon you may process the results, as explained in section Query. Usingkcontext.getKieRuntime().getQueryResults()
or usingdrools.getKieRuntime().getQueryResults()
is the proper method of running a query from a rule’s RHS, and the only supported way. -
A set of methods dealing with event management lets you, among other things, add and remove event listeners for the Working Memory and the Agenda.
-
Method
getKieBase()
returns theKieBase
object, the backbone of all the Knowledge in your system, and the originator of the current session. -
You can manage globals with
setGlobal(…)
,getGlobal(…)
andgetGlobals()
. -
Method
getEnvironment()
returns the runtime’sEnvironment
which works much like what you know as your operating system’s environment.
5.8.4.2. The modify
Statement
This language extension provides a structured approach to fact updates.
It combines the update operation with a number of setter calls to change the object’s fields.
This is the syntax schema for the modify
statement:
modify ( <fact-expression> ) {
<expression> [ , <expression> ]*
}
The parenthesized <fact-expression> must yield a fact object reference. The expression list in the block should consist of setter calls for the given object, to be written without the usual object reference, which is automatically prepended by the compiler.
The example illustrates a simple fact modification.
rule "modify stilton"
when
$stilton : Cheese(type == "stilton")
then
modify( $stilton ){
setPrice( 20 ),
setAge( "overripe" )
}
end
The advantages of using the modify statement are particularly clear when it is used in conjunction with fine-grained property change listeners. See the corresponding section for more details.
5.8.5. Conditional named consequences
Sometimes the constraint of having one single consequence for each rule can be somewhat limiting and lead to verbose repetitions that are difficult to maintain, as in the following example:
rule "Give 10% discount to customers older than 60"
when
$customer : Customer( age > 60 )
then
modify($customer) { setDiscount( 0.1 ) };
end
rule "Give free parking to customers older than 60"
when
$customer : Customer( age > 60 )
$car : Car ( owner == $customer )
then
modify($car) { setFreeParking( true ) };
end
It is already possible to partially overcome this problem by making the second rule extend the first one, as in:
rule "Give 10% discount to customers older than 60"
when
$customer : Customer( age > 60 )
then
modify($customer) { setDiscount( 0.1 ) };
end
rule "Give free parking to customers older than 60"
extends "Give 10% discount to customers older than 60"
when
$car : Car ( owner == $customer )
then
modify($car) { setFreeParking( true ) };
end
However, this feature makes it possible to define additional labelled consequences besides the default one in a single rule, so, for example, the two former rules can be compacted into a single one as follows:
rule "Give 10% discount and free parking to customers older than 60"
when
$customer : Customer( age > 60 )
do[giveDiscount]
$car : Car ( owner == $customer )
then
modify($car) { setFreeParking( true ) };
then[giveDiscount]
modify($customer) { setDiscount( 0.1 ) };
end
This last rule has 2 consequences: the usual default one, plus another one named "giveDiscount" that is activated, using the keyword do, as soon as a customer older than 60 is found in the KIE base, regardless of whether or not they own a car. The activation of a named consequence can also be guarded by an additional condition, as in this further example:
rule "Give free parking to customers older than 60 and 10% discount to golden ones among them"
when
$customer : Customer( age > 60 )
if ( type == "Golden" ) do[giveDiscount]
$car : Car ( owner == $customer )
then
modify($car) { setFreeParking( true ) };
then[giveDiscount]
modify($customer) { setDiscount( 0.1 ) };
end
The condition in the if statement is always evaluated on the pattern immediately preceding it. Finally, this last, slightly more complicated example shows how to switch over different conditions using a nested if/else statement:
rule "Give free parking and 10% discount to over 60 Golden customer and 5% to Silver ones"
when
$customer : Customer( age > 60 )
if ( type == "Golden" ) do[giveDiscount10]
else if ( type == "Silver" ) break[giveDiscount5]
$car : Car ( owner == $customer )
then
modify($car) { setFreeParking( true ) };
then[giveDiscount10]
modify($customer) { setDiscount( 0.1 ) };
then[giveDiscount5]
modify($customer) { setDiscount( 0.05 ) };
end
Here the purpose is to give a 10% discount AND free parking to Golden customers over 60, but only a 5% discount (without free parking) to the Silver ones. This result is achieved by activating the consequence named "giveDiscount5" using the keyword break instead of do. In fact do just schedules a consequence in the agenda, allowing the remaining part of the LHS to continue being evaluated as normal, while break also blocks any further pattern matching evaluation. Note, of course, that the activation of a named consequence not guarded by any condition with break doesn’t make sense (and generates a compile time error), since otherwise the LHS part following it would never be reachable.
5.8.6. A Note on Auto-boxing and Primitive Types
Drools attempts to preserve numbers in their primitive or object wrapper form, so a variable bound to an int primitive when used in a code block or expression will no longer need manual unboxing; unlike Drools 3.0, where all primitives were autoboxed, requiring manual unboxing. A variable bound to an object wrapper will remain as an object; the standard Java 5 auto-boxing and unboxing rules apply in this case. When evaluating field constraints, the system attempts to coerce one of the values into a comparable format, so a primitive is comparable to an object wrapper.
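For instance, in the following minimal sketch (assuming a hypothetical Person fact whose age field is a primitive int), the bound variable can be used directly:
rule "Over thirty"
when
    Person( $age : age )   // $age binds to the primitive int field
    eval( $age > 30 )      // usable directly in code, no manual unboxing required
then
    // ...
end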
5.9. Query
A query is a simple way to search the working memory for facts that match the stated conditions. Therefore, it contains only the structure of the LHS of a rule, so that you specify neither "when" nor "then". A query has an optional set of parameters, each of which can be optionally typed. If the type is not given, the type Object is assumed. The Drools engine will attempt to coerce the values as needed. Query names are global to the KieBase, so do not add queries of the same name to different packages for the same KieBase.
To return the results use ksession.getQueryResults("name")
, where "name" is the query’s name.
This returns a list of query results, which allow you to retrieve the objects that matched the query.
The first example presents a simple query for all the people over the age of 30. The second one, using parameters, combines the age limit with a location.
query "people over the age of 30"
person : Person( age > 30 )
end
query "people over the age of x" (int x, String y)
person : Person( age > x, location == y )
end
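The second, parameterized query can be invoked from Java by passing the argument values in declaration order (a minimal sketch; the session and argument values are assumed):
QueryResults results = ksession.getQueryResults( "people over the age of x", new Object[] { 30, "london" } );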
We iterate over the returned QueryResults using a standard "for" loop. Each element is a QueryResultsRow which we can use to access each of the columns in the tuple. These columns can be accessed by bound declaration name or index position.
QueryResults results = ksession.getQueryResults( "people over the age of 30" );
System.out.println( "we have " + results.size() + " people over the age of 30" );
System.out.println( "These people are are over 30:" );
for ( QueryResultsRow row : results ) {
Person person = ( Person ) row.get( "person" );
System.out.println( person.getName() + "\n" );
}
Support for positional syntax has been added for more compact code. By default the declared type order in the type declaration matches the argument position. But it is possible to override these using the @position annotation. This allows patterns to be used with positional arguments, instead of the more verbose named arguments.
declare Cheese
name : String @position(1)
shop : String @position(2)
price : int @position(0)
end
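Given the declaration above, the following two patterns are equivalent (a minimal sketch); the semicolon terminates the positional section:
Cheese( 10, "cheddar"; )
Cheese( price == 10, name == "cheddar" )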
The @Position annotation, in the org.drools.definition.type package, can be used to annotate original pojos on the classpath.
Currently only fields on classes can be annotated.
Inheritance of classes is supported, but not interfaces or methods.
The isContainedIn query below demonstrates the use of positional arguments in a pattern; Location(x, y;)
instead of Location( thing == x, location == y).
Queries can now call other queries; combined with optional query arguments, this provides derivation-query-style backward chaining. Positional and named syntax is supported for arguments. It is also possible to mix both positional and named, but positional must come first, separated by a semicolon. Literal expressions can be passed as query arguments, but at this stage you cannot mix expressions with variables. Here is an example of a query that calls another query. Note that 'z' here will always be an 'out' variable. The '?' symbol means the query is pull only; once the results are returned you will not receive further results as the underlying data changes.
declare Location
thing : String
location : String
end
query isContainedIn( String x, String y )
Location(x, y;)
or
( Location(z, y;) and ?isContainedIn(x, z;) )
end
As previously mentioned you can use live "open" queries to reactively receive changes over time from the query results, as the underlying data it queries against changes. Notice the "look" rule calls the query without using '?'.
query isContainedIn( String x, String y )
Location(x, y;)
or
( Location(z, y;) and isContainedIn(x, z;) )
end
rule look when
Person( $l : likes )
isContainedIn( $l, 'office'; )
then
insertLogical( $l + " is in the office" );
end
Drools supports unification for derivation queries; in short this means that arguments are optional. It is possible to call queries from Java leaving arguments unspecified using the static field org.drools.core.runtime.rule.Variable.v - note you must use 'v' and not an alternative instance of Variable. These are referred to as 'out' arguments. Note that the query itself does not declare at compile time whether an argument is an 'in' or an 'out'; this can be defined purely at runtime on each use. The following example will return all objects contained in the office.
QueryResults results = ksession.getQueryResults( "isContainedIn", new Object[] { Variable.v, "office" } );
List<List<String>> l = new ArrayList<List<String>>();
for ( QueryResultsRow r : results ) {
    l.add( Arrays.asList( new String[] { (String) r.get( "x" ), (String) r.get( "y" ) } ) );
}
The algorithm uses stacks to handle recursion, so the method stack will not blow up.
It is also possible to use the field of a fact as an input argument for a query, as in:
query contains(String $s, String $c)
$s := String( this.contains( $c ) )
end
rule PersonNamesWithA when
$p : Person()
contains( $p.name, "a"; )
then
end
and, more generally, any kind of valid expression, as in:
query checkLength(String $s, int $l)
$s := String( length == $l )
end
rule CheckPersonNameLength when
$i : Integer()
$p : Person()
checkLength( $p.name, 1 + $i + $p.age; )
then
end
The following is not yet supported:
-
List and Map unification
-
Expression unification - pred( X, X + 1, X * Y / 7 )
5.10. Domain Specific Languages
Domain Specific Languages (or DSLs) are a way of creating a rule language that is dedicated to your problem domain. A set of DSL definitions consists of transformations from DSL "sentences" to DRL constructs, which lets you use all of the underlying rule language and engine features. Given a DSL, you write rules in DSL rule (or DSLR) files, which will be translated into DRL files.
DSL and DSLR files are plain text files, and you can use any text editor to create and modify them. But there are also DSL and DSLR editors, both in the IDE as well as in the web based BRMS, and you can use those as well, although they may not provide you with the full DSL functionality.
5.10.1. When to Use a DSL
DSLs can serve as a layer of separation between rule authoring (and rule authors) and the technical intricacies resulting from the modelling of domain objects and the Drools engine’s native language and methods. If your rules need to be read and validated by domain experts (such as business analysts, for instance) who are not programmers, you should consider using a DSL; it hides implementation details and focuses on the rule logic proper. DSL sentences can also act as "templates" for conditional elements and consequence actions that are used repeatedly in your rules, possibly with minor variations. You may define DSL sentences as being mapped to these repeated phrases, with parameters providing a means for accommodating those variations.
DSLs have no impact on the Drools engine at runtime; they are just a compile-time feature, requiring a special parser and transformer.
5.10.2. DSL Basics
The Drools DSL mechanism allows you to customise conditional expressions and consequence actions. A global substitution mechanism ("keyword") is also available.
[when]Something is {colour}=Something(colour=="{colour}")
In the preceding example, [when]
indicates the scope of the expression, i.e., whether it is valid for the LHS or the RHS of a rule.
The part after the bracketed keyword is the expression that you use in the rule; typically a natural language expression, but it doesn’t have to be.
The part to the right of the equal sign ("=") is the mapping of the expression into the rule language.
The form of this string depends on its destination, RHS or LHS.
If it is for the LHS, then it ought to be a term according to the regular LHS syntax; if it is for the RHS then it might be a Java statement.
Whenever the DSL parser matches a line from the rule file written in the DSL with an expression in the DSL definition, it performs three steps of string manipulation.
First, it extracts the string values appearing where the expression contains variable names in braces (here: {colour}
). Then, the values obtained from these captures are then interpolated wherever that name, again enclosed in braces, occurs on the right hand side of the mapping.
Finally, the interpolated string replaces whatever was matched by the entire expression in the line of the DSL rule file.
Note that the expressions (i.e., the strings on the left hand side of the equal sign) are used as regular expressions in a pattern matching operation against a line of the DSL rule file, matching all or part of a line. This means you can use (for instance) a '?' to indicate that the preceding character is optional. One good reason to use this is to overcome variations in natural language phrases of your DSL. But, given that these expressions are regular expression patterns, this also means that all "magic" characters of Java’s pattern syntax have to be escaped with a preceding backslash ('\').
It is important to note that the compiler transforms DSL rule files line by line. In the above example, all the text after "Something is " to the end of the line is captured as the replacement value for "{colour}", and this is used for interpolating the target string. This may not be exactly what you want. For instance, when you intend to merge different DSL expressions to generate a composite DRL pattern, you need to transform a DSLR line in several independent operations. The best way to achieve this is to ensure that the captures are surrounded by characteristic text - words or even single characters. As a result, the matching operation done by the parser plucks out a substring from somewhere within the line. In the example below, quotes are used as distinctive characters. Note that the characters that surround the capture are not included during interpolation, just the contents between them.
As a rule of thumb, use quotes for textual data that a rule editor may want to enter.
You can also enclose the capture with words to ensure that the text is correctly matched.
Both are illustrated by the following example.
Note that a single line such as Something is "green" and
another solid thing
is now correctly expanded.
[when]something is "{colour}"=Something(colour=="{colour}")
[when]another {state} thing=OtherThing(state=="{state}")
It is a good idea to avoid punctuation (other than quotes or apostrophes) in your DSL expressions as much as possible. The main reason is that punctuation is easy to forget for rule authors using your DSL. Another reason is that parentheses, the period and the question mark are magic characters, requiring escaping in the DSL definition.
In a DSL mapping, the braces "{" and "}" should only be used to enclose a variable definition or reference, resulting in a capture. If they should occur literally, either in the expression or within the replacement text on the right hand side, they must be escaped with a preceding backslash ("\"):
[then]do something= if (foo) \{ doSomething(); \}
If braces "{" and "}" should appear in the replacement string of a DSL definition, escape them with a backslash ('\'). |
# This is a comment to be ignored.
[when]There is a person with name of "{name}"=Person(name=="{name}")
[when]Person is at least {age} years old and lives in "{location}"=
Person(age >= {age}, location=="{location}")
[then]Log "{message}"=System.out.println("{message}");
[when]And = and
Given the above DSL examples, the following examples show the expansion of various DSLR snippets:
There is a person with name of "Kitty"
==> Person(name="Kitty")
Person is at least 42 years old and lives in "Atlanta"
==> Person(age >= 42, location=="Atlanta")
Log "boo"
==> System.out.println("boo");
There is a person with name of "Bob" And Person is at least 30 years old and lives in "Utah"
==> Person(name="Bob") and Person(age >= 30, location="Utah")
Don’t forget that if you are capturing plain text from a DSL rule line and want to use it as a string literal in the expansion, you must provide the quotes on the right hand side of the mapping.
You can chain DSL expressions together on one line, as long as it is clear to the parser where one ends and the next one begins and where the text representing a parameter ends. (Otherwise you risk getting all the text until the end of the line as a parameter value.) The DSL expressions are tried, one after the other, according to their order in the DSL definition file. After any match, all remaining DSL expressions are investigated, too.
The resulting DRL text may consist of more than one line.
Line ends in the replacement text are written as \n.
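For example (a minimal sketch following the Log definition shown earlier), a consequence entry whose replacement spans two DRL lines could be written as:
[then]Log "{message}" twice=System.out.println("{message}");\nSystem.out.println("{message}");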
5.10.3. Adding Constraints to Facts
A common requirement when writing rule conditions is to be able to add an arbitrary combination of constraints to a pattern. Given that a fact type may have many fields, having to provide an individual DSL statement for each combination would be plain folly.
The DSL facility allows you to add constraints to a pattern by a simple convention: if your DSL expression starts with a hyphen (minus character, "-") it is assumed to be a field constraint and, consequently, is added to the last pattern line preceding it.
For an example, let’s take a look at class Cheese, with the following fields: type, price, age and country.
We can express some LHS condition in normal DRL like the following
Cheese(age < 5, price == 20, type=="stilton", country=="ch")
The DSL definitions given below result in three DSL phrases which may be used to create any combination of constraint involving these fields.
[when]There is a Cheese with=Cheese()
[when]- age is less than {age}=age<{age}
[when]- type is '{type}'=type=='{type}'
[when]- country equal to '{country}'=country=='{country}'
You can then write rules with conditions like the following:
There is a Cheese with
- age is less than 42
- type is 'stilton'
The parser will pick up a line beginning with "-" and add it as a constraint to the preceding pattern, inserting a comma when it is required. For the preceding example, the resulting DRL is:
Cheese(age<42, type=='stilton')
Combining all numeric fields with all relational operators (according to the DSL expression "age is less than…" in the preceding example) produces an unwieldy amount of DSL entries. But you can define DSL phrases for the various operators and even a generic expression that handles any field constraint, as shown below. (Notice that the expression definition contains a regular expression in addition to the variable name.)
[when][]is less than or equal to=<=
[when][]is less than=<
[when][]is greater than or equal to=>=
[when][]is greater than=>
[when][]is equal to===
[when][]equals===
[when][]There is a Cheese with=Cheese()
[when][]- {field:\w*} {operator} {value:\d*}={field} {operator} {value}
Given these DSL definitions, you can write rules with conditions such as:
There is a Cheese with
- age is less than 42
- rating is greater than 50
- type equals 'stilton'
In this specific case, a phrase such as "is less than" is replaced by <
, and then the line matches the last DSL entry.
This removes the hyphen, but the final result is still added as a constraint to the preceding pattern.
After processing all of the lines, the resulting DRL text is:
Cheese(age<42, rating > 50, type=='stilton')
The order of the entries in the DSL is important if separate DSL expressions are intended to match the same line, one after the other.
5.10.4. Developing a DSL
A good way to get started is to write representative samples of the rules your application requires, and to test them as you develop. This will provide you with a stable framework of conditional elements and their constraints. Rules, both in DRL and in DSLR, refer to entities according to the data model representing the application data that should be subject to the reasoning process defined in rules. Notice that writing rules is generally easier if most of the data model’s types are facts.
Given an initial set of rules, it should be possible to identify recurring or similar code snippets and to mark variable parts as parameters. This provides reliable leads as to what might be a handy DSL entry. Also, make sure you have a full grasp of the jargon the domain experts are using, and base your DSL phrases on this vocabulary.
You may postpone implementation decisions concerning conditions and actions during this first design phase by leaving certain conditional elements and actions in their DRL form by prefixing a line with a greater sign (">"). (This is also handy for inserting debugging statements.)
During the next development phase, you should find that the DSL configuration stabilizes pretty quickly. New rules can be written by reusing the existing DSL definitions, or by adding a parameter to an existing condition or consequence entry.
Try to keep the number of DSL entries small. Using parameters lets you apply the same DSL sentence for similar rule patterns or constraints. But do not exaggerate: authors using the DSL should still be able to identify DSL phrases by some fixed text.
5.10.5. DSL and DSLR Reference
A DSL file is a text file in a line-oriented format. Its entries are used for transforming a DSLR file into a file according to DRL syntax.
-
A line starting with "#" or "//" (with or without preceding white space) is treated as a comment. A comment line starting with "#/" is scanned for words requesting a debug option, see below.
-
Any line starting with an opening bracket ("[") is assumed to be the first line of a DSL entry definition.
-
Any other line is appended to the preceding DSL entry definition, with the line end replaced by a space.
A DSL entry consists of the following four parts:
-
A scope definition, written as one of the keywords "when" or "condition", "then" or "consequence", "*" and "keyword", enclosed in brackets ("[" and "]"). This indicates whether the DSL entry is valid for the condition or the consequence of a rule, or both. A scope indication of "keyword" means that the entry has global significance, i.e., it is recognized anywhere in a DSLR file.
-
A type definition, written as a Java class name, enclosed in brackets. This part is optional unless the next part begins with an opening bracket. An empty pair of brackets is valid, too.
-
A DSL expression consists of a (Java) regular expression, with any number of embedded variable definitions, terminated by an equal sign ("="). A variable definition is enclosed in braces ("{" and "}"). It consists of a variable name and two optional attachments, separated by colons (":"). If there is one attachment, it is a regular expression for matching text that is to be assigned to the variable; if there are two attachments, the first one is a hint for the GUI editor and the second one the regular expression.
Note that all characters that are "magic" in regular expressions must be escaped with a preceding backslash ("\") if they should occur literally within the expression.
-
The remaining part of the line after the delimiting equal sign is the replacement text for any DSLR text matching the regular expression. It may contain variable references, i.e., a variable name enclosed in braces. Optionally, the variable name may be followed by an exclamation mark ("!") and a transformation function, see below.
Note that braces ("{" and "}") must be escaped with a preceding backslash ("\") if they should occur literally within the replacement string.
Debugging of DSL expansion can be turned on, selectively, by using a comment line starting with "#/" which may contain one or more words from the table presented below. The resulting output is written to standard output.
Word | Description |
---|---|
result | Prints the resulting DRL text, with line numbers. |
steps | Prints each expansion step of condition and consequence lines. |
keyword | Dumps the internal representation of all DSL entries with scope "keyword". |
when | Dumps the internal representation of all DSL entries with scope "when" or "*". |
then | Dumps the internal representation of all DSL entries with scope "then" or "*". |
usage | Displays a usage statistic of all DSL entries. |
Below are some sample DSL definitions, with comments describing the language features they illustrate.
# Comment: DSL examples
#/ debug: display result and usage
# keyword definition: replaces "regula" by "rule"
[keyword][]regula=rule
# conditional element: "T" or "t", "a" or "an", convert matched word
[when][][Tt]here is an? {entity:\w+}=
${entity!lc}: {entity!ucfirst} ()
# consequence statement: convert matched word, literal braces
[then][]update {entity:\w+}=modify( ${entity!lc} )\{ \}
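As a sketch of how these sample definitions expand, a keyword, a condition line, and a consequence line written for a hypothetical cheese entity would be transformed as follows:
regula "price check"
  ==> rule "price check"
There is a cheese
  ==> $cheese: Cheese ()
update cheese
  ==> modify( $cheese ){ }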
The transformation of a DSLR file proceeds as follows:
-
The text is read into memory.
-
Each of the "keyword" entries is applied to the entire text. First, the regular expression from the keyword definition is modified by replacing white space sequences with a pattern matching any number of white space characters, and by replacing variable definitions with a capture made from the regular expression provided with the definition, or with the default (".*?"). Then, the DSLR text is searched exhaustively for occurrences of strings matching the modified regular expression. Substrings of a matching string corresponding to variable captures are extracted and replace variable references in the corresponding replacement text, and this text replaces the matching string in the DSLR text.
-
Sections of the DSLR text between "when" and "then", and "then" and "end", respectively, are located and processed in a uniform manner, line by line, as described below.
For a line, each DSL entry pertaining to the line’s section is taken in turn, in the order it appears in the DSL file. Its regular expression part is modified: white space is replaced by a pattern matching any number of white space characters; variable definitions with a regular expression are replaced by a capture with this regular expression, its default being ".*?". If the resulting regular expression matches all or part of the line, the matched part is replaced by the suitably modified replacement text.
Modification of the replacement text is done by replacing variable references with the text corresponding to the regular expression capture. This text may be modified according to the string transformation function given in the variable reference; see below for details.
If there is a variable reference naming a variable that is not defined in the same entry, the expander substitutes a value bound to a variable of that name, provided it was defined in one of the preceding lines of the current rule.
-
If a DSLR line in a condition is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a pattern CE, i.e., a type name followed by a pair of parentheses. If this pair is empty, the expanded line (which should contain a valid constraint) is simply inserted, otherwise a comma (",") is inserted beforehand.
If a DSLR line in a consequence is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a "modify" statement, ending in a pair of braces ("{" and "}"). If this pair is empty, the expanded line (which should contain a valid method call) is simply inserted, otherwise a comma (",") is inserted beforehand.
It is currently not possible to use a line with a leading hyphen to insert text into other conditional element forms (e.g., "accumulate") or it may only work for the first insertion (e.g., "eval").
All string transformation functions are described in the following table.
Name | Description |
---|---|
uc | Converts all letters to upper case. |
lc | Converts all letters to lower case. |
ucfirst | Converts the first letter to upper case, and all other letters to lower case. |
num | Extracts all digits and "-" from the string. If the last two digits in the original string are preceded by "." or ",", a decimal period is inserted in the corresponding position. |
a?b/c | Compares the string with string a, and if they are equal, replaces it with b, otherwise with c. But c can be another triplet a, b, c, so that the entire structure is, in fact, a translation table. |
The following DSL examples show how to use string transformation functions.
# definitions for conditions
[when][]There is an? {entity}=${entity!lc}: {entity!ucfirst}()
[when][]- with an? {attr} greater than {amount}={attr} > {amount!num}
[when][]- with a {what} {attr}={attr} {what!positive?>0/negative?<0/zero?==0/ERROR}
A file containing a DSL definition has to be put under the resources folder or any of its subfolders like any other drools artifact.
It must have the extension .dsl, or alternatively be marked with type ResourceType.DSL when programmatically added to a KieFileSystem.
For a file using a DSL definition, the extension .dslr should be used, while it can be added to a KieFileSystem with type ResourceType.DSLR.
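As a hedged sketch of adding these resources programmatically (the file paths are hypothetical; KieServices, KieFileSystem, and ResourceType come from the org.kie.api packages):
KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
// the DSL definition, explicitly typed in case the file extension is not .dsl
kfs.write( ks.getResources().newFileSystemResource( "rules/approval.dsl" )
             .setResourceType( ResourceType.DSL ) );
// the rule file written against the DSL, typed as DSLR
kfs.write( ks.getResources().newFileSystemResource( "rules/approval.dslr" )
             .setResourceType( ResourceType.DSLR ) );
ks.newKieBuilder( kfs ).buildAll();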
For parsing and expanding a DSLR file the DSL configuration is read and supplied to the parser. Thus, the parser can "recognize" the DSL expressions and transform them into native rule language expressions.
6. Decision Model and Notation (DMN)
6.1. Decision Model and Notation (DMN)
Decision Model and Notation (DMN) is a standard established by the Object Management Group (OMG) for describing and modeling operational decisions. DMN defines an XML schema that enables DMN models to be shared between DMN-compliant platforms and across organizations so that business analysts and business rules developers can collaborate in designing and implementing DMN decision services. The DMN standard is similar to and can be used together with the Business Process Model and Notation (BPMN) standard for designing and modeling business processes.
For more information about the background and applications of DMN, see the OMG Decision Model and Notation specification.
6.1.1. DMN conformance levels
The DMN specification defines three incremental levels of conformance in a software implementation. A product that claims compliance at one level must also be compliant with any preceding levels. For example, a conformance level 3 implementation must also include the supported components in conformance levels 1 and 2. For the formal definitions of each conformance level, see the OMG Decision Model and Notation specification.
The following list summarizes the three DMN conformance levels:
- Conformance level 1
-
A DMN conformance level 1 implementation supports decision requirement diagrams (DRDs), decision logic, and decision tables, but decision models are not executable. Any language can be used to define the expressions, including natural, unstructured languages.
- Conformance level 2
-
A DMN conformance level 2 implementation includes the requirements in conformance level 1, and supports Simplified Friendly Enough Expression Language (S-FEEL) expressions and fully executable decision models.
- Conformance level 3
-
A DMN conformance level 3 implementation includes the requirements in conformance levels 1 and 2, and supports Friendly Enough Expression Language (FEEL) expressions, the full set of boxed expressions, and fully executable decision models.
Drools provides design and runtime support for DMN 1.2 models at conformance level 3. You can design your DMN models directly in Business Central or import existing DMN models into your Drools projects for deployment and execution.
6.1.2. DMN decision requirements diagram (DRD) components
A decision requirements diagram (DRD) is a visual representation of your DMN model. This diagram consists of one or more decision requirements graphs (DRGs) that represent a particular domain of an overall DRD. The DRGs trace business decisions using decision nodes, business knowledge models, sources of business knowledge, input data, and decision services.
The following table summarizes the components in a DRD:
Category | Component | Description |
---|---|---|
Elements | Decision | Node where one or more input elements determine an output based on defined decision logic. |
Elements | Business knowledge model | Reusable function with one or more decision elements. Decisions that have the same logic but depend on different sub-input data or sub-decisions use business knowledge models to determine which procedure to follow. |
Elements | Knowledge source | External authorities, documents, committees, or policies that regulate a decision or business knowledge model. Knowledge sources are references to real-world factors rather than executable business rules. |
Elements | Input data | Information used in a decision node or a business knowledge model. Input data usually includes business-level concepts or objects relevant to the business, such as loan applicant data used in a lending strategy. |
Elements | Decision service | Top-level decision containing a set of reusable decisions published as a service for invocation. A decision service can be invoked from an external application or a BPMN business process. |
Requirement connectors | Information requirement | Connection from an input data node or decision node to another decision node that requires the information. |
Requirement connectors | Knowledge requirement | Connection from a business knowledge model to a decision node or to another business knowledge model that invokes the decision logic. |
Requirement connectors | Authority requirement | Connection from an input data node or a decision node to a dependent knowledge source or from a knowledge source to a decision node, business knowledge model, or another knowledge source. |
Artifacts | Text annotation | Explanatory note associated with an input data node, decision node, business knowledge model, or knowledge source. |
Artifacts | Association | Connection from an input data node, decision node, business knowledge model, or knowledge source to a text annotation. |
The following table summarizes the permitted connectors between DRD elements:
Starts from | Connects to | Connection type |
---|---|---|
Decision | Decision | Information requirement |
Business knowledge model | Decision, Business knowledge model | Knowledge requirement |
Decision service | Decision, Business knowledge model | Knowledge requirement |
Input data | Decision | Information requirement |
Input data | Knowledge source | Authority requirement |
Knowledge source | Decision, Business knowledge model, Knowledge source | Authority requirement |
Decision | Text annotation | Association |
Business knowledge model | Text annotation | Association |
Knowledge source | Text annotation | Association |
Input data | Text annotation | Association |
The following example DRD illustrates some of these DMN components in practice:
The following example DRD illustrates DMN components that are part of a reusable decision service:
In a DMN decision service node, the decision nodes in the bottom segment incorporate input data from outside of the decision service to arrive at a final decision in the top segment of the decision service node. The resulting top-level decisions from the decision service are then implemented in any subsequent decisions or business knowledge requirements of the DMN model. You can reuse DMN decision services in other DMN models to apply the same decision logic with different input data and different outgoing connections.
6.1.3. Rule expressions in FEEL
Friendly Enough Expression Language (FEEL) is an expression language defined by the Object Management Group (OMG) DMN specification. FEEL expressions define the logic of a decision in a DMN model. FEEL is designed to facilitate both decision modeling and execution by assigning semantics to the decision model constructs. FEEL expressions in decision requirements diagrams (DRDs) occupy table cells in boxed expressions for decision nodes and business knowledge models.
For more information about FEEL in DMN, see the OMG Decision Model and Notation specification.
6.1.3.1. Variable and function names in FEEL
Unlike many traditional expression languages, Friendly Enough Expression Language (FEEL) supports spaces and a few special characters as part of variable and function names. A FEEL name must start with a letter
, ?
, or _
element. The unicode letter characters are also allowed. Variable names cannot start with a language keyword, such as and
, true
, or every
. The remaining characters in a variable name can be any of the starting characters, as well as digits
, white spaces, and special characters such as +
, -
, /
, *
, '
, and .
.
For example, the following names are all valid FEEL names:
-
Age
-
Birth Date
-
Flight 234 pre-check procedure
Several limitations apply to variable and function names in FEEL:
- Ambiguity
-
The use of spaces, keywords, and other special characters as part of names can make FEEL ambiguous. The ambiguities are resolved in the context of the expression, matching names from left to right. The parser resolves the variable name as the longest name matched in scope. You can use
( )
to disambiguate names if necessary. - Spaces in names
-
The DMN specification limits the use of spaces in FEEL names. According to the DMN specification, names can contain multiple spaces but not two consecutive spaces.
In order to make the language easier to use and avoid common errors due to spaces, Drools removes the limitation on the use of consecutive spaces. Drools supports variable names with any number of consecutive spaces, but normalizes them into a single space. For example, the variable references
First Name
with one space andFirst Name
with two spaces are both acceptable in Drools.Drools also normalizes the use of other white spaces, like the non-breakable white space that is common in web pages, tabs, and line breaks. From a Drools FEEL engine perspective, all of these characters are normalized into a single white space before processing.
- The keyword
in
-
The keyword
in
is the only keyword in the language that cannot be used as part of a variable name. Although the specifications allow the use of keywords in the middle of variable names, the use ofin
in variable names conflicts with the grammar definition offor
,every
andsome
expression constructs.
6.1.3.2. Data types in FEEL
Friendly Enough Expression Language (FEEL) supports the following data types:
-
Numbers
-
Strings
-
Boolean values
-
Dates
-
Time
-
Date and time
-
Days and time duration
-
Years and months duration
-
Functions
-
Contexts
-
Ranges (or intervals)
-
Lists
The DMN specification currently does not provide an explicit way of declaring a variable as a function, context, range, or list, but Drools extends the DMN built-in types to support variables of these types.
The following list describes each data type:
- Numbers
-
Numbers in FEEL are based on the IEEE 754-2008 Decimal 128 format, with 34 digits of precision. Internally, numbers are represented in Java as
BigDecimals
withMathContext DECIMAL128
. FEEL supports only one number data type, so the same type is used to represent both integers and floating point numbers.FEEL numbers use a dot (
.
) as a decimal separator. FEEL does not support-INF
,+INF
, orNaN
. FEEL usesnull
to represent invalid numbers.Drools extends the DMN specification and supports additional number notations:
-
Scientific: You can use scientific notation with the suffix
e<exp>
orE<exp>
. For example,1.2e3
is the same as writing the expression1.2*10**3
, but is a literal instead of an expression. -
Hexadecimal: You can use hexadecimal numbers with the prefix
0x
. For example,0xff
is the same as the decimal number255
. Both uppercase and lowercase letters are supported. For example,0XFF
is the same as0xff
. -
Type suffixes: You can use the type suffixes
f
,F
,d
,D
,l
, andL
. These suffixes are ignored.
- Strings
-
Strings in FEEL are any sequence of characters delimited by double quotation marks.
Example:
"John Doe"
- Boolean values
-
FEEL uses three-valued boolean logic, so a boolean logic expression may have values
true
,false
, ornull
. - Dates
-
Date literals are not supported in FEEL, but you can use the built-in
date()
function to construct date values. Date strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document. The format is"YYYY-MM-DD"
whereYYYY
is the year with four digits,MM
is the number of the month with two digits, andDD
is the number of the day.Example:
date( "2017-06-23" )
Date objects have time equal to
"00:00:00"
, which is midnight. The dates are considered to be local, without a timezone. - Time
-
Time literals are not supported in FEEL, but you can use the built-in
time()
function to construct time values. Time strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document. The format is"hh:mm:ss[.uuu][(+-)hh:mm]"
wherehh
is the hour of the day (from00
to23
),mm
is the minutes in the hour, andss
is the number of seconds in the minute. Optionally, the string may define the number of milliseconds (uuu
) within the second and contain a positive (+
) or negative (-
) offset from UTC time to define its timezone. Instead of using an offset, you can use the letterz
to represent the UTC time, which is the same as an offset of-00:00
. If no offset is defined, the time is considered to be local.Examples:
time( "04:25:12" ) time( "14:10:00+02:00" ) time( "22:35:40.345-05:00" ) time( "15:00:30z" )
Time values that define an offset or a timezone cannot be compared to local times that do not define an offset or a timezone.
- Date and time
-
Date and time literals are not supported in FEEL, but you can use the built-in
date and time()
function to construct date and time values. Date and time strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document. The format is"<date>T<time>"
, where<date>
and<time>
follow the prescribed XML schema formatting, conjoined byT
.Examples:
date and time( "2017-10-22T23:59:00" ) date and time( "2017-06-13T14:10:00+02:00" ) date and time( "2017-02-05T22:35:40.345-05:00" ) date and time( "2017-06-13T15:00:30z" )
Date and time values that define an offset or a timezone cannot be compared to local date and time values that do not define an offset or a timezone.
If your implementation of the DMN specification does not support spaces in the XML schema, use the keyword dateTime
as a synonym ofdate and time
. - Days and time duration
-
Days and time duration literals are not supported in FEEL, but you can use the built-in
duration()
function to construct days and time duration values. Days and time duration strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document, but are restricted to only days, hours, minutes and seconds. Months and years are not supported.Examples:
duration( "P1DT23H12M30S" ) duration( "P23D" ) duration( "PT12H" ) duration( "PT35M" )
If your implementation of the DMN specification does not support spaces in the XML schema, use the keyword dayTimeDuration
as a synonym ofdays and time duration
. - Years and months duration
-
Years and months duration literals are not supported in FEEL, but you can use the built-in
duration()
function to construct days and time duration values. Years and months duration strings in FEEL follow the format defined in the XML Schema Part 2: Datatypes document, but are restricted to only years and months. Days, hours, minutes, or seconds are not supported.Examples:
duration( "P3Y5M" ) duration( "P2Y" ) duration( "P10M" ) duration( "P25M" )
If your implementation of the DMN specification does not support spaces in the XML schema, use the keyword yearMonthDuration
as a synonym ofyears and months duration
. - Functions
-
FEEL has
function
literals (or anonymous functions) that you can use to create functions. The DMN specification currently does not provide an explicit way of declaring a variable as afunction
, but Drools extends the DMN built-in types to support variables of functions.Example:
function(a, b) a + b
In this example, the FEEL expression creates a function that adds the parameters
a
andb
and returns the result. - Contexts
-
FEEL has
context
literals that you can use to create contexts. Acontext
in FEEL is a list of key and value pairs, similar to maps in languages like Java. The DMN specification currently does not provide an explicit way of declaring a variable as acontext
, but Drools extends the DMN built-in types to support variables of contexts.Example:
{ x : 5, y : 3 }
In this example, the expression creates a context with two entries,
x
andy
, representing a coordinate in a chart.In DMN 1.2, another way to create contexts is to create an item definition that contains the list of keys as attributes, and then declare the variable as having that item definition type.
The Drools DMN API supports DMN
ItemDefinition
structural types in aDMNContext
represented in two ways:-
User-defined Java type: Must be a valid JavaBeans object defining properties and getters for each of the components in the DMN
ItemDefinition
. If necessary, you can also use the@FEELProperty
annotation for those getters representing a component name which would result in an invalid Java identifier. -
java.util.Map
interface: The map needs to define the appropriate entries, with the keys corresponding to the component name in the DMNItemDefinition
.
- Ranges (or intervals)
-
FEEL has
range
literals that you can use to create ranges or intervals. Arange
in FEEL is a value that defines a lower and an upper bound, where either can be open or closed. The DMN specification currently does not provide an explicit way of declaring a variable as arange
, but Drools extends the DMN built-in types to support variables of ranges.The syntax of a range is defined in the following formats:
range := interval_start endpoint '..' endpoint interval_end interval_start := open_start | closed_start open_start := '(' | ']' closed_start := '[' interval_end := open_end | closed_end open_end := ')' | '[' closed_end := ']' endpoint := expression
The expression for the endpoint must return a comparable value, and the lower bound endpoint must be lower than the upper bound endpoint.
For example, the following literal expression defines an interval between
1
and10
, including the boundaries (a closed interval on both endpoints):[ 1 .. 10 ]
The following literal expression defines an interval between 1 hour and 12 hours, including the lower boundary (a closed interval), but excluding the upper boundary (an open interval):
[ duration("PT1H") .. duration("PT12H") ]
You can use ranges in decision tables to test for ranges of values, or use ranges in simple literal expressions. For example, the following literal expression returns
true
if the value of a variablex
is between0
and100
:x in [ 1 .. 100 ]
- Lists
-
FEEL has
list
literals that you can use to create lists of items. Alist
in FEEL is represented by a comma-separated list of values enclosed in square brackets. The DMN specification currently does not provide an explicit way of declaring a variable as alist
, but Drools extends the DMN built-in types to support variables of lists.Example:
[ 2, 3, 4, 5 ]
All lists in FEEL contain elements of the same type and are immutable. Elements in a list can be accessed by index, where the first element is
1
. Negative indexes can access elements starting from the end of the list so that-1
is the last element.For example, the following expression returns the second element of a list
x
:x[2]
The following expression returns the second-to-last element of a list
x
:x[-2]
Elements in a list can also be counted by the function
count
, which uses the list of elements as the parameter.For example, the following expression returns
4
:count([ 2, 3, 4, 5 ])
6.1.4. DMN decision logic in boxed expressions
Boxed expressions in DMN are tables that you use to define the underlying logic of decision nodes and business knowledge models in a decision requirements diagram (DRD) or decision requirements graph (DRG). Some boxed expressions can contain other boxed expressions, but the top-level boxed expression corresponds to the decision logic of a single DRD artifact. While DRDs with one or more DRGs represent the flow of a DMN decision model, boxed expressions define the actual decision logic of individual nodes. DRDs and boxed expressions together form a complete and functional DMN decision model.
The following are the types of DMN boxed expressions:
-
Decision tables
-
Literal expressions
-
Contexts
-
Relations
-
Functions
-
Invocations
-
Lists
Drools does not provide boxed list expressions in Business Central, but supports a FEEL list data type that you can use in boxed literal expressions. For more information about the list data type and other FEEL data types in Drools, see Data types in FEEL.
All Friendly Enough Expression Language (FEEL) expressions that you use in your boxed expressions must conform to the FEEL syntax requirements in the OMG Decision Model and Notation specification.
6.1.4.1. DMN decision tables
A decision table in DMN is a visual representation of one or more business rules in a tabular format. You use decision tables to define rules for a decision node that applies those rules at a given point in the decision model. Each rule consists of a single row in the table, and includes columns that define the conditions (input) and outcome (output) for that particular row. The definition of each row is precise enough to derive the outcome using the values of the conditions. Input and output values can be FEEL expressions or defined data type values.
For example, the following decision table determines credit score ratings based on a defined range of a loan applicant’s credit score:
The following decision table determines the next step in a lending strategy for applicants depending on applicant loan eligibility and the bureau call type:
The following decision table determines applicant qualification for a loan as the concluding decision node in a loan prequalification decision model:
Decision tables are a popular way of modeling rules and decision logic, and are used in many methodologies (such as DMN) and implementation frameworks (such as Drools).
Drools supports both DMN decision tables and Drools-native decision tables, but they are different types of assets with different syntax requirements and are not interchangeable. For more information about Drools-native decision tables in Drools, see Spreadsheet decision tables.
Hit policies in DMN decision tables
Hit policies determine how to reach an outcome when multiple rules in a decision table match the provided input values. For example, if one rule in a decision table applies a sales discount to military personnel and another rule applies a discount to students, then when a customer is both a student and in the military, the decision table hit policy must indicate whether to apply one discount or the other (Unique, First) or both discounts (Collect Sum). You specify the single character of the hit policy (U, F, C+) in the upper-left corner of the decision table.
The following decision table hit policies are supported in DMN:
-
Unique (U): Permits only one rule to match. Any overlap raises an error.
-
Any (A): Permits multiple rules to match, but they must all have the same output. If multiple matching rules do not have the same output, an error is raised.
-
Priority (P): Permits multiple rules to match, with different outputs. The output that comes first in the output values list is selected.
-
First (F): Uses the first match in rule order.
-
Collect (C+, C>, C<, C#): Aggregates output from multiple rules based on an aggregation function.
-
Collect (C): Aggregates values in an arbitrary list.
-
Collect Sum (C+): Outputs the sum of all collected values. Values must be numeric.
-
Collect Min (C<): Outputs the minimum value among the matches. The resulting values must be comparable, such as numbers, dates, or text (lexicographic order).
-
Collect Max (C>): Outputs the maximum value among the matches. The resulting values must be comparable, such as numbers, dates or text (lexicographic order).
-
Collect Count (C#): Outputs the number of matching rules.
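As an illustrative sketch only, the hit policy and, for the Collect policy, the aggregation function are declared directly on the decision table element in the DMN XML source (the same schema used for the flight-rebooking example later in this chapter); the surrounding input, output, and rule elements are omitted here:
<dmn:decisionTable hitPolicy="COLLECT" aggregation="SUM">
  <!-- input, output, and rule definitions omitted -->
</dmn:decisionTable>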
6.1.4.2. Boxed literal expressions
A boxed literal expression in DMN is a literal FEEL expression as text in a table cell, typically with a labeled column and an assigned data type. You use boxed literal expressions to define simple or complex node logic or decision data directly in FEEL for a particular node in a decision. Literal FEEL expressions must conform to FEEL syntax requirements in the OMG Decision Model and Notation specification.
For example, the following boxed literal expression defines the minimum acceptable PITI calculation (principal, interest, taxes, and insurance) in a lending decision, where acceptable rate
is a variable defined in the DMN model:
The following boxed literal expression sorts a list of possible dating candidates (soul mates) in an online dating application based on their score on criteria such as age, location, and interests:
6.1.4.3. Boxed context expressions
A boxed context expression in DMN is a set of variable names and values with a result value. Each name-value pair is a context entry. You use context expressions to represent data definitions in decision logic and set a value for a desired decision element within the DMN decision model. A value in a boxed context expression can be a data type value or FEEL expression, or can contain a nested sub-expression of any type, such as a decision table, a literal expression, or another context expression.
For example, the following boxed context expression defines the factors for sorting delayed passengers in a flight-rebooking decision model, based on defined data types (tPassengerTable
, tFlightNumberList
):
The following boxed context expression defines the factors that determine whether a loan applicant can meet minimum mortgage payments based on principal, interest, taxes, and insurance (PITI), represented as a front-end ratio calculation with a sub-context expression:
6.1.4.4. Boxed relation expressions
A boxed relation expression in DMN is a traditional data table with information about given entities, listed as rows. You use boxed relation tables to define decision data for relevant entities in a decision at a particular node. Boxed relation expressions are similar to context expressions in that they set variable names and values, but relation expressions contain no result value and list all variable values based on a single defined variable in each column.
For example, the following boxed relation expression provides information about employees in an employee rostering decision:
6.1.4.5. Boxed function expressions
A boxed function expression in DMN is a parameterized boxed expression containing a literal FEEL expression, a nested context expression of an external JAVA or PMML function, or a nested boxed expression of any type. By default, all business knowledge models are defined as boxed function expressions. You use boxed function expressions to call functions on your decision logic and to define all business knowledge models.
For example, the following boxed function expression determines airline flight capacity in a flight-rebooking decision model:
The following boxed function expression contains a basic Java function as a context expression for determining absolute value in a decision model calculation:
The following boxed function expression determines a monthly mortgage installment as a business knowledge model in a lending decision, with the function value defined as a nested context expression:
6.1.4.6. Boxed invocation expressions
A boxed invocation expression in DMN is a boxed expression that invokes a business knowledge model. A boxed invocation expression contains the name of the business knowledge model to be invoked and a list of parameter bindings. Each binding is represented by two boxed expressions on a row: The box on the left contains the name of a parameter and the box on the right contains the binding expression whose value is assigned to the parameter to evaluate the invoked business knowledge model. You use boxed invocations to invoke at a particular decision node a business knowledge model defined in the decision model.
For example, the following boxed invocation expression invokes a Reassign Next Passenger
business knowledge model as the concluding decision node in a flight-rebooking decision model:
The following boxed invocation expression invokes an InstallmentCalculation
business knowledge model to calculate a monthly installment amount for a loan before proceeding to affordability decisions:
6.1.5. DMN model example
The following is a real-world DMN model example that demonstrates how you can use decision modeling to reach a decision based on input data, circumstances, and company guidelines. In this scenario, a flight from San Diego to New York is canceled, requiring the affected airline to find alternate arrangements for its inconvenienced passengers.
First, the airline collects the information necessary to determine how best to get the travelers to their destinations:
- Input data
-
-
List of flights
-
List of passengers
-
- Decisions
-
-
Prioritize the passengers who will get seats on a new flight
-
Determine which flights those passengers will be offered
-
- Business knowledge models
-
-
The company process for determining passenger priority
-
Any flights that have space available
-
Company rules for determining how best to reassign inconvenienced passengers
-
The airline then uses the DMN standard to model its decision process in the following decision requirements diagram (DRD) for determining the best rebooking solution:
Similar to flowcharts, DRDs use shapes to represent the different elements in a process. Ovals contain the two necessary input data, rectangles contain the decision points in the model, and rectangles with clipped corners (business knowledge models) contain reusable logic that can be repeatedly invoked.
The DRD draws logic for each element from boxed expressions that provide variable definitions using FEEL expressions or data type values.
Some boxed expressions are basic, such as the following decision for establishing a prioritized waiting list:
Some boxed expressions are more complex with greater detail and calculation, such as the following business knowledge model for reassigning the next delayed passenger:
The following is the DMN source file for this decision model:
<dmn:definitions xmlns="https://www.drools.org/kie-dmn/Flight-rebooking" xmlns:dmn="http://www.omg.org/spec/DMN/20151101/dmn.xsd" xmlns:feel="http://www.omg.org/spec/FEEL/20140401" id="_0019_flight_rebooking" name="0019-flight-rebooking" namespace="https://www.drools.org/kie-dmn/Flight-rebooking">
<dmn:itemDefinition id="_tFlight" name="tFlight">
<dmn:itemComponent id="_tFlight_Flight" name="Flight Number">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tFlight_From" name="From">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tFlight_To" name="To">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tFlight_Dep" name="Departure">
<dmn:typeRef>feel:dateTime</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tFlight_Arr" name="Arrival">
<dmn:typeRef>feel:dateTime</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tFlight_Capacity" name="Capacity">
<dmn:typeRef>feel:number</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tFlight_Status" name="Status">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
</dmn:itemDefinition>
<dmn:itemDefinition id="_tFlightTable" isCollection="true" name="tFlightTable">
<dmn:typeRef>tFlight</dmn:typeRef>
</dmn:itemDefinition>
<dmn:itemDefinition id="_tPassenger" name="tPassenger">
<dmn:itemComponent id="_tPassenger_Name" name="Name">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tPassenger_Status" name="Status">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tPassenger_Miles" name="Miles">
<dmn:typeRef>feel:number</dmn:typeRef>
</dmn:itemComponent>
<dmn:itemComponent id="_tPassenger_Flight" name="Flight Number">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemComponent>
</dmn:itemDefinition>
<dmn:itemDefinition id="_tPassengerTable" isCollection="true" name="tPassengerTable">
<dmn:typeRef>tPassenger</dmn:typeRef>
</dmn:itemDefinition>
<dmn:itemDefinition id="_tFlightNumberList" isCollection="true" name="tFlightNumberList">
<dmn:typeRef>feel:string</dmn:typeRef>
</dmn:itemDefinition>
<dmn:inputData id="i_Flight_List" name="Flight List">
<dmn:variable name="Flight List" typeRef="tFlightTable"/>
</dmn:inputData>
<dmn:inputData id="i_Passenger_List" name="Passenger List">
<dmn:variable name="Passenger List" typeRef="tPassengerTable"/>
</dmn:inputData>
<dmn:decision name="Prioritized Waiting List" id="d_PrioritizedWaitingList">
<dmn:variable name="Prioritized Waiting List" typeRef="tPassengerTable"/>
<dmn:informationRequirement>
<dmn:requiredInput href="#i_Passenger_List"/>
</dmn:informationRequirement>
<dmn:informationRequirement>
<dmn:requiredInput href="#i_Flight_List"/>
</dmn:informationRequirement>
<dmn:knowledgeRequirement>
<dmn:requiredKnowledge href="#b_PassengerPriority"/>
</dmn:knowledgeRequirement>
<dmn:context>
<dmn:contextEntry>
<dmn:variable name="Cancelled Flights" typeRef="tFlightNumberList"/>
<dmn:literalExpression>
<dmn:text>Flight List[ Status = "cancelled" ].Flight Number</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Waiting List" typeRef="tPassengerTable"/>
<dmn:literalExpression>
<dmn:text>Passenger List[ list contains( Cancelled Flights, Flight Number ) ]</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:literalExpression>
<dmn:text>sort( Waiting List, passenger priority )</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
</dmn:context>
</dmn:decision>
<dmn:decision name="Rebooked Passengers" id="d_RebookedPassengers">
<dmn:variable name="Rebooked Passengers" typeRef="tPassengerTable"/>
<dmn:informationRequirement>
<dmn:requiredDecision href="#d_PrioritizedWaitingList"/>
</dmn:informationRequirement>
<dmn:informationRequirement>
<dmn:requiredInput href="#i_Flight_List"/>
</dmn:informationRequirement>
<dmn:knowledgeRequirement>
<dmn:requiredKnowledge href="#b_ReassignNextPassenger"/>
</dmn:knowledgeRequirement>
<dmn:invocation>
<dmn:literalExpression>
<dmn:text>reassign next passenger</dmn:text>
</dmn:literalExpression>
<dmn:binding>
<dmn:parameter name="Waiting List"/>
<dmn:literalExpression>
<dmn:text>Prioritized Waiting List</dmn:text>
</dmn:literalExpression>
</dmn:binding>
<dmn:binding>
<dmn:parameter name="Reassigned Passengers List"/>
<dmn:literalExpression>
<dmn:text>[]</dmn:text>
</dmn:literalExpression>
</dmn:binding>
<dmn:binding>
<dmn:parameter name="Flights"/>
<dmn:literalExpression>
<dmn:text>Flight List</dmn:text>
</dmn:literalExpression>
</dmn:binding>
</dmn:invocation>
</dmn:decision>
<dmn:businessKnowledgeModel id="b_PassengerPriority" name="passenger priority">
<dmn:encapsulatedLogic>
<dmn:formalParameter name="Passenger1" typeRef="tPassenger"/>
<dmn:formalParameter name="Passenger2" typeRef="tPassenger"/>
<dmn:decisionTable hitPolicy="UNIQUE">
<dmn:input id="b_Passenger_Priority_dt_i_P1_Status" label="Passenger1.Status">
<dmn:inputExpression typeRef="feel:string">
<dmn:text>Passenger1.Status</dmn:text>
</dmn:inputExpression>
<dmn:inputValues>
<dmn:text>"gold", "silver", "bronze"</dmn:text>
</dmn:inputValues>
</dmn:input>
<dmn:input id="b_Passenger_Priority_dt_i_P2_Status" label="Passenger2.Status">
<dmn:inputExpression typeRef="feel:string">
<dmn:text>Passenger2.Status</dmn:text>
</dmn:inputExpression>
<dmn:inputValues>
<dmn:text>"gold", "silver", "bronze"</dmn:text>
</dmn:inputValues>
</dmn:input>
<dmn:input id="b_Passenger_Priority_dt_i_P1_Miles" label="Passenger1.Miles">
<dmn:inputExpression typeRef="feel:string">
<dmn:text>Passenger1.Miles</dmn:text>
</dmn:inputExpression>
</dmn:input>
<dmn:output id="b_Status_Priority_dt_o" label="Passenger1 has priority">
<dmn:outputValues>
<dmn:text>true, false</dmn:text>
</dmn:outputValues>
<dmn:defaultOutputEntry>
<dmn:text>false</dmn:text>
</dmn:defaultOutputEntry>
</dmn:output>
<dmn:rule id="b_Passenger_Priority_dt_r1">
<dmn:inputEntry id="b_Passenger_Priority_dt_r1_i1">
<dmn:text>"gold"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r1_i2">
<dmn:text>"gold"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r1_i3">
<dmn:text>>= Passenger2.Miles</dmn:text>
</dmn:inputEntry>
<dmn:outputEntry id="b_Passenger_Priority_dt_r1_o1">
<dmn:text>true</dmn:text>
</dmn:outputEntry>
</dmn:rule>
<dmn:rule id="b_Passenger_Priority_dt_r2">
<dmn:inputEntry id="b_Passenger_Priority_dt_r2_i1">
<dmn:text>"gold"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r2_i2">
<dmn:text>"silver","bronze"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r2_i3">
<dmn:text>-</dmn:text>
</dmn:inputEntry>
<dmn:outputEntry id="b_Passenger_Priority_dt_r2_o1">
<dmn:text>true</dmn:text>
</dmn:outputEntry>
</dmn:rule>
<dmn:rule id="b_Passenger_Priority_dt_r3">
<dmn:inputEntry id="b_Passenger_Priority_dt_r3_i1">
<dmn:text>"silver"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r3_i2">
<dmn:text>"silver"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r3_i3">
<dmn:text>>= Passenger2.Miles</dmn:text>
</dmn:inputEntry>
<dmn:outputEntry id="b_Passenger_Priority_dt_r3_o1">
<dmn:text>true</dmn:text>
</dmn:outputEntry>
</dmn:rule>
<dmn:rule id="b_Passenger_Priority_dt_r4">
<dmn:inputEntry id="b_Passenger_Priority_dt_r4_i1">
<dmn:text>"silver"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r4_i2">
<dmn:text>"bronze"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r4_i3">
<dmn:text>-</dmn:text>
</dmn:inputEntry>
<dmn:outputEntry id="b_Passenger_Priority_dt_r4_o1">
<dmn:text>true</dmn:text>
</dmn:outputEntry>
</dmn:rule>
<dmn:rule id="b_Passenger_Priority_dt_r5">
<dmn:inputEntry id="b_Passenger_Priority_dt_r5_i1">
<dmn:text>"bronze"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r5_i2">
<dmn:text>"bronze"</dmn:text>
</dmn:inputEntry>
<dmn:inputEntry id="b_Passenger_Priority_dt_r5_i3">
<dmn:text>>= Passenger2.Miles</dmn:text>
</dmn:inputEntry>
<dmn:outputEntry id="b_Passenger_Priority_dt_r5_o1">
<dmn:text>true</dmn:text>
</dmn:outputEntry>
</dmn:rule>
</dmn:decisionTable>
</dmn:encapsulatedLogic>
<dmn:variable name="passenger priority" typeRef="feel:boolean"/>
</dmn:businessKnowledgeModel>
<dmn:businessKnowledgeModel id="b_ReassignNextPassenger" name="reassign next passenger">
<dmn:encapsulatedLogic>
<dmn:formalParameter name="Waiting List" typeRef="tPassengerTable"/>
<dmn:formalParameter name="Reassigned Passengers List" typeRef="tPassengerTable"/>
<dmn:formalParameter name="Flights" typeRef="tFlightTable"/>
<dmn:context>
<dmn:contextEntry>
<dmn:variable name="Next Passenger" typeRef="tPassenger"/>
<dmn:literalExpression>
<dmn:text>Waiting List[1]</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Original Flight" typeRef="tFlight"/>
<dmn:literalExpression>
<dmn:text>Flights[ Flight Number = Next Passenger.Flight Number ][1]</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Best Alternate Flight" typeRef="tFlight"/>
<dmn:literalExpression>
<dmn:text>Flights[ From = Original Flight.From and To = Original Flight.To and Departure > Original Flight.Departure and Status = "scheduled" and has capacity( item, Reassigned Passengers List ) ][1]</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Reassigned Passenger" typeRef="tPassenger"/>
<dmn:context>
<dmn:contextEntry>
<dmn:variable name="Name" typeRef="feel:string"/>
<dmn:literalExpression>
<dmn:text>Next Passenger.Name</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Status" typeRef="feel:string"/>
<dmn:literalExpression>
<dmn:text>Next Passenger.Status</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Miles" typeRef="feel:number"/>
<dmn:literalExpression>
<dmn:text>Next Passenger.Miles</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Flight Number" typeRef="feel:string"/>
<dmn:literalExpression>
<dmn:text>Best Alternate Flight.Flight Number</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
</dmn:context>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Remaining Waiting List" typeRef="tPassengerTable"/>
<dmn:literalExpression>
<dmn:text>remove( Waiting List, 1 )</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:variable name="Updated Reassigned Passengers List" typeRef="tPassengerTable"/>
<dmn:literalExpression>
<dmn:text>append( Reassigned Passengers List, Reassigned Passenger )</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
<dmn:contextEntry>
<dmn:literalExpression>
<dmn:text>if count( Remaining Waiting List ) > 0 then reassign next passenger( Remaining Waiting List, Updated Reassigned Passengers List, Flights ) else Updated Reassigned Passengers List</dmn:text>
</dmn:literalExpression>
</dmn:contextEntry>
</dmn:context>
</dmn:encapsulatedLogic>
<dmn:variable name="reassign next passenger" typeRef="tPassengerTable"/>
<dmn:knowledgeRequirement>
<dmn:requiredKnowledge href="#b_HasCapacity"/>
</dmn:knowledgeRequirement>
</dmn:businessKnowledgeModel>
<dmn:businessKnowledgeModel id="b_HasCapacity" name="has capacity">
<dmn:encapsulatedLogic>
<dmn:formalParameter name="flight" typeRef="tFlight"/>
<dmn:formalParameter name="rebooked list" typeRef="tPassengerTable"/>
<dmn:literalExpression>
<dmn:text>flight.Capacity > count( rebooked list[ Flight Number = flight.Flight Number ] )</dmn:text>
</dmn:literalExpression>
</dmn:encapsulatedLogic>
<dmn:variable name="has capacity" typeRef="feel:boolean"/>
</dmn:businessKnowledgeModel>
</dmn:definitions>
6.2. DMN support in Drools
Drools provides design and runtime support for DMN 1.2 models at conformance level 3. You can integrate DMN models with your Drools decision services in several ways:
-
Design your DMN models directly in Business Central using the DMN designer
-
Import DMN files into your project in Business Central (Menu → Design → Projects → Import Asset)
-
Package DMN files as part of your project knowledge JAR (KJAR) file without Business Central
In addition to all DMN conformance level 3 requirements, Drools also includes enhancements and fixes to FEEL and DMN model components to optimize the experience of implementing DMN decision services with Drools. From a platform perspective, DMN models are like any other business asset in Drools, such as DRL files or spreadsheet decision tables, that you can include in your Drools project and deploy to KIE Server in order to start your DMN decision services.
For more information about including external DMN files with your Drools project packaging and deployment method, see Build, Deploy, Utilize and Run.
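For reference, the following is a minimal sketch of the Maven packaging typically used for a KJAR that contains DMN files; the group ID, artifact ID, and version are placeholders, and the plugin version should match the Drools release used in your project:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.acme</groupId>
  <artifactId>my-dmn-kjar</artifactId>
  <version>1.0.0</version>
  <packaging>kjar</packaging>

  <build>
    <plugins>
      <!-- The kie-maven-plugin builds and validates the knowledge assets, including DMN files -->
      <plugin>
        <groupId>org.kie</groupId>
        <artifactId>kie-maven-plugin</artifactId>
        <version>${drools.version}</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>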
6.2.1. FEEL enhancements in Drools
Drools includes the following enhancements and other changes to FEEL in the current DMN implementation:
-
Space Sensitivity: This DMN implementation of the FEEL language is space insensitive. The goal is to avoid non-deterministic behavior based on the context and differences in behavior based on invisible characters, such as white spaces. This means that for this implementation, a variable named
first name
with one space is exactly the same as first name
with two spaces in it. -
List functions or() and and(): The specification defines two list functions named or() and and(). However, according to the FEEL grammar, these are not valid function names, because "and" and "or" are reserved keywords. This implementation renames these functions to any() and all() respectively, in anticipation of DMN 1.2. -
Keyword in cannot be used in variable names: The specification defines that any keyword can be reused as part of a variable name, but the ambiguities caused with the for … in … return loop prevent the reuse of the in keyword. All other keywords are supported as part of variable names. -
Keywords are not supported in attributes of anonymous types: FEEL is not a strongly typed language and the parser must resolve ambiguity in name parts of an attribute of an anonymous type. The parser supports reusable keywords as part of a variable name defined in the scope, but the parser does not support keywords in attributes of an anonymous type. For example,
for item in Order.items return Federal Tax for Item( item )
is a valid and supported FEEL expression, where a function named Federal Tax for Item(…)
can be defined and invoked correctly in the scope. However, the expression
for i in [ {x and y : true, n : 1}, {x and y : false, n: 2} ] return i.x and y
is not supported because anonymous types are defined in the iteration context of the for
expression and the parser cannot resolve the ambiguity. -
Support for date and time literals on ranges: According to the grammar rules #8, #18, #19, #34 and #62,
date and time
literals are supported in ranges (pages 110-111). Chapter 10.3.2.7 on page 114, on the other hand, contradicts the grammar and says they are not supported. This implementation chose to follow the grammar and support date and time
literals on ranges, as well as extend the specification to support any arbitrary expression (see extensions below). -
Invalid time syntax: Chapter 10.3.2.3.4 on page 112 and bullet point about
time
on page 131 both state that the time
string lexical representation follows the XML Schema Datatypes specification as well as ISO 8601. According to the XML Schema specification (https://www.w3.org/TR/xmlschema-2/#time), the lexical representation of a time follows the pattern hh:mm:ss.sss
without any leading character. The DMN specification uses a leading "T" in several examples, which we understand to be a typo and not in accordance with the standard. -
Support for scientific and hexadecimal notations: This implementation supports scientific and hexadecimal notation for numbers. For example,
1.2e5
(scientific notation) and 0xD5
(hexadecimal notation). -
Support for expressions as endpoints in ranges: This implementation supports expressions as endpoints for ranges; see also the sketch after this list. For example,
[date("2016-11-24")..date("2016-11-27")]
-
Support for additional types: The specification only defines the following as basic types of the language:
-
number
-
string
-
boolean
-
days and time duration
-
years and month duration
-
time
-
date and time
For completeness and orthogonality, this implementation also supports the following types:
-
context
-
list
-
range
-
function
-
unary test
-
-
Support for unary tests: For completeness and orthogonality, unary tests are supported as first class citizens in the language. They are functions with an implicit single parameter and can be invoked in the same way as functions. For example,
UnaryTestAsFunction.feel
{ is minor : < 18, Bob is minor : is minor( bob.age ) }
-
Support for additional built-in functions: The following additional functions are supported:
-
now()
: Returns the current local date and time. -
today()
: Returns the current local date. -
decision table()
: Returns a decision table function. Although the specification mentions a decision table function, the definition on page 114 is not implementable as defined, so this implementation provides it as an additional built-in function. -
string( mask, p… )
: Returns a string formatted as per the mask. See Java String.format() for details on the mask syntax. For example, string( "%4.2f", 7.1298 )
returns the string "7.13".
-
-
Support for additional date and time arithmetics: Subtracting two dates returns a day and time duration with the number of days between the two dates, ignoring daylight savings. For example,
DateArithmetic.feel
date( "2017-05-12" ) - date( "2017-04-25" ) = duration( "P17D" )
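As a brief, illustrative sketch using the stand-alone FEEL API introduced earlier (results are noted as comments and may be printed in a different numeric form depending on the engine version), some of these extensions can be tried directly:
import org.kie.dmn.feel.FEEL;

public class FeelExtensionExamples {

    public static void main(String[] args) {
        FEEL feel = FEEL.newInstance();

        // Scientific and hexadecimal notation for numbers
        System.out.println(feel.evaluate("1.2e5"));  // evaluates to the number 120000
        System.out.println(feel.evaluate("0xD5"));   // evaluates to the number 213

        // Expressions as endpoints in a range
        System.out.println(feel.evaluate(
                "date(\"2016-11-25\") in [date(\"2016-11-24\")..date(\"2016-11-27\")]"));  // true
    }
}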
6.2.2. DMN model enhancements in Drools
Drools includes the following enhancements to DMN model support in the current DMN implementation:
-
Support for types with spaces in names: The DMN XML schema defines type refs as QNames, and QNames do not allow spaces. Therefore, it is not possible to use types like FEEL
date and time
,days and time duration
oryears and months duration
. This implementation does parse such typerefs as strings and allows type names with spaces. However, in order to comply with the XML schema, it also adds the following aliases to such types that can be used instead:-
date and time
=dateTime
-
days and time duration
=duration
ordayTimeDuration
-
years and months duration
=duration
oryearMonthDuration
Note that, for the "duration" types, the user can simply use
duration
and the Drools engine will infer the proper duration, either days and time duration
or years and months duration
.
-
-
Lists support heterogeneous element types: Currently this implementation supports lists with heterogeneous element types. This is an experimental extension and does limit the functionality of some functions and filters. This decision will be re-evaluated in the future.
-
TypeRef link between Decision Tables and Item Definitions: On decision tables/input clause, if no values list is defined, the Drools engine automatically checks the type reference and applies the allowed values check if it is defined.
6.2.3. Configurable DMN properties in Drools
Drools provides the following DMN properties that you can configure when you execute your DMN models on KIE Server or on your client application; a sketch of setting these properties programmatically in a client application follows this list:
- org.kie.dmn.strictConformance
-
When enabled, this property disables by default any extensions or profiles provided beyond the DMN standard, such as some helper functions or enhanced features of DMN 1.2 backported into DMN 1.1. You can use this property to configure the Drools engine to support only pure DMN features, such as when running the DMN Technology Compatibility Kit (TCK).
Default value:
false
-Dorg.kie.dmn.strictConformance=true
- org.kie.dmn.runtime.typecheck
-
When enabled, this property enables verification of actual values conforming to their declared types in the DMN model, as input or output of DRD elements. You can use this property to verify whether data supplied to the DMN model or produced by the DMN model is compliant with what is specified in the model.
Default value:
false
-Dorg.kie.dmn.runtime.typecheck=true
- org.kie.dmn.decisionservice.coercesingleton
-
By default, this property makes the result of a decision service that defines a single output decision be the value of that output decision. When disabled, the result of such a decision service is instead a context
with a single entry for that decision. You can use this property to adjust your decision service outputs according to your project requirements. Default value:
true
-Dorg.kie.dmn.decisionservice.coercesingleton=false
- org.kie.dmn.profiles.$PROFILE_NAME
-
When set to a fully qualified Java class name, this property loads a DMN profile onto the Drools engine at start time. You can use this property to implement a predefined DMN profile with supported features different from or beyond the DMN standard. For example, if you are creating DMN models using the Signavio DMN modeller, use this property to implement features from the Signavio DMN profile in your DMN decision service.
-Dorg.kie.dmn.profiles.signavio=org.kie.dmn.signavio.KieDMNSignavioProfile
- org.kie.dmn.compiler.execmodel
-
When enabled, this property enables DMN decision table logic to be compiled into executable rule models during run time. You can use this property to evaluate DMN decision table logic more efficiently. This property is helpful when the executable model compilation was not originally performed during project compile time. Enabling this property may result in added compile time during the first evaluation by the Drools engine, but subsequent compilations are more efficient.
Default value:
false
-Dorg.kie.dmn.compiler.execmodel=true
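For example, in a client application that embeds the Drools DMN engine rather than passing JVM arguments, the same properties can be set programmatically before the KIE container and DMN runtime are created. This is a minimal sketch; the property values shown are only examples:
public class DmnPropertyConfiguration {

    public static void main(String[] args) {
        // Equivalent to -Dorg.kie.dmn.runtime.typecheck=true on the command line.
        // Must be set before the DMN runtime is created so the values are picked up.
        System.setProperty("org.kie.dmn.runtime.typecheck", "true");
        System.setProperty("org.kie.dmn.compiler.execmodel", "true");

        // ... then create the KieContainer and DMNRuntime as shown in
        // "Embedding a DMN call directly in a Java application" later in this chapter.
    }
}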
6.3. Creating and editing DMN models in Business Central
You can use the DMN designer in Business Central to design DMN decision requirements diagrams (DRDs) and define decision logic for a complete and functional DMN decision model. Drools provides design and runtime support for DMN 1.2 models at conformance level 3, and includes enhancements and fixes to FEEL and DMN model components to optimize the experience of implementing DMN decision services with Drools.
-
In Business Central, go to Menu → Design → Projects and click the project name.
-
Create or import a DMN file in your Business Central project.
To create a DMN file, click Add Asset → DMN, enter an informative DMN model name, select the appropriate Package, and click Ok.
To import an existing DMN file, click Import Asset, enter the DMN model name, select the appropriate Package, select the DMN file to upload, and click Ok.
The new DMN file is now listed in the DMN panel of the Project Explorer, and the DMN decision requirements diagram (DRD) canvas appears.
If you imported a DMN file that does not contain layout information, the imported decision requirements diagram (DRD) is formatted automatically in the DMN designer. Click Save in the DMN designer to save the DRD layout. -
Begin adding components to your new or imported DMN decision requirements diagram (DRD) by clicking and dragging one of the DMN nodes from the left toolbar:
Figure 96. Adding DRD componentsThe following DRD components are available:
-
Decision: Use this node for a DMN decision, where one or more input elements determine an output based on defined decision logic.
-
Business knowledge model: Use this node for reusable functions with one or more decision elements. Decisions that have the same logic but depend on different sub-input data or sub-decisions use business knowledge models to determine which procedure to follow.
-
Knowledge source: Use this node for external authorities, documents, committees, or policies that regulate a decision or business knowledge model. Knowledge sources are references to real-world factors rather than executable business rules.
-
Input data: Use this node for information used in a decision node or a business knowledge model. Input data usually includes business-level concepts or objects relevant to the business, such as loan applicant data used in a lending strategy.
-
Text annotation: Use this node for explanatory notes associated with an input data node, decision node, business knowledge model, or knowledge source.
-
Decision service: Use this node to enclose a set of reusable decisions implemented as a decision service for invocation. A decision service can be used in other DMN models and can be invoked from an external application or a BPMN business process.
-
-
In the DMN designer canvas, double-click the new DRD node to enter an informative node name.
-
If the node is a decision or business knowledge model, select the node to display the node options and click the Edit icon to open the DMN boxed expression designer to define the decision logic for the node:
Figure 97. Opening a new decision node boxed expressionFigure 98. Opening a new business knowledge model boxed expressionBy default, all business knowledge models are defined as boxed function expressions containing a literal FEEL expression, a nested context expression of an external JAVA or PMML function, or a nested boxed expression of any type.
For decision nodes, you click the undefined table to select the type of boxed expression you want to use, such as a boxed literal expression, boxed context expression, decision table, or other DMN boxed expression.
Figure 99. Selecting the logic type for a decision nodeFor business knowledge models, you click the top-left function cell to select the function type, or right-click the function value cell, select Clear, and select a boxed expression of another type.
Figure 100. Selecting the function or other logic type for a business knowledge model -
In the selected boxed expression designer for either a decision node (any expression type) or business knowledge model (function expression), click the applicable table cells to define the table name, variable data types, variable names and values, function parameters and bindings, or FEEL expressions to include in the decision logic.
You can right-click cells for additional actions where applicable, such as inserting or removing table rows and columns or clearing table contents.
The following is an example decision table for a decision node that determines credit score ratings based on a defined range of a loan applicant’s credit score:
Figure 101. Decision node decision table for credit score ratingThe following is an example boxed function expression for a business knowledge model that calculates mortgage payments based on principal, interest, taxes, and insurance (PITI) as a literal expression:
Figure 102. Business knowledge model function for PITI calculation -
After you define the decision logic for the selected node, click Back to "<MODEL_NAME>" to return to the DRD view.
-
For the selected DRD node, use the available connection options to create and connect to the next node in the DRD, or click and drag a new node onto the DRD canvas from the left toolbar.
The node type determines which connection options are supported. For example, an Input data node can connect to a decision node, knowledge source, or text annotation using the applicable connection type, whereas a Knowledge source node can connect to any DRD element. A Decision node can connect to another decision node, a knowledge source, or a text annotation.
The following connection types are available, depending on the node type:
-
Information requirement: Use this connection from an input data node or decision node to another decision node that requires the information.
-
Knowledge requirement: Use this connection from a business knowledge model to a decision node or to another business knowledge model that invokes the decision logic.
-
Authority requirement: Use this connection from an input data node or a decision node to a dependent knowledge source or from a knowledge source to a decision node, business knowledge model, or another knowledge source.
-
Association: Use this connection from an input data node, decision node, business knowledge model, or knowledge source to a text annotation.
Figure 103. Connecting credit score input to the credit score rating decision -
-
Continue adding and defining the remaining DRD components of your decision model. Periodically click Save in the DMN designer to save your work.
-
After you add and define all components of the DRD, click Save to save and validate the completed DRD.
The following is an example DRD for a loan prequalification decision model:
Figure 104. Completed DRD for loan prequalificationThe following is an example DRD for a phone call handling decision model using a reusable decision service:
Figure 105. Completed DRD for phone call handling with a decision serviceIn a DMN decision service node, the decision nodes in the bottom segment incorporate input data from outside of the decision service to arrive at a final decision in the top segment of the decision service node. The resulting top-level decisions from the decision service are then implemented in any subsequent decisions or business knowledge requirements of the DMN model. You can reuse DMN decision services in other DMN models to apply the same decision logic with different input data and different outgoing connections.
6.3.1. Defining DMN decision logic in boxed expressions in Business Central
Boxed expressions in DMN are tables that you use to define the underlying logic of decision nodes and business knowledge models in a decision requirements diagram (DRD) or decision requirements graph (DRG). Some boxed expressions can contain other boxed expressions, but the top-level boxed expression corresponds to the decision logic of a single DRD artifact. While DRDs with one or more DRGs represent the flow of a DMN decision model, boxed expressions define the actual decision logic of individual nodes. DRDs and boxed expressions together form a complete and functional DMN decision model.
You can use the DMN designer in Business Central to define decision logic for your DRD components using built-in boxed expressions.
-
You have created or imported a DMN file in Business Central.
-
In Business Central, go to Menu → Design → Projects, click the project name, and select the DMN file you want to modify.
-
In the DMN designer canvas, select a decision node or business knowledge model that you want to define and click the Edit icon to open the DMN boxed expression designer:
Figure 106. Opening a new decision node boxed expressionFigure 107. Opening a new business knowledge model boxed expressionBy default, all business knowledge models are defined as boxed function expressions containing a literal FEEL expression, a nested context expression of an external JAVA or PMML function, or a nested boxed expression of any type.
For decision nodes, you click the undefined table to select the type of boxed expression you want to use, such as a boxed literal expression, boxed context expression, decision table, or other DMN boxed expression.
Figure 108. Selecting the logic type for a decision nodeFor business knowledge models, you click the top-left function cell to select the function type, or right-click the function value cell, select Clear, and select a boxed expression of another type.
Figure 109. Selecting the function or other logic type for a business knowledge model -
For this example, use a decision node and select Decision Table as the boxed expression type.
A decision table in DMN is a visual representation of one or more rules in a tabular format. Each rule consists of a single row in the table, and includes columns that define the conditions (input) and outcome (output) for that particular row.
-
Click the input column header to define the name and data type for the input condition. For example, name the input column Credit Score.FICO with a
number
data type. This column specifies numeric credit score values or ranges of loan applicants. -
Click the output column header to define the name and data type for the output values. For example, name the output column Credit Score Rating and next to the Data Type option, click Manage to go to the Data Types page where you can create a custom data type with score ratings as constraints.
Figure 110. Managing data types for a column header value -
On the Data Types page, click Add and create a Credit_Score_Rating data type as a
string
:Figure 111. Adding a new data type -
Click Constraints, select Enumeration from the drop-down options, and add the following constraints:
-
"Excellent"
-
"Good"
-
"Fair"
-
"Poor"
-
"Bad"
Figure 112. Adding constraints to the new data typeFor information about constraint types and syntax requirements for the specified data type, see the Decision Model and Notation specification.
-
-
Click OK to save the constraints and click Save to save the data type.
-
Return to the Credit Score Rating decision table, click the Credit Score Rating column header, and set the data type to this new custom data type.
-
Use the Credit Score.FICO input column to define credit score values or ranges of values, and use the Credit Score Rating column to specify one of the corresponding ratings you defined in the Credit_Score_Rating data type.
Right-click any value cell to insert or delete rows (rules) or columns (clauses).
Figure 113. Decision node decision table for credit score rating -
After you define all rules, click the top-left corner of the decision table to define the rule Hit Policy and Builtin Aggregator (for COLLECT hit policy only).
The hit policy determines how to reach an outcome when multiple rules in a decision table match the provided input values. The built-in aggregator determines how to aggregate rule values when you use the COLLECT hit policy.
Figure 114. Defining the decision table hit policyThe following example is a more complex decision table that determines applicant qualification for a loan as the concluding decision node in the same loan prequalification decision model:
Figure 115. Decision table for loan prequalification
For boxed expression types other than decision tables, you follow these guidelines similarly to navigate the boxed expression tables and define variables and parameters for decision logic, but according to the requirements of the boxed expression type. Some boxed expressions, such as boxed literal expressions, can be single-column tables, while other boxed expressions, such as function, context, and invocation expressions, can be multi-column tables with nested boxed expressions of other types.
For example, the following boxed context expression defines the parameters that determine whether a loan applicant can meet minimum mortgage payments based on principal, interest, taxes, and insurance (PITI), represented as a front-end ratio calculation with a sub-context expression:
The following boxed function expression determines a monthly mortgage installment as a business knowledge model in a lending decision, with the function value defined as a nested context expression:
For more information and examples of each boxed expression type, see DMN decision logic in boxed expressions.
6.3.2. Creating custom data types for DMN boxed expressions in Business Central
In DMN boxed expressions in Business Central, data types determine the structure of the data that you use within an associated table, column, or field in the boxed expression. You can use default DMN data types (such as String, Number, Boolean) or you can create custom data types to specify additional fields and constraints that you want to implement for the boxed expression values.
Custom data types that you create for a boxed expression can be simple or structured:
-
Simple data types have only a name and a type assignment. Example:
Age (number)
. -
Structured data types contain multiple fields associated with a parent data type. Example: A single type
Person
containing the fieldsName (string)
,Age (number)
,Email (string)
.
-
You have created or imported a DMN file in Business Central.
-
In Business Central, go to Menu → Design → Projects, click the project name, and select the DMN file you want to modify.
-
In the DMN designer canvas, select a decision node or business knowledge model for which you want to define the data types and click the Edit icon to open the DMN boxed expression designer.
-
If the boxed expression is for a decision node that is not yet defined, click the undefined table to select the type of boxed expression you want to use, such as a boxed literal expression, boxed context expression, decision table, or other DMN boxed expression.
Figure 118. Selecting the logic type for a decision node -
Click the cell for the table header, column header, or parameter field (depending on the boxed expression type) for which you want to define the data type and click Manage to go to the Data Types page where you can create a custom data type.
Figure 119. Managing data types for a column header valueYou can also set and manage custom data types for a specified decision node or business knowledge model node by selecting the Diagram properties icon in the upper-right corner of the DMN designer:
Figure 120. Managing data types in diagram propertiesThe data type that you define for a specified cell in a boxed expression determines the structure of the data that you use within that associated table, column, or field in the boxed expression.
In this example, an output column Credit Score Rating for a DMN decision table defines a set of custom credit score ratings based on an applicant’s credit score.
-
On the Data Types page, click Add and create a Credit_Score_Rating data type as a
string
:Figure 121. Adding a new data typeIf the data type requires a list of items, enable the List setting.
-
Click Constraints, select Enumeration from the drop-down options, and add the following constraints:
-
"Excellent"
-
"Good"
-
"Fair"
-
"Poor"
-
"Bad"
Figure 122. Adding constraints to the new data typeFor information about constraint types and syntax requirements for the specified data type, see the Decision Model and Notation specification.
-
-
Click OK to save the constraints and click Save to save the data type.
-
Return to the Credit Score Rating decision table, click the Credit Score Rating column header, set the data type to this new custom data type, and define the rule values for that column with the rating constraints that you specified.
Figure 123. Decision table for credit score ratingIn the DMN decision model for this scenario, the Credit Score Rating decision flows into the following Loan Prequalification decision that also requires custom data types:
Figure 124. Decision table for loan prequalification -
Continuing with this example, return to the Data Types window, click Add, and create a Loan_Qualification data type as a
Structure
with no constraints.When you save the new structured data type, the first sub-field appears so that you can begin defining nested data fields in this parent data type. You can use these sub-fields in association with the parent structured data type in boxed expressions, such as nested column headers in decision tables or nested table parameters in context or function expressions.
For additional sub-fields, next to the Loan_Qualification data type, select the settings icon (three vertical dots) and select Insert nested field:
Figure 125. Adding a new structured data type with nested fields -
For this example, under the structured Loan_Qualification data type, add a Qualification field with
"Qualified"
and"Not Qualified"
enumeration constraints, and a Reason field with no constraints. Add also a simple Back_End_Ratio and a Front_End_Ratio data type, both with"Sufficient"
and"Insufficient"
enumeration constraints.Click Save for each data type that you create.
Figure 126. Adding nested data types with constraints -
Return to the decision table and, for each column, click the column header cell, set the data type to the new corresponding custom data type, and define the rule values as needed for the column with the constraints that you specified, if applicable.
Figure 127. Decision table for loan prequalification
For boxed expression types other than decision tables, you follow these guidelines similarly to navigate the boxed expression tables and define custom data types as needed.
For example, the following boxed function expression uses custom tCandidate
and tProfile
structured data types to associate data for online dating compatibility:
6.3.3. Including other DMN models within a DMN file in Business Central
In Business Central, you can include other DMN models from your project in a specified DMN file so that you can reuse the decision requirements diagram (DRD) components of the included models in that DMN file. When you include a DMN model within another DMN file, you can use all of the nodes and logic from both models in the same DRD, but you cannot edit the nodes from the included model. To edit nodes from included models, you must update the source file for the included model directly. All changes to the source of the included model are automatically applied across DMN files that include that model.
You cannot include DMN models from other projects in Business Central.
-
You have created or imported the DMN models in the same project in Business Central as the DMN file in which you want to include the models.
-
In Business Central, go to Menu → Design → Projects, click the project name, and select the DMN file you want to modify.
-
In the DMN designer, click the Included Models tab.
-
Click Include Model, select a DMN model from your project in the DMN models list, enter a unique name for the included model, and click Include:
Figure 131. Including a DMN modelThe DMN model is added to this DMN file, and all DRD nodes from the included model are listed under Decision Components in the Decision Navigator view:
Figure 132. DMN file with decision components from the included DMN modelAll data types from the included model are also listed in the Data Types tab for the DMN file:
Figure 133. DMN file with data types from the included DMN model -
In the Model tab of the DMN designer, click and drag the included DRD components onto the canvas to begin implementing them in your DRD:
Figure 134. Adding DRD components from the included DMN modelTo edit DRD nodes or data types from included models, you must update the source file for the included model directly. All changes to the source of the included model are automatically applied across DMN files that include that model.
To edit the included model name or to remove the included model from the DMN file, use the Included Models tab in the DMN designer.
When you remove an included model, any nodes from that included model that are currently used in the DRD are also removed.
6.3.4. DMN designer navigation and properties in Business Central
The DMN designer in Business Central provides the following additional features to help you navigate through the components and properties of decision requirements diagrams (DRDs).
- DMN file and diagram views
-
In the upper-left corner of the DMN designer, select the Project Explorer view to navigate between all DMN and other files or select the Decision Navigator view to navigate between the decision components, graphs, and boxed expressions of a selected DRD:
Figure 135. Project Explorer viewFigure 136. Decision Navigator viewThe DRD components from any DMN models included in the DMN file (in the Included Models tab) are also listed in the Decision Components panel for the DMN file. In the upper-right corner of the DMN designer, select the Explore diagram icon to view an elevated preview of the selected DRD and to navigate between the nodes of the selected DRD:
Figure 137. Explore diagram view - DRD properties and design
-
In the upper-right corner of the DMN designer, select the Diagram properties icon to modify the identifying information, data types, and appearance of a selected DRD, DRD node, or boxed expression cell:
Figure 138. DRD node propertiesTo view the properties of the entire DRD, click the DRD canvas background instead of a specific node.
6.4. DMN model execution
You can create or import DMN files in your Drools project using Business Central or package the DMN files as part of your project knowledge JAR (KJAR) file without Business Central. After you implement your DMN files in your Drools project, you can execute the DMN decision service by deploying the KIE container that contains it to KIE Server for remote access or by manipulating the KIE container directly as a dependency of the calling application. Other options for creating and deploying DMN knowledge packages are also available, and most are similar for all types of knowledge assets, such as DRL files or process definitions.
For information about including external DMN assets with your project packaging and deployment method, see Build, Deploy, Utilize and Run.
6.4.1. Embedding a DMN call directly in a Java application
A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You typically embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the DMN definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments.
Using Maven dependencies enables further flexibility because the specific version of the decision can change dynamically (for example, by using a system property), and it can be periodically scanned for updates and automatically updated. This introduces an external dependency at deploy time of the service, but executes the decision locally, reducing reliance on an external service being available during run time.
-
A KIE container is deployed in KIE Server in the form of a KJAR that includes the DMN model, ideally compiled as an executable model for more efficient execution:
mvn clean install -DgenerateDMNModel=yes
For more information about project packaging and deployment and executable models, see Build, Deploy, Utilize and Run.
-
In your client application, add the following dependencies to the relevant classpath of your Java project:
<!-- Required for the DMN runtime API -->
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-dmn-core</artifactId>
  <version>${drools.version}</version>
</dependency>

<!-- Required if not using classpath KIE container -->
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-ci</artifactId>
  <version>${drools.version}</version>
</dependency>
The
<version>
is the Maven artifact version for Drools currently used in your project (for example, 7.26.0.Final-redhat-00002). -
Create a KIE container from
classpath
orReleaseId
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" );
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
Alternative option:
KieServices kieServices = KieServices.Factory.get();
KieContainer kieContainer = kieServices.getKieClasspathContainer();
-
Obtain
DMNRuntime
from the KIE container and a reference to the DMN model to be evaluated, by using the modelnamespace
andmodelName
:
DMNRuntime dmnRuntime = KieRuntimeFactory.of(kieContainer.getKieBase()).get(DMNRuntime.class);
String namespace = "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a";
String modelName = "dmn-movieticket-ageclassification";
DMNModel dmnModel = dmnRuntime.getModel(namespace, modelName);
-
Execute the decision services for the desired model:
DMNContext dmnContext = dmnRuntime.newContext(); (1)

for (Integer age : Arrays.asList(1,12,13,64,65,66)) {
    dmnContext.set("Age", age); (2)
    DMNResult dmnResult = dmnRuntime.evaluateAll(dmnModel, dmnContext); (3)
    for (DMNDecisionResult dr : dmnResult.getDecisionResults()) { (4)
        log.info("Age: " + age + ", " +
                 "Decision: '" + dr.getDecisionName() + "', " +
                 "Result: " + dr.getResult());
    }
}
1 Instantiate a new DMN context to be the input for the model evaluation. Note that this example loops through the Age Classification decision multiple times.
2 Assign input variables for the input DMN context.
3 Evaluate all DMN decisions defined in the DMN model.
4 Each evaluation may produce one or more results, hence the nested loop over the decision results.
This example prints the following output:
Age 1 Decision 'AgeClassification' : Child
Age 12 Decision 'AgeClassification' : Child
Age 13 Decision 'AgeClassification' : Adult
Age 64 Decision 'AgeClassification' : Adult
Age 65 Decision 'AgeClassification' : Senior
Age 66 Decision 'AgeClassification' : Senior
If the DMN model was not previously compiled as an executable model for more efficient execution, you can enable the following property when you execute your DMN models:
-Dorg.kie.dmn.compiler.execmodel=true
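As a minimal sketch, the same property can also be set programmatically before the KIE container is created, which is equivalent to passing the flag on the command line:
// Enable runtime generation of the DMN executable model
System.setProperty("org.kie.dmn.compiler.execmodel", "true");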
6.4.2. Executing a DMN service using the KIE Server Java client API
The KIE Server Java client API provides a lightweight approach to invoking a remote DMN service either through the REST or JMS interfaces of KIE Server. This approach reduces the number of runtime dependencies necessary to interact with a KIE base. Decoupling the calling code from the decision definition also increases flexibility by enabling them to iterate independently at the appropriate pace.
For more information about the KIE Server Java client API, see KIE Server Java client API for KIE containers and business assets.
-
KIE Server is installed and configured, including a known user name and credentials for a user with the
kie-server
role. For installation options, see Installation and Setup (Core and IDE). -
A KIE container is deployed in KIE Server in the form of a KJAR that includes the DMN model, ideally compiled as an executable model for more efficient execution:
mvn clean install -DgenerateDMNModel=yes
For more information about project packaging and deployment and executable models, see Build, Deploy, Utilize and Run.
-
You have the container ID of the KIE container containing the DMN model. If more than one model is present, you must also know the model namespace and model name of the relevant model.
-
In your client application, add the following dependency to the relevant classpath of your Java project:
<!-- Required for the KIE Server Java client API -->
<dependency>
  <groupId>org.kie.server</groupId>
  <artifactId>kie-server-client</artifactId>
  <version>${drools.version}</version>
</dependency>
The
<version>
is the Maven artifact version for Drools currently used in your project (for example, 7.26.0.Final-redhat-00002). -
Instantiate a
KieServicesClient
instance with the appropriate connection information.Example:
KieServicesConfiguration conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD); (1)
conf.setMarshallingFormat(MarshallingFormat.JSON); (2)
KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(conf);
1 The connection information: -
Example URL:
http://localhost:8080/kie-server/services/rest/server
-
The credentials should reference a user with the
kie-server
role.
2 The marshalling format is an instance of org.kie.server.api.marshalling.MarshallingFormat. It controls whether the messages are sent as JSON or XML. The available marshalling formats are JSON, JAXB, and XSTREAM. -
-
Obtain a
DMNServicesClient
from the KIE server Java client connected to the related KIE Server by invoking the methodgetServicesClient()
on the KIE server Java client instance:DMNServicesClient dmnClient = kieServicesClient.getServicesClient(DMNServicesClient.class );
The
dmnClient
can now execute decision services on KIE Server. -
Execute the decision services for the desired model.
Example:
for (Integer age : Arrays.asList(1,12,13,64,65,66)) {
    DMNContext dmnContext = dmnClient.newContext(); (1)
    dmnContext.set("Age", age); (2)

    ServiceResponse<DMNResult> serverResp = (3)
        dmnClient.evaluateAll($kieContainerId, $modelNamespace, $modelName, dmnContext);

    DMNResult dmnResult = serverResp.getResult(); (4)
    for (DMNDecisionResult dr : dmnResult.getDecisionResults()) {
        log.info("Age: " + age + ", " +
                 "Decision: '" + dr.getDecisionName() + "', " +
                 "Result: " + dr.getResult());
    }
}
1 Instantiate a new DMN Context to be the input for the model evaluation. Note that this example is looping through the Age Classification decision multiple times.
2 Assign input variables for the input DMN Context.
3 Evaluate all the DMN Decisions defined in the DMN model: -
$kieContainerId
is the ID of the container where the KJAR containing the DMN model is deployed. -
$modelNamespace
is the namespace for the model. -
$modelName
is the name for the model.
4 The DMN result object is available from the server response. At this point, the dmnResult contains all the decision results from the evaluated DMN model.
You can also execute only a specific DMN decision in the model by using alternative methods of the DMNServicesClient, as shown in the sketch after this list.
If the KIE container contains only one DMN model, you can omit $modelNamespace and $modelName because the KIE Server API selects it by default. -
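The following is a minimal sketch of evaluating a single decision by name. The evaluateDecisionByName method and its parameter order are assumptions about the DMNServicesClient API and should be verified against the client version in use:
DMNContext dmnContext = dmnClient.newContext();
dmnContext.set("Age", 66);

// Hypothetical single-decision evaluation of the "AgeClassification" decision only.
ServiceResponse<DMNResult> serverResp =
    dmnClient.evaluateDecisionByName($kieContainerId, $modelNamespace, $modelName,
                                     "AgeClassification", dmnContext);

DMNResult dmnResult = serverResp.getResult();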
6.4.3. Executing a DMN service using the KIE Server REST API
Directly interacting with the REST endpoints of KIE Server provides the most separation between the calling code and the decision logic definition. The calling code is completely free of direct dependencies, and you can implement it in an entirely different development platform such as node.js
or .net
. The examples in this section demonstrate Unix-style curl commands but provide relevant information to adapt to any REST client.
For more information about the KIE Server REST API, see KIE Server REST API for KIE containers and business assets.
-
KIE Server is installed and configured, including a known user name and credentials for a user with the
kie-server
role. For installation options, see Installation and Setup (Core and IDE). -
A KIE container is deployed in KIE Server in the form of a KJAR that includes the DMN model, ideally compiled as an executable model for more efficient execution:
mvn clean install -DgenerateDMNModel=yes
For more information about project packaging and deployment and executable models, see Build, Deploy, Utilize and Run.
-
You have the container ID of the KIE container containing the DMN model. If more than one model is present, you must also know the model namespace and model name of the relevant model.
-
You have the DMN model deployed. For more information about project deployment, see Build, Deploy, Utilize and Run.
-
Determine the base URL for accessing the KIE Server REST API endpoints. This requires knowing the following values (with the default local deployment values as an example):
-
Host (
localhost
) -
Port (
8080
) -
Root context (
kie-server
) -
Base REST path (
services/rest/
)
Example base URL in local deployment:
http://localhost:8080/kie-server/services/rest/
-
-
Determine user authentication requirements.
When users are defined directly in the KIE Server configuration, HTTP Basic authentication is used and requires the user name and password. Successful requests require that the user have the
kie-server
role. The following example demonstrates how to add credentials to a curl request:
curl -u username:password <request>
If KIE Server is configured with Red Hat Single Sign-On, the request must include a bearer token:
curl -H "Authorization: bearer $TOKEN" <request>
-
Specify the format of the request and response. The REST API endpoints work with both JSON and XML formats and are set using request headers:
JSON
curl -H "accept: application/json" -H "content-type: application/json"

XML
curl -H "accept: application/xml" -H "content-type: application/xml"
-
(Optional) Query the container for a list of deployed decision models:
[GET]
server/containers/{containerId}/dmn
Example curl request:
curl -u krisv:krisv -H "accept: application/xml" -X GET "http://localhost:8080/kie-server/services/rest/server/containers/MovieDMNContainer/dmn"
Sample XML output:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<response type="SUCCESS" msg="OK models successfully retrieved from container 'MovieDMNContainer'">
  <dmn-model-info-list>
    <model>
      <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace>
      <model-name>dmn-movieticket-ageclassification</model-name>
      <model-id>_99</model-id>
      <decisions>
        <dmn-decision-info>
          <decision-id>_3</decision-id>
          <decision-name>AgeClassification</decision-name>
        </dmn-decision-info>
      </decisions>
    </model>
  </dmn-model-info-list>
</response>
Sample JSON output:
{ "type" : "SUCCESS", "msg" : "OK models successfully retrieved from container 'MovieDMNContainer'", "result" : { "dmn-model-info-list" : { "models" : [ { "model-namespace" : "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a", "model-name" : "dmn-movieticket-ageclassification", "model-id" : "_99", "decisions" : [ { "decision-id" : "_3", "decision-name" : "AgeClassification" } ] } ] } } }
-
Execute the model:
[POST]
server/containers/{containerId}/dmn
Example curl request:
curl -u krisv:krisv -H "accept: application/json" -H "content-type: application/json" -X POST "http://localhost:8080/kie-server/services/rest/server/containers/MovieDMNContainer/dmn" -d "{ \"model-namespace\" : \"http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a\", \"model-name\" : \"dmn-movieticket-ageclassification\", \"decision-name\" : [ ], \"decision-id\" : [ ], \"dmn-context\" : {\"Age\" : 66}}"
Example JSON request:
{ "model-namespace" : "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a", "model-name" : "dmn-movieticket-ageclassification", "decision-name" : [ ], "decision-id" : [ ], "dmn-context" : {"Age" : 66} }
Example XML request (JAXB format):
<?xml version="1.0" encoding="UTF-8"?>
<dmn-evaluation-context>
  <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace>
  <model-name>dmn-movieticket-ageclassification</model-name>
  <dmn-context xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <type>MAP</type>
    <element xsi:type="jaxbStringObjectPair" key="Age">
      <value xsi:type="xs:int" xmlns:xs="http://www.w3.org/2001/XMLSchema">66</value>
    </element>
  </dmn-context>
</dmn-evaluation-context>
Regardless of the request format, the request requires the following elements:
-
Model namespace
-
Model name
-
Context object containing input values
Example JSON response:
{ "type" : "SUCCESS", "msg" : "OK from container 'MovieDMNContainer'", "result" : { "dmn-evaluation-result" : { "messages" : [ ], "model-namespace" : "http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a", "model-name" : "dmn-movieticket-ageclassification", "decision-name" : [ ], "dmn-context" : { "Age" : 66, "AgeClassification" : "Senior" }, "decision-results" : { "_3" : { "messages" : [ ], "decision-id" : "_3", "decision-name" : "AgeClassification", "result" : "Senior", "status" : "SUCCEEDED" } } } } }
Example XML (JAXB format) response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<response type="SUCCESS" msg="OK from container 'MovieDMNContainer'">
  <dmn-evaluation-result>
    <model-namespace>http://www.redhat.com/_c7328033-c355-43cd-b616-0aceef80e52a</model-namespace>
    <model-name>dmn-movieticket-ageclassification</model-name>
    <dmn-context xsi:type="jaxbListWrapper" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <type>MAP</type>
      <element xsi:type="jaxbStringObjectPair" key="Age">
        <value xsi:type="xs:int" xmlns:xs="http://www.w3.org/2001/XMLSchema">66</value>
      </element>
      <element xsi:type="jaxbStringObjectPair" key="AgeClassification">
        <value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema">Senior</value>
      </element>
    </dmn-context>
    <messages/>
    <decisionResults>
      <entry>
        <key>_3</key>
        <value>
          <decision-id>_3</decision-id>
          <decision-name>AgeClassification</decision-name>
          <result xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">Senior</result>
          <messages/>
          <status>SUCCEEDED</status>
        </value>
      </entry>
    </decisionResults>
  </dmn-evaluation-result>
</response>
-
7. Predictive Model Markup Language (PMML)
7.1. Predictive Model Markup Language (PMML)
Predictive Model Markup Language (PMML) is an XML-based standard established by the Data Mining Group (DMG) for defining statistical and data-mining models. PMML models can be shared between PMML-compliant platforms and across organizations so that business analysts and developers are unified in designing, analyzing, and implementing PMML-based assets and services.
For more information about the background and applications of PMML, see the DMG PMML specification.
7.1.1. PMML conformance levels
The PMML specification defines producer and consumer conformance levels in a software implementation to ensure that PMML models are created and integrated reliably. For the formal definitions of each conformance level, see the DMG PMML conformance page.
The following list summarizes the PMML conformance levels:
- Producer conformance
-
A tool or application is producer conforming if it generates valid PMML documents for at least one type of model. Satisfying PMML producer conformance requirements ensures that a model definition document is syntactically correct and defines a model instance that is consistent with semantic criteria that are defined in model specifications.
- Consumer conformance
-
An application is consumer conforming if it accepts valid PMML documents for at least one type of model. Satisfying consumer conformance requirements ensures that a PMML model created according to producer conformance can be integrated and used as defined. For example, if an application is consumer conforming for Regression model types, then valid PMML documents defining models of this type produced by different conforming producers would be interchangeable in the application.
Drools includes consumer conformance support for the following PMML 4.2.1 model types:
-
Regression models
-
Scorecard models
-
Tree models
-
Mining models (with sub-types modelChain, selectAll, and selectFirst)
For a list of all PMML model types, including those not supported in Drools, see the DMG PMML specification.
7.2. PMML model examples
PMML defines an XML schema that enables PMML models to be used between different PMML-compliant platforms. The PMML specification enables multiple software platforms to work with the same file for authoring, testing, and production execution, assuming producer and consumer conformance are met.
The following are examples of PMML Regression, Scorecard, Tree, and Mining models. These examples illustrate the supported types of models that you can integrate with your decision services in Drools.
For more PMML examples, see the DMG PMML Sample Files page.
<PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2">
<Header copyright="JBoss"/>
<DataDictionary numberOfFields="5">
<DataField dataType="double" name="fld1" optype="continuous"/>
<DataField dataType="double" name="fld2" optype="continuous"/>
<DataField dataType="string" name="fld3" optype="categorical">
<Value value="x"/>
<Value value="y"/>
</DataField>
<DataField dataType="double" name="fld4" optype="continuous"/>
<DataField dataType="double" name="fld5" optype="continuous"/>
</DataDictionary>
<RegressionModel algorithmName="linearRegression" functionName="regression" modelName="LinReg" normalizationMethod="logit" targetFieldName="fld4">
<MiningSchema>
<MiningField name="fld1"/>
<MiningField name="fld2"/>
<MiningField name="fld3"/>
<MiningField name="fld4" usageType="predicted"/>
<MiningField name="fld5" usageType="target"/>
</MiningSchema>
<RegressionTable intercept="0.5">
<NumericPredictor coefficient="5" exponent="2" name="fld1"/>
<NumericPredictor coefficient="2" exponent="1" name="fld2"/>
<CategoricalPredictor coefficient="-3" name="fld3" value="x"/>
<CategoricalPredictor coefficient="3" name="fld3" value="y"/>
<PredictorTerm coefficient="0.4">
<FieldRef field="fld1"/>
<FieldRef field="fld2"/>
</PredictorTerm>
</RegressionTable>
</RegressionModel>
</PMML>
<PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2">
<Header copyright="JBoss"/>
<DataDictionary numberOfFields="4">
<DataField name="param1" optype="continuous" dataType="double"/>
<DataField name="param2" optype="continuous" dataType="double"/>
<DataField name="overallScore" optype="continuous" dataType="double" />
<DataField name="finalscore" optype="continuous" dataType="double" />
</DataDictionary>
<Scorecard modelName="ScorecardCompoundPredicate" useReasonCodes="true" isScorable="true" functionName="regression" baselineScore="15" initialScore="0.8" reasonCodeAlgorithm="pointsAbove">
<MiningSchema>
<MiningField name="param1" usageType="active" invalidValueTreatment="asMissing">
</MiningField>
<MiningField name="param2" usageType="active" invalidValueTreatment="asMissing">
</MiningField>
<MiningField name="overallScore" usageType="target"/>
<MiningField name="finalscore" usageType="predicted"/>
</MiningSchema>
<Characteristics>
<Characteristic name="ch1" baselineScore="50" reasonCode="reasonCh1">
<Attribute partialScore="20">
<SimplePredicate field="param1" operator="lessThan" value="20"/>
</Attribute>
<Attribute partialScore="100">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="param1" operator="greaterOrEqual" value="20"/>
<SimplePredicate field="param2" operator="lessOrEqual" value="25"/>
</CompoundPredicate>
</Attribute>
<Attribute partialScore="200">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="param1" operator="greaterOrEqual" value="20"/>
<SimplePredicate field="param2" operator="greaterThan" value="25"/>
</CompoundPredicate>
</Attribute>
</Characteristic>
<Characteristic name="ch2" reasonCode="reasonCh2">
<Attribute partialScore="10">
<CompoundPredicate booleanOperator="or">
<SimplePredicate field="param2" operator="lessOrEqual" value="-5"/>
<SimplePredicate field="param2" operator="greaterOrEqual" value="50"/>
</CompoundPredicate>
</Attribute>
<Attribute partialScore="20">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="param2" operator="greaterThan" value="-5"/>
<SimplePredicate field="param2" operator="lessThan" value="50"/>
</CompoundPredicate>
</Attribute>
</Characteristic>
</Characteristics>
</Scorecard>
</PMML>
<PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2">
<Header copyright="JBOSS"/>
<DataDictionary numberOfFields="5">
<DataField dataType="double" name="fld1" optype="continuous"/>
<DataField dataType="double" name="fld2" optype="continuous"/>
<DataField dataType="string" name="fld3" optype="categorical">
<Value value="true"/>
<Value value="false"/>
</DataField>
<DataField dataType="string" name="fld4" optype="categorical">
<Value value="optA"/>
<Value value="optB"/>
<Value value="optC"/>
</DataField>
<DataField dataType="string" name="fld5" optype="categorical">
<Value value="tgtX"/>
<Value value="tgtY"/>
<Value value="tgtZ"/>
</DataField>
</DataDictionary>
<TreeModel functionName="classification" modelName="TreeTest">
<MiningSchema>
<MiningField name="fld1"/>
<MiningField name="fld2"/>
<MiningField name="fld3"/>
<MiningField name="fld4"/>
<MiningField name="fld5" usageType="predicted"/>
</MiningSchema>
<Node score="tgtX">
<True/>
<Node score="tgtX">
<SimplePredicate field="fld4" operator="equal" value="optA"/>
<Node score="tgtX">
<CompoundPredicate booleanOperator="surrogate">
<SimplePredicate field="fld1" operator="lessThan" value="30.0"/>
<SimplePredicate field="fld2" operator="greaterThan" value="20.0"/>
</CompoundPredicate>
<Node score="tgtX">
<SimplePredicate field="fld2" operator="lessThan" value="40.0"/>
</Node>
<Node score="tgtZ">
<SimplePredicate field="fld2" operator="greaterOrEqual" value="10.0"/>
</Node>
</Node>
<Node score="tgtZ">
<CompoundPredicate booleanOperator="or">
<SimplePredicate field="fld1" operator="greaterOrEqual" value="60.0"/>
<SimplePredicate field="fld1" operator="lessOrEqual" value="70.0"/>
</CompoundPredicate>
<Node score="tgtZ">
<SimpleSetPredicate booleanOperator="isNotIn" field="fld4">
<Array type="string">optA optB</Array>
</SimpleSetPredicate>
</Node>
</Node>
</Node>
<Node score="tgtY">
<CompoundPredicate booleanOperator="or">
<SimplePredicate field="fld4" operator="equal" value="optA"/>
<SimplePredicate field="fld4" operator="equal" value="optC"/>
</CompoundPredicate>
<Node score="tgtY">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="fld1" operator="greaterThan" value="10.0"/>
<SimplePredicate field="fld1" operator="lessThan" value="50.0"/>
<SimplePredicate field="fld4" operator="equal" value="optA"/>
<SimplePredicate field="fld2" operator="lessThan" value="100.0"/>
<SimplePredicate field="fld3" operator="equal" value="false"/>
</CompoundPredicate>
</Node>
<Node score="tgtZ">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="fld4" operator="equal" value="optC"/>
<SimplePredicate field="fld2" operator="lessThan" value="30.0"/>
</CompoundPredicate>
</Node>
</Node>
</Node>
</TreeModel>
</PMML>
<PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2">
<Header>
<Application name="Drools-PMML" version="7.0.0-SNAPSHOT" />
</Header>
<DataDictionary numberOfFields="7">
<DataField name="age" optype="continuous" dataType="double" />
<DataField name="occupation" optype="categorical" dataType="string">
<Value value="SKYDIVER" />
<Value value="ASTRONAUT" />
<Value value="PROGRAMMER" />
<Value value="TEACHER" />
<Value value="INSTRUCTOR" />
</DataField>
<DataField name="residenceState" optype="categorical" dataType="string">
<Value value="AP" />
<Value value="KN" />
<Value value="TN" />
</DataField>
<DataField name="validLicense" optype="categorical" dataType="boolean" />
<DataField name="overallScore" optype="continuous" dataType="double" />
<DataField name="grade" optype="categorical" dataType="string">
<Value value="A" />
<Value value="B" />
<Value value="C" />
<Value value="D" />
<Value value="F" />
</DataField>
<DataField name="qualificationLevel" optype="categorical" dataType="string">
<Value value="Unqualified" />
<Value value="Barely" />
<Value value="Well" />
<Value value="Over" />
</DataField>
</DataDictionary>
<MiningModel modelName="SampleModelChainMine" functionName="classification">
<MiningSchema>
<MiningField name="age" />
<MiningField name="occupation" />
<MiningField name="residenceState" />
<MiningField name="validLicense" />
<MiningField name="overallScore" />
<MiningField name="qualificationLevel" usageType="target"/>
</MiningSchema>
<Segmentation multipleModelMethod="modelChain">
<Segment id="1">
<True />
<Scorecard modelName="Sample Score 1" useReasonCodes="true" isScorable="true" functionName="regression" baselineScore="0.0" initialScore="0.345">
<MiningSchema>
<MiningField name="age" usageType="active" invalidValueTreatment="asMissing" />
<MiningField name="occupation" usageType="active" invalidValueTreatment="asMissing" />
<MiningField name="residenceState" usageType="active" invalidValueTreatment="asMissing" />
<MiningField name="validLicense" usageType="active" invalidValueTreatment="asMissing" />
<MiningField name="overallScore" usageType="predicted" />
</MiningSchema>
<Output>
<OutputField name="calculatedScore" displayName="Final Score" dataType="double" feature="predictedValue" targetField="overallScore" />
</Output>
<Characteristics>
<Characteristic name="AgeScore" baselineScore="0.0" reasonCode="ABZ">
<Extension name="cellRef" value="$B$8" />
<Attribute partialScore="10.0">
<Extension name="cellRef" value="$C$10" />
<SimplePredicate field="age" operator="lessOrEqual" value="5" />
</Attribute>
<Attribute partialScore="30.0" reasonCode="CX1">
<Extension name="cellRef" value="$C$11" />
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="age" operator="greaterOrEqual" value="5" />
<SimplePredicate field="age" operator="lessThan" value="12" />
</CompoundPredicate>
</Attribute>
<Attribute partialScore="40.0" reasonCode="CX2">
<Extension name="cellRef" value="$C$12" />
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="age" operator="greaterOrEqual" value="13" />
<SimplePredicate field="age" operator="lessThan" value="44" />
</CompoundPredicate>
</Attribute>
<Attribute partialScore="25.0">
<Extension name="cellRef" value="$C$13" />
<SimplePredicate field="age" operator="greaterOrEqual" value="45" />
</Attribute>
</Characteristic>
<Characteristic name="OccupationScore" baselineScore="0.0">
<Extension name="cellRef" value="$B$16" />
<Attribute partialScore="-10.0" reasonCode="CX2">
<Extension name="description" value="skydiving is a risky occupation" />
<Extension name="cellRef" value="$C$18" />
<SimpleSetPredicate field="occupation" booleanOperator="isIn">
<Array n="2" type="string">SKYDIVER ASTRONAUT</Array>
</SimpleSetPredicate>
</Attribute>
<Attribute partialScore="10.0">
<Extension name="cellRef" value="$C$19" />
<SimpleSetPredicate field="occupation" booleanOperator="isIn">
<Array n="2" type="string">TEACHER INSTRUCTOR</Array>
</SimpleSetPredicate>
</Attribute>
<Attribute partialScore="5.0">
<Extension name="cellRef" value="$C$20" />
<SimplePredicate field="occupation" operator="equal" value="PROGRAMMER" />
</Attribute>
</Characteristic>
<Characteristic name="ResidenceStateScore" baselineScore="0.0" reasonCode="RES">
<Extension name="cellRef" value="$B$22" />
<Attribute partialScore="-10.0">
<Extension name="cellRef" value="$C$24" />
<SimplePredicate field="residenceState" operator="equal" value="AP" />
</Attribute>
<Attribute partialScore="10.0">
<Extension name="cellRef" value="$C$25" />
<SimplePredicate field="residenceState" operator="equal" value="KN" />
</Attribute>
<Attribute partialScore="5.0">
<Extension name="cellRef" value="$C$26" />
<SimplePredicate field="residenceState" operator="equal" value="TN" />
</Attribute>
</Characteristic>
<Characteristic name="ValidLicenseScore" baselineScore="0.0">
<Extension name="cellRef" value="$B$28" />
<Attribute partialScore="1.0" reasonCode="LX00">
<Extension name="cellRef" value="$C$30" />
<SimplePredicate field="validLicense" operator="equal" value="true" />
</Attribute>
<Attribute partialScore="-1.0" reasonCode="LX00">
<Extension name="cellRef" value="$C$31" />
<SimplePredicate field="validLicense" operator="equal" value="false" />
</Attribute>
</Characteristic>
</Characteristics>
</Scorecard>
</Segment>
<Segment id="2">
<True />
<TreeModel modelName="SampleTree" functionName="classification" missingValueStrategy="lastPrediction" noTrueChildStrategy="returnLastPrediction">
<MiningSchema>
<MiningField name="age" usageType="active" />
<MiningField name="validLicense" usageType="active" />
<MiningField name="calculatedScore" usageType="active" />
<MiningField name="qualificationLevel" usageType="predicted" />
</MiningSchema>
<Output>
<OutputField name="qualification" displayName="Qualification Level" dataType="string" feature="predictedValue" targetField="qualificationLevel" />
</Output>
<Node score="Well" id="1">
<True/>
<Node score="Barely" id="2">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="age" operator="greaterOrEqual" value="16" />
<SimplePredicate field="validLicense" operator="equal" value="true" />
</CompoundPredicate>
<Node score="Barely" id="3">
<SimplePredicate field="calculatedScore" operator="lessOrEqual" value="50.0" />
</Node>
<Node score="Well" id="4">
<CompoundPredicate booleanOperator="and">
<SimplePredicate field="calculatedScore" operator="greaterThan" value="50.0" />
<SimplePredicate field="calculatedScore" operator="lessOrEqual" value="60.0" />
</CompoundPredicate>
</Node>
<Node score="Over" id="5">
<SimplePredicate field="calculatedScore" operator="greaterThan" value="60.0" />
</Node>
</Node>
<Node score="Unqualified" id="6">
<CompoundPredicate booleanOperator="surrogate">
<SimplePredicate field="age" operator="lessThan" value="16" />
<SimplePredicate field="calculatedScore" operator="lessOrEqual" value="40.0" />
<True />
</CompoundPredicate>
</Node>
</Node>
</TreeModel>
</Segment>
</Segmentation>
</MiningModel>
</PMML>
7.3. PMML support in Drools
Drools includes consumer conformance support for the following PMML 4.2.1 model types:
-
Regression models
-
Scorecard models
-
Tree models
-
Mining models (with sub-types modelChain, selectAll, and selectFirst)
For a list of all PMML model types, including those not supported in Drools, see the DMG PMML specification.
Drools does not include a built-in PMML model editor, but you can use an XML or PMML-specific authoring tool to create PMML models and then integrate the PMML models in your decision services in Drools. You can import PMML files into your project in Business Central (Menu → Design → Projects → Import Asset) or package the PMML files as part of your project knowledge JAR (KJAR) file without Business Central.
When you add a PMML file to a project in Drools, multiple assets are generated. Each type of PMML model generates a different set of assets, but all PMML model types generate at least the following set of assets:
-
A DRL file that contains all of the rules associated with your PMML model
-
At least two Java classes:
-
A data class that is used as the default object type for the model type
-
A
RuleUnit
class that is used to manage data sources and rule execution
-
If a PMML file has MiningModel
as the root model, multiple instances of each of these files are generated.
For more information about including assets such as PMML files with your project packaging and deployment method, see Build, Deploy, Utilize and Run.
7.3.1. PMML naming conventions in Drools
The following are naming conventions for generated PMML packages, classes, and rules:
-
If no package name is given in a PMML model file, then the default package name
org.kie.pmml.pmml_4_2
is prefixed to the model name for the generated rules in the format"org.kie.pmml.pmml_4_2"+modelName
. -
The package name for the generated
RuleUnit
Java class is the same as the package name for the generated rules. -
The name of the generated
RuleUnit
Java class is the model name withRuleUnit
added to it in the formatmodelName+"RuleUnit"
. -
Each PMML model has at least one data class that is generated. The package name for these classes is
org.kie.pmml.pmml_4_2.model
. -
The names of generated data classes are determined by the model type, prefixed with the model name:
-
Regression models: One data class named
modelName+"RegressionData"
-
Scorecard models: One data class named
modelName+"ScoreCardData"
-
Tree models: Two data classes, the first named
modelName+"TreeNode"
and the second namedmodelName+"TreeToken"
-
Mining models: One data class named
modelName+"MiningModelData"
-
The mining model also generates all of the rules and classes that are within each of its segments.
7.3.2. PMML extensions in Drools
The PMML specification supports Extension
elements that extend the content of a PMML model. You can use extensions at almost every level of a PMML model definition, and as the first and last child in the main element of a model for maximum flexibility. For more information about PMML extensions, see the DMG PMML Extension Mechanism.
To optimize PMML integration, Drools supports the following additional PMML extensions:
-
modelPackage
: Designates a package name for the generated rules and Java classes. Include this extension in theHeader
section of the PMML model file. -
adapter
: Designates the type of construct (bean
ortrait
) that is used to contain input and output data for rules. Insert this extension in theMiningSchema
orOutput
section (or both) of the PMML model file. -
externalClass
: Used in conjunction with theadapter
extension in defining aMiningField
orOutputField
. This extension contains a class with an attribute name that matches the name of theMiningField
orOutputField
element.
7.4. PMML model execution
You can import PMML files into your Drools project using Business Central (Menu → Design → Projects → Import Asset) or package the PMML files as part of your project knowledge JAR (KJAR) file without Business Central. After you implement your PMML files in your Drools project, you can execute the PMML-based decision service by embedding PMML calls directly in your Java application or by sending an ApplyPmmlModelCommand
command to a configured KIE Server.
For more information about including PMML assets with your project packaging and deployment method, see Build, Deploy, Utilize and Run.
7.4.1. Embedding a PMML call directly in a Java application
A KIE container is local when the knowledge assets are either embedded directly into the calling program or are physically pulled in using Maven dependencies for the KJAR. You typically embed knowledge assets directly into a project if there is a tight relationship between the version of the code and the version of the PMML definition. Any changes to the decision take effect after you have intentionally updated and redeployed the application. A benefit of this approach is that proper operation does not rely on any external dependencies to the run time, which can be a limitation of locked-down environments.
Using Maven dependencies enables further flexibility because the specific version of the decision can dynamically change (for example, by using a system property), and it can be periodically scanned for updates and automatically updated. This introduces an external dependency on the deploy time of the service, but executes the decision locally, reducing reliance on an external service being available during run time.
-
A KJAR containing the PMML model to execute has been created. For more information about project packaging, see Build, Deploy, Utilize and Run.
-
In your client application, add the following dependencies to the relevant classpath of your Java project:
<!-- Required for the PMML compiler -->
<dependency>
  <groupId>org.drools</groupId>
  <artifactId>kie-pmml</artifactId>
  <version>${drools.version}</version>
</dependency>

<!-- Required for the KIE public API -->
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-api</artifactId>
  <version>${drools.version}</version>
</dependency>

<!-- Required if not using classpath KIE container -->
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-ci</artifactId>
  <version>${drools.version}</version>
</dependency>
The
<version>
is the Maven artifact version for Drools currently used in your project (for example, 7.26.0.Final-redhat-00002). -
Create a KIE container from
classpath
orReleaseId
:
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" );
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
Alternative option:
KieServices kieServices = KieServices.Factory.get();
KieContainer kieContainer = kieServices.getKieClasspathContainer();
-
Create an instance of the
PMMLRequestData
class, which applies your PMML model to a set of data:
public class PMMLRequestData {
    private String correlationId; (1)
    private String modelName; (2)
    private String source; (3)
    private List<ParameterInfo<?>> requestParams; (4)
    ...
}
1 Identifies data that is associated with a particular request or result.
2 The name of the model that should be applied to the request data.
3 Used by internally generated PMMLRequestData objects to identify the segment that generated the request.
4 The default mechanism for sending input data points. -
Create an instance of the
PMML4Result
class, which holds the output information that is the result of applying the PMML-based rules to the input data:
public class PMML4Result {
    private String correlationId;
    private String segmentationId; (1)
    private String segmentId; (2)
    private int segmentIndex; (3)
    private String resultCode; (4)
    private Map<String, Object> resultVariables; (5)
    ...
}
1 Used when the model type is MiningModel. The segmentationId is used to differentiate between multiple segmentations.
2 Used in conjunction with the segmentationId to identify which segment generated the results.
3 Used to maintain the order of segments.
4 Used to determine whether the model was successfully applied, where OK indicates success.
5 Contains the name of a resultant variable and its associated value.
In addition to the normal getter methods, the PMML4Result class also supports the following methods for directly retrieving the values of result variables:
public <T> Optional<T> getResultValue(String objName, String objField, Class<T> clazz, Object...params)

public Object getResultValue(String objName, String objField, Object...params)
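For example, a minimal usage sketch; the "ScoreCard" object name and "score" field are illustrative and depend on the model that produced the result:
// Typed retrieval: returns an Optional of the requested type.
Optional<Double> score = resultHolder.getResultValue("ScoreCard", "score", Double.class);
score.ifPresent(s -> System.out.println("Calculated score: " + s));

// Untyped retrieval: returns the raw value as an Object.
Object rawScore = resultHolder.getResultValue("ScoreCard", "score");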
-
Create an instance of the
ParameterInfo
class, which serves as a wrapper for basic data type objects used as part of thePMMLRequestData
class:
public class ParameterInfo<T> { (1)
    private String correlationId;
    private String name; (2)
    private String capitalizedName;
    private Class<T> type; (3)
    private T value; (4)
    ...
}
1 The parameterized class to handle many different types.
2 The name of the variable that is expected as input for the model.
3 The class that is the actual type of the variable.
4 The actual value of the variable. -
Execute the PMML model based on the required PMML class instances that you have created:
public void executeModel(KieBase kbase,
                         Map<String,Object> variables,
                         String modelName,
                         String correlationId,
                         String modelPkgName) {
  RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase);

  PMMLRequestData request = new PMMLRequestData(correlationId, modelName);
  PMML4Result resultHolder = new PMML4Result(correlationId);
  variables.entrySet().forEach( es -> {
    request.addRequestParam(es.getKey(), es.getValue());
  });

  DataSource<PMMLRequestData> requestData = executor.newDataSource("request");
  DataSource<PMML4Result> resultData = executor.newDataSource("results");
  DataSource<PMMLData> internalData = executor.newDataSource("pmmlData");

  requestData.insert(request);
  resultData.insert(resultHolder);

  List<String> possiblePackageNames = calculatePossiblePackageNames(modelName, modelPkgName);
  Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit("RuleUnitIndicator",
                                                                (InternalKnowledgeBase)kbase,
                                                                possiblePackageNames);
  if (ruleUnitClass != null) {
    executor.run(ruleUnitClass);
    if ( "OK".equals(resultHolder.getResultCode()) ) {
      // extract result variables here
    }
  }
}

protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) {
  RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry();
  Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap();
  RuleImpl ruleImpl = null;
  for (String pkgName: possiblePackages) {
    if (pkgs.containsKey(pkgName)) {
      InternalKnowledgePackage pkg = pkgs.get(pkgName);
      ruleImpl = pkg.getRule(startingRule);
      if (ruleImpl != null) {
        RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null);
        if (descr != null) {
          return descr.getRuleUnitClass();
        }
      }
    }
  }
  return null;
}

protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) {
  List<String> packageNames = new ArrayList<>();
  String javaModelId = modelId.replaceAll("\\s","");
  if (knownPackageNames != null && knownPackageNames.length > 0) {
    for (String knownPkgName: knownPackageNames) {
      packageNames.add(knownPkgName + "." + javaModelId);
    }
  }
  String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+"."+javaModelId;
  packageNames.add(basePkgName);
  return packageNames;
}
Rules are executed by the
RuleUnitExecutor
class. TheRuleUnitExecutor
class creates KIE sessions and adds the requiredDataSource
objects to those sessions, and then executes the rules based on theRuleUnit
that is passed as a parameter to therun()
method. ThecalculatePossiblePackageNames
and thegetStartingRuleUnit
methods determine the fully qualified name of theRuleUnit
class that is passed to therun()
method.
To facilitate your PMML model execution, you can also use a PMML4ExecutionHelper
class supported in Drools. For more information about the PMML helper class, see PMML execution helper class.
7.4.1.1. PMML execution helper class
Drools provides a PMML4ExecutionHelper
class that helps create the PMMLRequestData
class required for PMML model execution and that helps execute rules using the RuleUnitExecutor
class.
The following are examples of a PMML model execution without and with the PMML4ExecutionHelper
class, as a comparison:
Example PMML model execution without PMML4ExecutionHelper
public void executeModel(KieBase kbase,
Map<String,Object> variables,
String modelName,
String correlationId,
String modelPkgName) {
RuleUnitExecutor executor = RuleUnitExecutor.create().bind(kbase);
PMMLRequestData request = new PMMLRequestData(correlationId, modelName);
PMML4Result resultHolder = new PMML4Result(correlationId);
variables.entrySet().forEach( es -> {
request.addRequestParam(es.getKey(), es.getValue());
});
DataSource<PMMLRequestData> requestData = executor.newDataSource("request");
DataSource<PMML4Result> resultData = executor.newDataSource("results");
DataSource<PMMLData> internalData = executor.newDataSource("pmmlData");
requestData.insert(request);
resultData.insert(resultHolder);
List<String> possiblePackageNames = calculatePossiblePackageNames(modelName,
modelPkgName);
Class<? extends RuleUnit> ruleUnitClass = getStartingRuleUnit("RuleUnitIndicator",
(InternalKnowledgeBase)kbase,
possiblePackageNames);
if (ruleUnitClass != null) {
executor.run(ruleUnitClass);
if ( "OK".equals(resultHolder.getResultCode()) ) {
// extract result variables here
}
}
}
protected Class<? extends RuleUnit> getStartingRuleUnit(String startingRule, InternalKnowledgeBase ikb, List<String> possiblePackages) {
RuleUnitRegistry unitRegistry = ikb.getRuleUnitRegistry();
Map<String,InternalKnowledgePackage> pkgs = ikb.getPackagesMap();
RuleImpl ruleImpl = null;
for (String pkgName: possiblePackages) {
if (pkgs.containsKey(pkgName)) {
InternalKnowledgePackage pkg = pkgs.get(pkgName);
ruleImpl = pkg.getRule(startingRule);
if (ruleImpl != null) {
RuleUnitDescr descr = unitRegistry.getRuleUnitFor(ruleImpl).orElse(null);
if (descr != null) {
return descr.getRuleUnitClass();
}
}
}
}
return null;
}
protected List<String> calculatePossiblePackageNames(String modelId, String...knownPackageNames) {
List<String> packageNames = new ArrayList<>();
String javaModelId = modelId.replaceAll("\\s","");
if (knownPackageNames != null && knownPackageNames.length > 0) {
for (String knownPkgName: knownPackageNames) {
packageNames.add(knownPkgName + "." + javaModelId);
}
}
String basePkgName = PMML4UnitImpl.DEFAULT_ROOT_PACKAGE+"."+javaModelId;
packageNames.add(basePkgName);
return packageNames;
}
Example PMML model execution with PMML4ExecutionHelper
public void executeModel(KieBase kbase,
Map<String,Object> variables,
String modelName,
String modelPkgName,
String correlationId) {
PMML4ExecutionHelper helper = PMML4ExecutionHelperFactory.getExecutionHelper(modelName, kbase);
helper.addPossiblePackageName(modelPkgName);
PMMLRequestData request = new PMMLRequestData(correlationId, modelName);
variables.entrySet().forEach(entry -> {
request.addRequestParam(entry.getKey(), entry.getValue());
});
PMML4Result resultHolder = helper.submitRequest(request);
if ("OK".equals(resultHolder.getResultCode)) {
// extract result variables here
}
}
When you use the PMML4ExecutionHelper
, you do not need to specify the possible package names nor the RuleUnit
class as you would in a typical PMML model execution.
To construct a PMML4ExecutionHelper
class, you use the PMML4ExecutionHelperFactory
class to determine how instances of PMML4ExecutionHelper
are retrieved.
The following are the available PMML4ExecutionHelperFactory
class methods for constructing a PMML4ExecutionHelper
class:
- PMML4ExecutionHelperFactory methods for PMML assets in a KIE base
-
Use these methods when PMML assets have already been compiled and are being used from an existing KIE base:
public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase)

public static PMML4ExecutionHelper getExecutionHelper(String modelName, KieBase kbase, boolean includeMiningDataSources)
- PMML4ExecutionHelperFactory methods for PMML assets on the project classpath
-
Use these methods when PMML assets are on the project classpath. The
classPath
argument is the project classpath location of the PMML file:
public static PMML4ExecutionHelper getExecutionHelper(String modelName, String classPath, KieBaseConfiguration kieBaseConf)

public static PMML4ExecutionHelper getExecutionHelper(String modelName, String classPath, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)
- PMML4ExecutionHelperFactory methods for PMML assets in a byte array
-
Use these methods when PMML assets are in the form of a byte array:
public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf)

public static PMML4ExecutionHelper getExecutionHelper(String modelName, byte[] content, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)
- PMML4ExecutionHelperFactory methods for PMML assets in a
Resource
-
Use these methods when PMML assets are in the form of an
org.kie.api.io.Resource
object:
public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf)

public static PMML4ExecutionHelper getExecutionHelper(String modelName, Resource resource, KieBaseConfiguration kieBaseConf, boolean includeMiningDataSources)
The classpath, byte array, and resource PMML4ExecutionHelperFactory methods create a KIE container for the generated rules and Java classes. The container is used as the source of the KIE base that the RuleUnitExecutor uses. The container is not persisted. The PMML4ExecutionHelperFactory method for PMML assets that are already in a KIE base does not create a KIE container in this way.
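For example, the following is a minimal sketch using the classpath variant; the model name, PMML file location, package name, and input parameter are illustrative assumptions:
KieBaseConfiguration kieBaseConf = KieServices.Factory.get().newKieBaseConfiguration();

// "SampleScore" and "com/acme/SampleScore.pmml" are hypothetical; use your model name and PMML file location.
PMML4ExecutionHelper helper =
    PMML4ExecutionHelperFactory.getExecutionHelper("SampleScore", "com/acme/SampleScore.pmml", kieBaseConf);
helper.addPossiblePackageName("com.acme");

PMMLRequestData request = new PMMLRequestData("1234", "SampleScore");
request.addRequestParam(new ParameterInfo<>("1234", "age", Integer.class, 33));

PMML4Result resultHolder = helper.submitRequest(request);
if ("OK".equals(resultHolder.getResultCode())) {
  // extract result variables here
}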
7.4.2. Executing a PMML model using KIE Server
You can execute PMML models that have been deployed to KIE Server by sending the ApplyPmmlModelCommand
command to the configured KIE Server. When you use this command, a PMMLRequestData
object is sent to the KIE Server and a PMML4Result
result object is received as a reply. You can send PMML requests to KIE Server through the KIE Server REST API from a configured Java class or directly from a REST client.
-
KIE Server is installed and configured, including a known user name and credentials for a user with the
kie-server
role. For installation options, see Installation and Setup (Core and IDE). -
A KIE container is deployed in KIE Server in the form of a KJAR that includes the PMML model. For more information about project packaging, see Build, Deploy, Utilize and Run.
-
You have the container ID of the KIE container containing the PMML model.
-
In your client application, add the following dependencies to the relevant classpath of your Java project:
<!-- Required for the PMML compiler -->
<dependency>
  <groupId>org.drools</groupId>
  <artifactId>kie-pmml</artifactId>
  <version>${drools.version}</version>
</dependency>

<!-- Required for the KIE public API -->
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-api</artifactId>
  <version>${drools.version}</version>
</dependency>

<!-- Required for the KIE Server Java client API -->
<dependency>
  <groupId>org.kie.server</groupId>
  <artifactId>kie-server-client</artifactId>
  <version>${drools.version}</version>
</dependency>

<!-- Required if not using classpath KIE container -->
<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-ci</artifactId>
  <version>${drools.version}</version>
</dependency>
The
<version>
is the Maven artifact version for Drools currently used in your project (for example, 7.26.0.Final-redhat-00002). -
Create a KIE container from
classpath
orReleaseId
:
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId( "org.acme", "my-kjar", "1.0.0" );
KieContainer kieContainer = kieServices.newKieContainer( releaseId );
Alternative option:
KieServices kieServices = KieServices.Factory.get();
KieContainer kieContainer = kieServices.getKieClasspathContainer();
-
Create a class for sending requests to KIE Server and receiving responses:
public class ApplyScorecardModel {
  private static final ReleaseId releaseId =
          new ReleaseId("org.acme","my-kjar","1.0.0");
  private static final String containerId = "SampleModelContainer";
  private static KieCommands commandFactory;
  private static ClassLoader kjarClassLoader; (1)
  private RuleServicesClient serviceClient; (2)

  // Attributes specific to your class instance
  private String rankedFirstCode;
  private Double score;

  // Initialization of non-final static attributes
  static {
    commandFactory = KieServices.Factory.get().getCommands();

    // Specifications for kjarClassLoader, if used
    KieMavenRepository kmp = KieMavenRepository.getMavenRepository();
    File artifactFile = kmp.resolveArtifact(releaseId).getFile();
    if (artifactFile != null) {
      URL urls[] = new URL[1];
      try {
        urls[0] = artifactFile.toURI().toURL();
        kjarClassLoader = new KieURLClassLoader(urls,PMML4Result.class.getClassLoader());
      } catch (MalformedURLException e) {
        logger.error("Error getting classLoader for "+containerId);
        logger.error(e.getMessage());
      }
    } else {
      logger.warn("Did not find the artifact file for "+releaseId.toString());
    }
  }

  public ApplyScorecardModel(KieServicesConfiguration kieConfig) {
    KieServicesClient clientFactory = KieServicesFactory.newKieServicesClient(kieConfig);
    serviceClient = clientFactory.getServicesClient(RuleServicesClient.class);
  }
  ...
  // Getters and setters
  ...

  // Method for executing the PMML model on KIE Server
  public void applyModel(String occupation, int age) {
    PMMLRequestData input = new PMMLRequestData("1234","SampleModelName"); (3)
    input.addRequestParam(new ParameterInfo("1234","occupation",String.class,occupation));
    input.addRequestParam(new ParameterInfo("1234","age",Integer.class,age));

    CommandFactoryServiceImpl cf = (CommandFactoryServiceImpl)commandFactory;
    ApplyPmmlModelCommand command = (ApplyPmmlModelCommand) cf.newApplyPmmlModel(input); (4)

    ServiceResponse<ExecutionResults> results =
        serviceClient.executeCommandsWithResults(containerId, command); (5)

    if (results != null) { (6)
      PMML4Result resultHolder = (PMML4Result)results.getResult().getValue("results");
      if (resultHolder != null && "OK".equals(resultHolder.getResultCode())) {
        this.score = resultHolder.getResultValue("ScoreCard","score",Double.class).get();
        Map<String,Object> rankingMap =
            (Map<String,Object>)resultHolder.getResultValue("ScoreCard","ranking");
        if (rankingMap != null && !rankingMap.isEmpty()) {
          this.rankedFirstCode = rankingMap.keySet().iterator().next();
        }
      }
    }
  }
}
1 Defines the class loader if you did not include the KJAR in your client project dependencies.
2 Identifies the service client as defined in the configuration settings, including KIE Server REST API access credentials.
3 Initializes a PMMLRequestData object.
4 Creates an instance of the ApplyPmmlModelCommand.
5 Sends the command using the service client.
6 Retrieves the results of the executed PMML model. -
Execute the class instance to send the PMML invocation request to KIE Server.
Alternatively, you can use JMS and REST interfaces to send the
ApplyPmmlModelCommand
command to KIE Server. For REST requests, you use theApplyPmmlModelCommand
command as aPOST
request tohttp://SERVER:PORT/kie-server/services/rest/server/containers/instances/{containerId}
in JSON, JAXB, or XStream request format.
Example POST endpoint
http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer
Example JSON request body
{
  "commands": [
    {
      "apply-pmml-model-command": {
        "outIdentifier": null,
        "packageName": null,
        "hasMining": false,
        "requestData": {
          "correlationId": "123",
          "modelName": "SimpleScorecard",
          "source": null,
          "requestParams": [
            {
              "correlationId": "123",
              "name": "param1",
              "type": "java.lang.Double",
              "value": "10.0"
            },
            {
              "correlationId": "123",
              "name": "param2",
              "type": "java.lang.Double",
              "value": "15.0"
            }
          ]
        }
      }
    }
  ]
}
Example curl request with endpoint and body
curl -X POST "http://localhost:8080/kie-server/services/rest/server/containers/instances/SampleModelContainer" -H "accept: application/json" -H "content-type: application/json" -d "{ \"commands\": [ { \"apply-pmml-model-command\": { \"outIdentifier\": null, \"packageName\": null, \"hasMining\": false, \"requestData\": { \"correlationId\": \"123\", \"modelName\": \"SimpleScorecard\", \"source\": null, \"requestParams\": [ { \"correlationId\": \"123\", \"name\": \"param1\", \"type\": \"java.lang.Double\", \"value\": \"10.0\" }, { \"correlationId\": \"123\", \"name\": \"param2\", \"type\": \"java.lang.Double\", \"value\": \"15.0\" } ] } } } ]}"
Example JSON response
{
  "results" : [
    {
      "value" : {
        "org.kie.api.pmml.DoubleFieldOutput" : {
          "value" : 40.8,
          "correlationId" : "123",
          "segmentationId" : null,
          "segmentId" : null,
          "name" : "OverallScore",
          "displayValue" : "OverallScore",
          "weight" : 1.0
        }
      },
      "key" : "OverallScore"
    },
    {
      "value" : {
        "org.kie.api.pmml.PMML4Result" : {
          "resultVariables" : {
            "OverallScore" : {
              "value" : 40.8,
              "correlationId" : "123",
              "segmentationId" : null,
              "segmentId" : null,
              "name" : "OverallScore",
              "displayValue" : "OverallScore",
              "weight" : 1.0
            },
            "ScoreCard" : {
              "modelName" : "SimpleScorecard",
              "score" : 40.8,
              "holder" : {
                "modelName" : "SimpleScorecard",
                "correlationId" : "123",
                "voverallScore" : null,
                "moverallScore" : true,
                "vparam1" : 10.0,
                "mparam1" : false,
                "vparam2" : 15.0,
                "mparam2" : false
              },
              "enableRC" : true,
              "pointsBelow" : true,
              "ranking" : {
                "reasonCh1" : 5.0,
                "reasonCh2" : -6.0
              }
            }
          },
          "correlationId" : "123",
          "segmentationId" : null,
          "segmentId" : null,
          "segmentIndex" : 0,
          "resultCode" : "OK",
          "resultObjectName" : null
        }
      },
      "key" : "results"
    }
  ],
  "facts" : [ ]
}
8. Experimental Features
8.1. Declarative Agenda
Declarative Agenda is experimental, and all aspects are highly likely to change in the future. @Eager and @Direct are temporary annotations to control the behaviour of rules, which will also change as Declarative Agenda evolves. Annotations instead of attributes were chosen to reflect their experimental nature.
The declarative agenda allows you to use rules to control which other rules can fire and when. While this adds more overhead than the simple use of salience, the advantage is that it is declarative, and thus more readable and maintainable, and it should allow more use cases to be achieved in a simpler fashion.
This feature is off by default and must be explicitly enabled because it is considered highly experimental for the moment and is subject to change. It can be activated on a given KieBase by adding the declarativeAgenda='enabled' attribute to the corresponding kbase tag of the kmodule.xml file, as in the following example.
<kmodule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.drools.org/xsd/kmodule">
<kbase name="DeclarativeKBase" declarativeAgenda="enabled">
<ksession name="KSession">
</kbase>
</kmodule>
The basic idea is:
-
All rules' Matches are inserted into the WorkingMemory as facts, so you can now do pattern matching against a Match. The rule’s metadata and declarations are available as fields on the Match object.
-
You can use the kcontext.blockMatch( Match match ) for the current rule to block the selected match. Only when that rule becomes false will the match be eligible for firing. If it is already eligible for firing and is later blocked, it will be removed from the agenda until it is unblocked.
-
A match may have multiple blockers and a count is kept. All blockers must become false for the counter to reach zero and make the Match eligible for firing again.
-
kcontext.unblockAllMatches( Match match ) is an override that removes all blockers regardless of the count.
-
An activation may also be cancelled with cancelMatch, so that it never fires.
-
An unblocked Match is added to the Agenda and obeys normal salience, agenda groups, ruleflow groups etc.
-
The @Direct annotation allows a rule to fire as soon as it is matched. It is meant for rules that block/unblock matches; it is not desirable for these rules to have side effects that impact elsewhere.
void blockMatch(Match match);
void unblockAllMatches(Match match);
void cancelMatch(Match match);
Here is a basic example that blocks all matches from rules that have the metadata @department('sales'). They will stay blocked until the blockerAllSalesRules rule becomes false, i.e. "go2" is retracted.
rule rule1 @Eager @department('sales') when
$s : String( this == 'go1' )
then
list.add( kcontext.rule.name + ':' + $s );
end
rule rule2 @Eager @department('sales') when
$s : String( this == 'go1' )
then
list.add( kcontext.rule.name + ':' + $s );
end
rule blockerAllSalesRules @Direct @Eager when
$s : String( this == 'go2' )
$i : Match( department == 'sales' )
then
list.add( $i.rule.name + ':' + $s );
kcontext.blockMatch( $i );
end
In addition to annotating the blocking rule with @Direct, it is also necessary to annotate all the rules that could potentially be blocked by it with @Eager. This is because the Match has to be evaluated by the pattern matching of the blocking rule, so the potentially blocked rules cannot be evaluated lazily; otherwise there would be no Match to evaluate.
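As a rough usage sketch, assuming a KieSession created from the KieBase configured above and a global named list declared in the DRL (global java.util.List list), the blocking behaviour can be exercised like this:
import java.util.ArrayList;
import java.util.List;

import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.FactHandle;

KieSession ksession = kbase.newKieSession();
List<String> list = new ArrayList<>();
ksession.setGlobal( "list", list );

FactHandle go2 = ksession.insert( "go2" ); // activates blockerAllSalesRules
ksession.insert( "go1" );                  // rule1 and rule2 match, but their Matches get blocked
ksession.fireAllRules();                   // rule1 and rule2 do not fire while blocked

ksession.delete( go2 );                    // the blocker becomes false, the Matches are unblocked
ksession.fireAllRules();                   // rule1 and rule2 now fire and populate the list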
The following example shows how you can use the active property to count the number of active and inactive (already fired) matches.
rule rule1 @Eager @department('sales') when
$s : String( this == 'go1' )
then
list.add( kcontext.rule.name + ':' + $s );
end
rule rule2 @Eager @department('sales') when
$s : String( this == 'go1' )
then
list.add( kcontext.rule.name + ':' + $s );
end
rule rule3 @Eager @department('sales') when
$s : String( this == 'go1' )
then
list.add( kcontext.rule.name + ':' + $s );
end
rule countActivateInActive @Direct @Eager when
$s : String( this == 'go2' )
$active : Number( this == 1 ) from accumulate( $a : Match( department == 'sales', active == true ), count( $a ) )
$inActive : Number( this == 2 ) from accumulate( $a : Match( department == 'sales', active == false ), count( $a ) )
then
kcontext.halt( );
end
8.2. Browsing graphs of objects with OOPath
When the field of a fact is a collection, it is possible to bind and reason over all the items in that collection one by one using the from keyword. Nevertheless, when it is necessary to browse a graph of objects, the extensive use of the from conditional element may result in a verbose and cumbersome syntax like in the following example:
rule "Find all grades for Big Data exam" when
$student: Student( $plan: plan )
$exam: Exam( course == "Big Data" ) from $plan.exams
$grade: Grade() from $exam.grades
then /* RHS */ end
This example assumes a domain model consisting of a Student who has a Plan of study: a Plan can have zero or more Exams and an Exam zero or more Grades. Note that only the root object of the graph (the Student in this case) needs to be in the working memory in order to make this work.
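A minimal sketch of that domain model could look like the following (class and field names are taken from the rules in this section; remaining constructors and accessors are assumed, and each class would normally live in its own file):
import java.util.ArrayList;
import java.util.List;

public class Student {
    private Plan plan;
    public Plan getPlan() { return plan; }
    public void setPlan(Plan plan) { this.plan = plan; }
}

public class Plan {
    private final List<Exam> exams = new ArrayList<>();
    public List<Exam> getExams() { return exams; }
}

public class Exam {
    private String course;
    private double averageResult;                        // used by the backreference example below
    private final List<Grade> grades = new ArrayList<>();
    public String getCourse() { return course; }
    public void setCourse(String course) { this.course = course; }
    public double getAverageResult() { return averageResult; }
    public List<Grade> getGrades() { return grades; }
}

public class Grade {
    private int result;
    public int getResult() { return result; }
}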
By borrowing ideas from XPath, the syntax of the rule above can be made more succinct, as XPath has a compact notation for navigating through related elements while handling collections and filtering constraints.
This XPath-inspired notation has been called OOPath since it is explicitly intended to browse graphs of objects. Using this notation, the former example can be rewritten as follows:
rule "Find all grades for Big Data exam" when
Student( $grade: /plan/exams[course == "Big Data"]/grades )
then /* RHS */ end
Formally, the core grammar of an OOPath expression can be defined in EBNF notation as follows:
OOPExpr = [ID ( ":" | ":=" )] ( "/" | "?/" ) OOPSegment { ( "/" | "?/" | "." ) OOPSegment } ;
OOPSegment = ID ["#" ID] ["[" ( Number | Constraints ) "]"]
In practice, an OOPath expression has the following features:
-
It has to start with / or, in case of a completely non-reactive OOPath (see below), with ?/.
-
It can dereference a single property of an object with the . operator.
-
It can dereference a multi-valued property of an object using the / operator. If a collection is returned, it will iterate over the values in that collection.
-
While traversing referenced objects it can filter away those not satisfying one or more constraints, written as predicate expressions between square brackets like in:
Student( $grade: /plan/exams[ course == "Big Data" ]/grades )
-
During OOPath traversal it is also possible to downcast the traversed object to a subclass of the class declared in the generic collection. In this way, subsequent constraints can also safely access properties declared only in that subclass, like in:
Student( $grade: /plan/exams#AdvancedExam[ course == "Big Data", level > 3 ]/grades )
Objects that are not instances of the class specified in this inline cast will be automatically filtered away.
-
A constraint can also have a backreference to an object of the graph traversed before the currently iterated one. For example the following OOPath:
Student( $grade: /plan/exams/grades[ result > ../averageResult ] )
will match only the grades having a result above the average for the passed exam.
-
A constraint can also recursively be another OOPath, as follows:
Student( $exam: /plan/exams[ /grades[ result > 20 ] ] )
-
Items can also be accessed by their index by putting it between square brackets like in:
Student( $grade: /plan/exams[0]/grades )
To adhere to Java conventions, OOPath indexes are 0-based, while XPath indexes are 1-based.
8.2.1. Reactive and Non-Reactive OOPath
At the moment Drools is not able to react to updates involving a deeply nested object traversed during the evaluation of an OOPath expression. To make these objects reactive to changes, it is necessary to make them extend the class org.drools.core.phreak.ReactiveObject.
It is planned to overcome this limitation by implementing a mechanism that automatically instruments the classes belonging to a specific domain model.
Having extended that class, a domain object can notify the Drools engine when one of its fields has been updated by invoking the inherited method notifyModification, as in the following example:
public void setCourse(String course) {
this.course = course;
notifyModification(this);
}
In this way when using an OOPath like the following:
Student( $grade: /plan/exams[ course == "Big Data" ]/grades )
if an exam is moved to a different course, the rule is re-triggered and the list of grades matching the rule recomputed.
It is also possible to have reactivity only in one subpart of the OOPath as in:
Student( $grade: /plan/exams[ course == "Big Data" ]?/grades )
Here, using the ?/ separator instead of /, the Drools engine will react to a change made to an exam, or to an exam being added to the plan, but not to a new grade being added to an existing exam. Of course, if an OOPath chunk is not reactive, the remaining part of the OOPath from there to the end of the expression will be non-reactive as well.
For instance the following OOPath
Student( $grade: ?/plan/exams[ course == "Big Data" ]/grades )
will be completely non-reactive.
For this reason it is not allowed to use the ?/ separator more than once in the same OOPath, so an expression like:
Student( $grade: /plan?/exams[ course == "Big Data" ]?/grades )
will cause a compile time error.
8.3. Traits
WARNING : this feature is still experimental and subject to changes
The same fact may have multiple dynamic types which do not fit naturally in a class hierarchy. Traits allow you to model this very common scenario. A trait is an interface that can be applied to (and possibly removed from) an individual object at runtime. To create a trait, rather than a traditional bean, you have to declare it explicitly, as in the following example:
declare trait GoldenCustomer
// fields will map to getters/setters
code : String
balance : long
discount : int
maxExpense : long
end
At runtime, this declaration results in an interface, which can be used to write patterns, but can not be instantiated directly. In order to apply a trait to an object, we provide the new don keyword, which can be used as simply as this:
when
$c : Customer()
then
GoldenCustomer gc = don( $c, GoldenCustomer.class );
end
When a core object dons a trait, a proxy class is created on the fly (one such class will be generated lazily for each core/trait class combination). The proxy instance, which wraps the core object and implements the trait interface, is inserted automatically and will possibly activate other rules. An immediate advantage of declaring and using interfaces, getting the implementation proxy for free from the Drools engine, is that multiple inheritance hierarchies can be exploited when writing rules. The core classes, however, need not implement any of those interfaces statically, also facilitating the use of legacy classes as cores. In fact, any object can don a trait, provided that its class is declared as @Traitable. Notice that this annotation used to be optional, but is now mandatory.
import org.drools.core.factmodel.traits.Traitable;
declare Customer
@Traitable
code : String
balance : long
end
The only connection between core classes and trait interfaces is at the proxy level: a trait is not specifically tied to a core class. This means that the same trait can be applied to totally different objects. For this reason, the trait does not transparently expose the fields of its core object. So, when writing a rule using a trait interface, only the fields of the interface will be available, as usual. However, any field in the interface that corresponds to a core object field, will be mapped by the proxy class:
when
$o: OrderItem( $p : price, $code : custCode )
$c: GoldenCustomer( code == $code, $a : balance, $d: discount )
then
$c.setBalance( $a - $p*$d );
end
In this case, the code and balance would be read from the underlying Customer object. Likewise, the setBalance call will modify the underlying object, preserving strongly typed access to the data structures. A hard field must have the same name and type both in the core class and in all donned interfaces. The name is used to establish the mapping: if two fields have the same name, then they must also have the same declared type. The annotation @org.drools.core.factmodel.traits.Alias allows you to relax this restriction. If an @Alias is provided, its value string will be used to resolve mappings instead of the original field name. @Alias can be applied both to traits and core beans.
import org.drools.core.factmodel.traits.*;
declare trait GoldenCustomer
balance : long @Alias( "org.acme.foo.accountBalance" )
end
declare Person
@Traitable
name : String
savings : long @Alias( "org.acme.foo.accountBalance" )
end
when
GoldenCustomer( balance > 1000 ) // will react to new Person( 2000 )
then
end
More work is being done on relaxing this constraint (see the experimental section on "logical" traits later). Now, one might wonder what happens when a core class does NOT provide the implementation for a field defined in an interface. We call hard fields those trait fields which are also core fields and thus readily available, while we define as soft those fields which are NOT provided by the core class. Hidden fields, instead, are fields in the core class not exposed by the interface.
So, while hard field management is intuitive, there remains the problem of soft and hidden fields. Hidden fields are normally only accessible using the core class directly. However, the "fields" Map can be used on a trait interface to access a hidden field. If the field can’t be resolved, null will be returned. Notice that this feature is likely to change in the future.
when
$sc : GoldenCustomer( fields[ "age" ] > 18 ) // age is declared by the underlying core class, but not by GoldenCustomer
then
Soft fields, instead, are stored in a Map-like data structure that is specific to each core object and referenced by the proxy(es), so that they are effectively shared even when an object dons multiple traits.
when
$sc : GoldenCustomer( $c : code, // hard getter
$maxExpense : maxExpense > 1000 // soft getter
)
then
$sc.setDiscount( ... ); // soft setter
end
A core object also holds a reference to all its proxies, so that it is possible to track which type(s) have been added to an object, using a sort of dynamic "instanceof" operator, which we called isA. The operator can accept a String, a class literal or a list of class literals. In the latter case, the constraint is satisfied only if all the traits have been donned.
$sc : GoldenCustomer( $maxExpense : maxExpense > 1000,
this isA "SeniorCustomer", this isA [ NationalCustomer.class, OnlineCustomer.class ]
)
Eventually, the business logic may require that a trait is removed from a wrapped object. To this end, we provide two options. The first is a "logical don", which will result in a logical insertion of the proxy resulting from the traiting operation. The TMS will ensure that the trait is removed when its logical support is removed in the first place.
then
don( $x, // core object
Customer.class, // trait class
true // optional flag for logical insertion
)
The second is the use of the "shed" keyword, which causes the removal of any type that is a subtype (or equivalent) of the one passed as an argument. Notice that, as of version 5.5, shed only allows removing a single specific trait.
then
Thing t = shed( $x, GoldenCustomer.class ) // also removes any trait that is a subtype of GoldenCustomer
This operation returns another proxy implementing the org.drools.core.factmodel.traits.Thing interface, where the getFields() and getCore() methods are defined. Internally, in fact, all declared traits are generated to extend this interface (in addition to any others specified). This allows preserving the wrapper with the soft fields, which would otherwise be lost.
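For illustration, a consequence can keep using the returned wrapper to reach the core object and the surviving soft fields; a minimal sketch (Java code inside a rule consequence, reusing the $x binding from the example above):
Thing t = shed( $x, GoldenCustomer.class );
Object core = t.getCore();                          // the wrapped core object, e.g. the original Customer bean
java.util.Map<String, Object> soft = t.getFields(); // soft fields survive the shed operation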
A trait and its proxies are also correlated in another way. Starting from version 5.6, whenever a core object is "modified", its proxies are "modified" automatically as well, to allow trait-based patterns to react to potential changes in hard fields. Likewise, whenever a trait proxy (matched by a trait pattern) is modified, the modification is propagated to the core class and the other traits. Moreover, whenever a don operation is performed, the core object is also modified automatically, to reevaluate any "isA" operation which may be triggered.
Potentially, this may result in a high number of modifications, impacting performance (and correctness) heavily. So two solutions are currently implemented. First, whenever a core object is modified, only the most specific traits (in the sense of inheritance between trait interfaces) are updated and an internal blocking mechanism is in place to ensure that each potentially matching pattern is evaluated once and only once. So, in the following situation:
declare trait GoldenCustomer end
declare trait NationalGoldenCustomer extends GoldenCustomer end
declare trait SeniorGoldenCustomer extends GoldenCustomer end
a modification of an object that is at the same time a GoldenCustomer, a NationalGoldenCustomer and a SeniorGoldenCustomer would cause only the latter two proxies to actually be modified. The first would match any pattern for GoldenCustomer and NationalGoldenCustomer; the latter would instead be prevented from rematching GoldenCustomer, but would be allowed to match SeniorGoldenCustomer patterns. It is not necessary, instead, to modify the GoldenCustomer proxy since it is already covered by at least one other more specific trait.
The second method, left up to the user, is to mark traits as @PropertyReactive. Property reactivity is trait-enabled and takes into account the trait field mappings, so as to block unnecessary propagations.
8.3.1. Cascading traits
WARNING : This feature is extremely experimental and subject to changes
Normally, a hard field must be exposed with its original type by all traits donned by an object, to prevent situations such as
declare Person
@Traitable
name : String
id : String
end
declare trait Customer
id : String
end
declare trait Patient
id : long // Person can't don Patient, or an exception will be thrown
end
Should a Person don both Customer and Patient, the type of the hard field id would be ambiguous. However, consider the following example, where GoldenCustomers refer their best friends so that they become Customers as well:
declare Person
@Traitable( logical=true )
bestFriend : Person
end
declare trait Customer end
declare trait GoldenCustomer extends Customer
refers : Customer @Alias( "bestFriend" )
end
Aside from the @Alias, a Person-as-GoldenCustomer’s best friend might be compatible with the requirements of the trait GoldenCustomer, provided that they are some kind of Customer themselves. Marking a Person as "logically traitable" - i.e. adding the annotation @Traitable( logical = true ) - will instruct the Drools engine to try and preserve the logical consistency rather than throwing an exception due to a hard field with different type declarations (Person vs Customer). The following operations would then work:
Person p1 = new Person();
Person p2 = new Person();
p1.setBestFriend( p2 );
...
Customer c2 = don( p2, Customer.class );
...
GoldenCustomer gc1 = don( p1, GoldenCustomer.class );
...
p1.getBestFriend(); // returns p2
gc1.getRefers(); // returns c2, a Customer proxy wrapping p2
Notice that, by the time p1 becomes GoldenCustomer, p2 must have already become a Customer themselves, otherwise a runtime exception will be thrown since the very definition of GoldenCustomer would have been violated.
In some cases, however, one might want to infer, rather than verify, that p2 is a Customer by virtue of p1 being a GoldenCustomer. This modality can be enabled by marking Customer as "logical", using the annotation @org.drools.core.factmodel.traits.Trait( logical = true ). In this case, should p2 not be a Customer by the time p1 becomes a GoldenCustomer, it will automatically don the trait Customer to preserve the logical integrity of the system.
Notice that the annotation on the core class enables the dynamic type management for its fields, whereas the annotation on the traits determines whether they will be enforced as integrity constraints or cascaded dynamically.
import org.drools.factmodel.traits.*;
declare trait Customer
@Trait( logical = true )
end
Drools Integration
Integration Documentation
9. Drools commands
9.1. Runtime commands in Drools
Drools supports runtime commands that you can send to KIE Server for asset-related operations, such as executing all rules or inserting or retracting objects in a KIE session. The full list of supported runtime commands is located in the org.drools.core.command.runtime
package in your Drools instance.
In the KIE Server REST API, you use the global org.drools.core.command.runtime
commands or the rule-specific org.drools.core.command.runtime.rule
commands as the request body for POST
requests to http://SERVER:PORT/kie-server/services/rest/server/containers/instances/{containerId}
. For more information about using the KIE Server REST API, see KIE Server REST API for KIE containers and business assets.
In the KIE Server Java client API, you can embed these commands in your Java application along with the relevant Java client. For example, for rule-related commands, you use the RuleServicesClient
Java client with the embedded commands. For more information about using the KIE Server Java client API, see KIE Server Java client API for KIE containers and business assets.
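A rough sketch of that flow might look like the following (the server URL, credentials, container ID, and KIE session name below are placeholders, and Person is the org.drools.compiler.test.Person example class used in the next section):
import java.util.Arrays;

import org.drools.compiler.test.Person;
import org.kie.api.KieServices;
import org.kie.api.command.BatchExecutionCommand;
import org.kie.api.command.Command;
import org.kie.api.command.KieCommands;
import org.kie.api.runtime.ExecutionResults;
import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.RuleServicesClient;

// Connect to KIE Server (placeholder URL and credentials)
KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
        "http://localhost:8080/kie-server/services/rest/server", "user", "password");
config.setMarshallingFormat(MarshallingFormat.JSON);
KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config);
RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class);

// Build a batch of runtime commands and send it to a container (placeholder IDs)
KieCommands commands = KieServices.Factory.get().getCommands();
Command<?> insert = commands.newInsert(new Person("john", 25), "john");
Command<?> fireAll = commands.newFireAllRules("firedActivations");
BatchExecutionCommand batch =
        commands.newBatchExecution(Arrays.asList(insert, fireAll), "ksession1");

ServiceResponse<ExecutionResults> response =
        ruleClient.executeCommandsWithResults("my-container", batch);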
9.1.1. Sample runtime commands in Drools
The following are sample runtime commands that you can use with the KIE Server REST API or Java client API for asset-related operations in KIE Server:
-
BatchExecutionCommand
-
InsertObjectCommand
-
RetractCommand
-
ModifyCommand
-
GetObjectCommand
-
GetObjectsCommand
-
InsertElementsCommand
-
FireAllRulesCommand
-
QueryCommand
-
SetGlobalCommand
-
GetGlobalCommand
For the full list of supported runtime commands, see the org.drools.core.command.runtime
package in your Drools instance.
Each command in this section includes a REST request body example (JSON) for the KIE Server REST API and an embedded Java command example for the KIE Server Java client API. The Java examples use an object org.drools.compiler.test.Person
with the fields name
(String) and age
(Integer).
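For reference, a minimal sketch of that Person class could be the following (only the fields used by the examples are shown; the constructor and accessors are assumed):
package org.drools.compiler.test;

public class Person {

    private String name;
    private Integer age;

    public Person() {
    }

    public Person(String name, Integer age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
}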
- BatchExecutionCommand
-
Contains multiple commands to be executed together.
Table 16. Command attributes
- commands: List of commands to be executed. (Required)
- lookup: Sets the KIE session ID on which the commands will be executed. For stateless KIE sessions, this attribute is required. For stateful KIE sessions, this attribute is optional and, if not specified, the default KIE session is used. (Required for stateless KIE sessions, optional for stateful KIE sessions)
Example JSON request body{ "lookup": "ksession1", "commands": [ { "insert": { "object": { "org.drools.compiler.test.Person": { "name": "john", "age": 25 } } } }, { "fire-all-rules": { "max": 10, "out-identifier": "firedActivations" } } ] }
Example Java commandBatchExecutionCommand command = new BatchExecutionCommand(); command.setLookup("ksession1"); InsertObjectCommand insertObjectCommand = new InsertObjectCommand(new Person("john", 25)); FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand(); command.getCommands().add(insertObjectCommand); command.getCommands().add(fireAllRulesCommand); ksession.execute(command);
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": 0, "key": "firedActivations" } ], "facts": [] } } } ] }
- InsertObjectCommand
-
Inserts an object into the KIE session.
Table 17. Command attributes
- object: The object to be inserted. (Required)
- out-identifier: ID of the FactHandle created from the object insertion and added to the execution results. (Optional)
- return-object: Boolean to determine whether the object must be returned in the execution results (default: true). (Optional)
- entry-point: Entry point for the insertion. (Optional)
Example JSON request body{ "commands": [ { "insert": { "entry-point": "my stream", "object": { "org.drools.compiler.test.Person": { "age": 25, "name": "john" } }, "out-identifier": "john", "return-object": false } } ] }
Example Java commandCommand insertObjectCommand = CommandFactory.newInsert(new Person("john", 25), "john", false, null); ksession.execute(insertObjectCommand);
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [], "facts": [ { "value": { "org.drools.core.common.DefaultFactHandle": { "external-form": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } }, "key": "john" } ] } } } ] }
- RetractCommand
-
Retracts an object from the KIE session.
Table 18. Command attributes
- fact-handle: The FactHandle associated with the object to be retracted. (Required)
Example JSON request body{ "commands": [ { "retract": { "fact-handle": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } } ] }
Example Java command: UseFactHandleFromString
RetractCommand retractCommand = new RetractCommand(); retractCommand.setFactHandleFromString("123:234:345:456:567");
Example Java command: UseFactHandle
from inserted objectRetractCommand retractCommand = new RetractCommand(factHandle);
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container employee-rostering successfully called.", "result": { "execution-results": { "results": [], "facts": [] } } } ] }
- ModifyCommand
-
Modifies a previously inserted object in the KIE session.
Table 19. Command attributes
- fact-handle: The FactHandle associated with the object to be modified. (Required)
- setters: List of setters for object modifications. (Required)
Example JSON request body{ "commands": [ { "modify": { "fact-handle": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap", "setters": { "accessor": "age", "value": 25 } } } ] }
Example Java commandModifyCommand modifyCommand = new ModifyCommand(factHandle); List<Setter> setters = new ArrayList<Setter>(); setters.add(new SetterImpl("age", "25")); modifyCommand.setSetters(setters);
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container employee-rostering successfully called.", "result": { "execution-results": { "results": [], "facts": [] } } } ] }
- GetObjectCommand
-
Retrieves an object from a KIE session.
Table 20. Command attributes
- fact-handle: The FactHandle associated with the object to be retrieved. (Required)
- out-identifier: ID of the FactHandle created from the object insertion and added to the execution results. (Optional)
Example JSON request body{ "commands": [ { "get-object": { "fact-handle": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap", "out-identifier": "john" } } ] }
Example Java commandGetObjectCommand getObjectCommand = new GetObjectCommand(); getObjectCommand.setFactHandleFromString("123:234:345:456:567"); getObjectCommand.setOutIdentifier("john");
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": null, "key": "john" } ], "facts": [] } } } ] }
- GetObjectsCommand
-
Retrieves all objects from the KIE session as a collection.
Table 21. Command attributes
- object-filter: Filter for the objects returned from the KIE session. (Optional)
- out-identifier: Identifier to be used in the execution results. (Optional)
Example JSON request body{ "commands": [ { "get-objects": { "out-identifier": "objects" } } ] }
Example Java commandGetObjectsCommand getObjectsCommand = new GetObjectsCommand(); getObjectsCommand.setOutIdentifier("objects");
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": [ { "org.apache.xerces.dom.ElementNSImpl": "<?xml version=\"1.0\" encoding=\"UTF-16\"?>\n<object xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"person\"><age>25</age><name>john</name>\n <\/object>" }, { "org.drools.compiler.test.Person": { "name": "john", "age": 25 } } ], "key": "objects" } ], "facts": [] } } } ] }
- InsertElementsCommand
-
Inserts a list of objects into the KIE session.
Table 22. Command attributes
- objects: The list of objects to be inserted into the KIE session. (Required)
- out-identifier: ID of the FactHandle created from the object insertion and added to the execution results. (Optional)
- return-object: Boolean to determine whether the object must be returned in the execution results (default: true). (Optional)
- entry-point: Entry point for the insertion. (Optional)
Example JSON request body{ "commands": [ { "insert-elements": { "objects": [ { "containedObject": { "@class": "org.drools.compiler.test.Person", "age": 25, "name": "john" } }, { "containedObject": { "@class": "Person", "age": 35, "name": "sarah" } } ] } } ] }
Example Java commandList<Object> objects = new ArrayList<Object>(); objects.add(new Person("john", 25)); objects.add(new Person("sarah", 35)); Command insertElementsCommand = CommandFactory.newInsertElements(objects);
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [], "facts": [ { "value": { "org.drools.core.common.DefaultFactHandle": { "external-form": "0:4:436792766:-2127720265:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } }, "key": "john" }, { "value": { "org.drools.core.common.DefaultFactHandle": { "external-form": "0:4:436792766:-2127720266:4:DEFAULT:NON_TRAIT:java.util.LinkedHashMap" } }, "key": "sarah" } ] } } } ] }
- FireAllRulesCommand
-
Executes all rules in the KIE session.
Table 23. Command attributes
- max: Maximum number of rules to be executed. The default is -1, which does not put any restriction on execution. (Optional)
- out-identifier: ID to be used for retrieving the number of fired rules in the execution results. (Optional)
- agenda-filter: Agenda filter to be used for rule execution. (Optional)
Example JSON request body{ "commands" : [ { "fire-all-rules": { "max": 10, "out-identifier": "firedActivations" } } ] }
Example Java commandFireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand(); fireAllRulesCommand.setMax(10); fireAllRulesCommand.setOutIdentifier("firedActivations");
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": 0, "key": "firedActivations" } ], "facts": [] } } } ] }
- QueryCommand
-
Executes a query defined in the KIE base.
Table 24. Command attributes
- name: Query name. (Required)
- out-identifier: ID of the query results. The query results are added to the execution results with this identifier. (Optional)
- arguments: List of objects to be passed as query parameters. (Optional)
Example JSON request body{ "commands": [ { "query": { "name": "persons", "arguments": [], "out-identifier": "persons" } } ] }
Example Java commandQueryCommand queryCommand = new QueryCommand(); queryCommand.setName("persons"); queryCommand.setOutIdentifier("persons");
Example server response (JSON){ "type": "SUCCESS", "msg": "Container stateful-session successfully called.", "result": { "execution-results": { "results": [ { "value": { "org.drools.core.runtime.rule.impl.FlatQueryResults": { "idFactHandleMaps": { "type": "LIST", "componentType": null, "element": [ { "type": "MAP", "componentType": null, "element": [ { "value": { "org.drools.core.common.DisconnectedFactHandle": { "id": 1, "identityHashCode": 1809949690, "objectHashCode": 1809949690, "recency": 1, "object": { "org.kie.server.testing.Person": { "fullname": "John Doe", "age": 47 } }, "entryPointId": "DEFAULT", "traitType": "NON_TRAIT", "external-form": "0:1:1809949690:1809949690:1:DEFAULT:NON_TRAIT:org.kie.server.testing.Person" } }, "key": "$person" } ] } ] }, "idResultMaps": { "type": "LIST", "componentType": null, "element": [ { "type": "MAP", "componentType": null, "element": [ { "value": { "org.kie.server.testing.Person": { "fullname": "John Doe", "age": 47 } }, "key": "$person" } ] } ] }, "identifiers": { "type": "SET", "componentType": null, "element": [ "$person" ] } } }, "key": "persons" } ], "facts": [] } } }
- SetGlobalCommand
-
Sets an object to a global state.
Table 25. Command attributes
- identifier: ID of the global variable defined in the KIE base. (Required)
- object: Object to be set into the global variable. (Optional)
- out: Boolean to exclude the global variable you set from the execution results. (Optional)
- out-identifier: ID of the global execution result. (Optional)
Example JSON request body{ "commands": [ { "set-global": { "identifier": "helper", "object": { "org.kie.server.testing.Person": { "fullname": "kyle", "age": 30 } }, "out-identifier": "output" } } ] }
Example Java commandSetGlobalCommand setGlobalCommand = new SetGlobalCommand(); setGlobalCommand.setIdentifier("helper"); setGlobalCommand.setObject(new Person("kyle", 30)); setGlobalCommand.setOut(true); setGlobalCommand.setOutIdentifier("output");
Example server response (JSON){ "type": "SUCCESS", "msg": "Container stateful-session successfully called.", "result": { "execution-results": { "results": [ { "value": { "org.kie.server.testing.Person": { "fullname": "kyle", "age": 30 } }, "key": "output" } ], "facts": [] } } }
- GetGlobalCommand
-
Retrieves a previously defined global object.
Table 26. Command attributes
- identifier: ID of the global variable defined in the KIE base. (Required)
- out-identifier: ID to be used in the execution results. (Optional)
Example JSON request body{ "commands": [ { "get-global": { "identifier": "helper", "out-identifier": "helperOutput" } } ] }
Example Java commandGetGlobalCommand getGlobalCommand = new GetGlobalCommand(); getGlobalCommand.setIdentifier("helper"); getGlobalCommand.setOutIdentifier("helperOutput");
Example server response (JSON){ "response": [ { "type": "SUCCESS", "msg": "Container command-script-container successfully called.", "result": { "execution-results": { "results": [ { "value": null, "key": "helperOutput" } ], "facts": [] } } } ] }
10. CDI
10.1. Introduction
CDI, Contexts and Dependency Injection, is a Java specification that provides declarative controls and structures to an application. KIE can use it to automatically instantiate and bind things, without the need to use the programmatic API.
10.2. Annotations
@KContainer, @KBase and @KSession all support an optional 'name' attribute. CDI typically does "getOrCreate" when it injects: all injection points receive the same instance for the same set of annotations. The 'name' attribute forces a unique instance for each name, although all instances for that name will be identity equals.
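For example, two injection points that carry exactly the same annotations (including the same name) resolve to the very same instance, while a different name yields a separate instance of the same definition. A small sketch (the bean class name is hypothetical):
import javax.inject.Inject;
import org.kie.api.cdi.KSession;
import org.kie.api.runtime.KieSession;

public class NamedSessionsBean {

    @Inject
    @KSession(value = "ksession1", name = "ks1")
    private KieSession sessionA;

    @Inject
    @KSession(value = "ksession1", name = "ks1")
    private KieSession sessionB; // same annotations: sessionA == sessionB (identity equals)

    @Inject
    @KSession(value = "ksession1", name = "ks2")
    private KieSession sessionC; // different name: a distinct instance of the same ksession definition
}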
10.2.1. @KReleaseId
Used to bind an instance to a specific version of a KieModule. If kie-ci is on the classpath this will resolve dependencies automatically, downloading from remote repositories.
10.2.2. @KContainer
@KContainer is optional as it can be detected and added by the use of @Inject and variable type inference.
@Inject
private KieContainer kContainer;
@Inject
@KReleaseId(groupId = "jar1", artifactId = "art1", version = "1.1")
private KieContainer kContainer;
@Inject
@KContainer(name = "kc1")
@KReleaseId(groupId = "jar1", artifactId = "art1", version = "1.1")
private KieContainer kContainer;
10.2.3. @KBase
@KBase is optional as it can be detected and added by the use of @Inject and variable type inference.
The default argument, if given, maps to the value attribute and specifies the name of the KieBase from the kmodule.xml file.
@Inject
private KieBase kbase;
@Inject
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieBase kbase;
@Inject
@KBase("kbase1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieBase kbase1v10;
@Inject
@KBase("kbase1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.1")
private KieBase kbase1v11;
@Inject
@KBase(value="kbase1", name="kb1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieBase kbase1kb1;
@Inject
@KBase(value="kbase1", name="kb2")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieBase kbase1kb2;
10.2.4. @KSession for KieSession
@KSession is optional as it can be detected and added by the use of @Inject and variable type inference.
The default argument, if given, maps to the value attribute and specifies the name of the KieSession from the kmodule.xml file
@Inject
private KieSession ksession;
@Inject
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieSession ksession;
@Inject
@KSession("ksession1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieSession ksessionv10;
@Inject
@KSession("ksession1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.1")
private KieSession ksessionv11;
@Inject
@KSession(value="ksession1", name="ks1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieSession ksession1ks1;
@Inject
@KSession(value="ksession1", name="ks2")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private KieSession ksession1ks2;
10.2.5. @KSession for StatelessKieSession
@KSession is optional as it can be detected and added by the use of @Inject and variable type inference.
The default argument, if given, maps to the value attribute and specifies the name of the KieSession from the kmodule.xml file.
@Inject
private StatelessKieSession ksession;
@Inject
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private StatelessKieSession ksession;
@Inject
@KSession("ksession1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private StatelessKieSession ksessionv10;
@Inject
@KSession("ksession1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.1")
private StatelessKieSession ksessionv11;
@Inject
@KSession(value="ksession1", name="ks1")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private StatelessKieSession ksession1ks1;
@Inject
@KSession(value="ksession1", name="ks2")
@KReleaseId( groupId = "jar1", artifactId = "art1", version = "1.0")
private StatelessKieSession ksession1ks2;
10.3. API Example Comparison
CDI can inject instances into fields, o