JBoss.org Community Documentation

Drools Documentation

Version 6.4.0.Final


I. Welcome
1. Introduction
1.1. Introduction
1.2. Getting Involved
1.2.1. Sign up to jboss.org
1.2.2. Sign the Contributor Agreement
1.2.3. Submitting issues via JIRA
1.2.4. Fork GitHub
1.2.5. Writing Tests
1.2.6. Commit with Correct Conventions
1.2.7. Submit Pull Requests
1.3. Installation and Setup (Core and IDE)
1.3.1. Installing and using
1.3.2. Building from source
1.3.3. Eclipse
2. Release Notes
2.1. What is New and Noteworthy in Drools 6.4.0
2.1.1. Better Java 8 compatibility
2.1.2. More robust incremental compilation
2.1.3. Improved multi-threading behaviour
2.1.4. OOPath improvements
2.2. New and Noteworthy in KIE Workbench 6.4.0
2.2.1. New look and feel
2.2.2. Various UI improvements
2.2.3. New locales
2.2.4. Authoring - Imports - Consistent terminology
2.2.5. Disable automatic build
2.2.6. Support for SCP style git Repository URLs
2.2.7. Authoring - Duplicate GAV detection
2.2.8. New Execution Server Management User Interface
2.2.9. User and group management
2.3. What is New and Noteworthy in Drools 6.3.0
2.3.1. Browsing graphs of objects with OOPath
2.4. New and Noteworthy in KIE Workbench 6.3.0
2.4.1. Real Time Validation and Verification for the Decision Tables
2.4.2. Improved DRL Editor
2.4.3. Asset locking
2.4.4. Data Modeller Tool Windows
2.4.5. Generation of JPA enabled Data Models
2.4.6. Data Set Authoring
2.5. What is New and Noteworthy in Drools 6.2.0
2.5.1. Propagation modes
2.6. New and Noteworthy in KIE Workbench 6.2.0
2.6.1. Download Repository or Part of the Repository as a ZIP
2.6.2. Project Editor permissions
2.6.3. Unify validation style in Guided Decision Table Wizard.
2.6.4. Improved Wizards
2.6.5. Consistent behaviour of XLS, Guided Decision Tables and Guided Templates
2.6.6. Improved Metadata Tab
2.6.7. Improved Data Objects Editor
2.6.8. Execution Server Management UI
2.6.9. Social Activities
2.6.10. Contributors Dashboard
2.6.11. Package selector
2.6.12. Improved visual consistency
2.6.13. Guided Decision Tree Editor
2.6.14. Create Repository Wizard
2.6.15. Repository Structure Screen
2.7. New and Noteworthy in Integration 6.2.0
2.7.1. KIE Execution Server
2.8. What is New and Noteworthy in Drools 6.1.0
2.8.1. JMX support for KieScanner
2.9. New and Noteworthy in KIE Workbench 6.1.0
2.9.1. Data Modeler - round trip and source code preservation
2.9.2. Data Modeler - improved annotations
2.9.3. Standardization of the display of tabular data
2.9.4. Generation of modify(x) {...} blocks
2.10. New and Noteworthy in KIE API 6.0.0
2.10.1. New KIE name
2.10.2. Maven aligned projects and modules and Maven Deployment
2.10.3. Configuration and convention based projects
2.10.4. KieBase Inclusion
2.10.5. KieModules, KieContainer and KIE-CI
2.10.6. KieScanner
2.10.7. Hierarchical ClassLoader
2.10.8. Legacy API Adapter
2.10.9. KIE Documentation
2.11. What is New and Noteworthy in Drools 6.0.0
2.11.1. PHREAK - Lazy rule matching algorithm
2.11.2. Automatically firing timed rule in passive mode
2.11.3. Expression Timers
2.11.4. RuleFlowGroups and AgendaGroups are merged
2.12. New and Noteworthy in KIE Workbench 6.0.0
2.13. New and Noteworthy in Integration 6.0.0
2.13.1. CDI
2.13.2. Spring
2.13.3. Aries Blueprints
2.13.4. OSGi Ready
3. Compatibility matrix
II. KIE
4. KIE
4.1. Overview
4.1.1. Anatomy of Projects
4.1.2. Lifecycles
4.2. Build, Deploy, Utilize and Run
4.2.1. Introduction
4.2.2. Building
4.2.3. Deploying
4.2.4. Running
4.2.5. Installation and Deployment Cheat Sheets
4.2.6. Build, Deploy and Utilize Examples
4.3. Security
4.3.1. Security Manager
III. Drools Runtime and Language
5. Hybrid Reasoning
5.1. Artificial Intelligence
5.1.1. A Little History
5.1.2. Knowledge Representation and Reasoning
5.1.3. Rule Engines and Production Rule Systems (PRS)
5.1.4. Hybrid Reasoning Systems (HRS)
5.1.5. Expert Systems
5.1.6. Recommended Reading
5.2. Rete Algorithm
5.3. ReteOO Algorithm
5.4. PHREAK Algorithm
6. User Guide
6.1. The Basics
6.1.1. Stateless Knowledge Session
6.1.2. Stateful Knowledge Session
6.1.3. Methods versus Rules
6.1.4. Cross Products
6.2. Execution Control
6.2.1. Agenda
6.2.2. Rule Matches and Conflict Sets.
6.3. Inference
6.3.1. Bus Pass Example
6.4. Truth Maintenance with Logical Objects
6.4.1. Overview
6.5. Decision Tables in Spreadsheets
6.5.1. When to Use Decision Tables
6.5.2. Overview
6.5.3. How Decision Tables Work
6.5.4. Spreadsheet Syntax
6.5.5. Creating and integrating Spreadsheet based Decision Tables
6.5.6. Managing Business Rules in Decision Tables
6.5.7. Rule Templates
6.6. Logging
7. Running
7.1. KieRuntime
7.1.1. EntryPoint
7.1.2. RuleRuntime
7.1.3. StatefulRuleSession
7.2. Agenda
7.2.1. Conflict Resolution
7.2.2. AgendaGroup
7.2.3. ActivationGroup
7.2.4. RuleFlowGroup
7.3. Event Model
7.4. StatelessKieSession
7.4.1. Sequential Mode
7.5. Propagation modes
7.6. Commands and the CommandExecutor
8. Rule Language Reference
8.1. Overview
8.1.1. A rule file
8.1.2. What makes a rule
8.2. Keywords
8.3. Comments
8.3.1. Single line comment
8.3.2. Multi-line comment
8.4. Error Messages
8.4.1. Message format
8.4.2. Error Messages Description
8.4.3. Other Messages
8.5. Package
8.5.1. import
8.5.2. global
8.6. Function
8.7. Type Declaration
8.7.1. Declaring New Types
8.7.2. Declaring Metadata
8.7.3. Declaring Metadata for Existing Types
8.7.4. Parametrized constructors for declared types
8.7.5. Non Typesafe Classes
8.7.6. Accessing Declared Types from the Application Code
8.7.7. Type Declaration 'extends'
8.7.8. Traits
8.8. Rule
8.8.1. Rule Attributes
8.8.2. Timers and Calendars
8.8.3. Left Hand Side (when) syntax
8.8.4. The Right Hand Side (then)
8.8.5. Conditional named consequences
8.8.6. A Note on Auto-boxing and Primitive Types
8.9. Query
8.10. Domain Specific Languages
8.10.1. When to Use a DSL
8.10.2. DSL Basics
8.10.3. Adding Constraints to Facts
8.10.4. Developing a DSL
8.10.5. DSL and DSLR Reference
9. Complex Event Processing
9.1. Complex Event Processing
9.2. Drools Fusion
9.3. Event Semantics
9.4. Event Processing Modes
9.4.1. Cloud Mode
9.4.2. Stream Mode
9.5. Session Clock
9.5.1. Available Clock Implementations
9.6. Sliding Windows
9.6.1. Sliding Time Windows
9.6.2. Sliding Length Windows
9.6.3. Window Declaration
9.7. Streams Support
9.7.1. Declaring and Using Entry Points
9.8. Memory Management for Events
9.8.1. Explicit expiration offset
9.8.2. Inferred expiration offset
9.9. Temporal Reasoning
9.9.1. Temporal Operators
10. Experimental Features
10.1. Declarative Agenda
10.2. Browsing graphs of objects with OOPath
10.2.1. Reactive and Non-Reactive OOPath
IV. Drools Integration
11. Drools Commands
11.1. API
11.1.1. XStream
11.1.2. JSON
11.1.3. JAXB
11.2. Commands supported
11.2.1. BatchExecutionCommand
11.2.2. InsertObjectCommand
11.2.3. RetractCommand
11.2.4. ModifyCommand
11.2.5. GetObjectCommand
11.2.6. InsertElementsCommand
11.2.7. FireAllRulesCommand
11.2.8. StartProcessCommand
11.2.9. SignalEventCommand
11.2.10. CompleteWorkItemCommand
11.2.11. AbortWorkItemCommand
11.2.12. QueryCommand
11.2.13. SetGlobalCommand
11.2.14. GetGlobalCommand
11.2.15. GetObjectsCommand
12. CDI
12.1. Introduction
12.2. Annotations
12.2.1. @KReleaseId
12.2.2. @KContainer
12.2.3. @KBase
12.2.4. @KSession for KieSession
12.2.5. @KSession for StatelessKieSession
12.3. API Example Comparison
13. Integration with Spring
13.1. Important Changes for Drools 6.0
13.2. Integration with Drools Expert
13.2.1. KieModule
13.2.2. KieBase
13.2.3. IMPORTANT NOTE
13.2.4. KieSessions
13.2.5. Kie:ReleaseId
13.2.6. Kie:Import
13.2.7. Annotations
13.2.8. Event Listeners
13.2.9. Loggers
13.2.10. Defining Batch Commands
13.2.11. Persistence
13.2.12. Leveraging Other Spring Features
13.3. Integration with jBPM Human Task
13.3.1. How to configure Spring with jBPM Human task
14. Android Integration
14.1. Integration with Drools Expert
14.1.1. Pre-serialized Rules
14.1.2. KieContainer API with drools-compiler dependency
14.2. Integration with Roboguice
14.2.1. Pre-serialized Rules with Roboguice
14.2.2. KieContainer with drools-compiler dependency and Roboguice
15. Apache Camel Integration
15.1. Camel
16. Drools Camel Server
16.1. Introduction
16.2. Deployment
16.3. Configuration
16.3.1. REST/Camel Services configuration
17. JMX monitoring with RHQ/JON
17.1. Introduction
17.1.1. Enabling JMX monitoring in a Drools application
17.1.2. Installing and running the RHQ/JON plugin
V. Drools Workbench
18. Workbench (General)
18.1. Installation
18.1.1. War installation
18.1.2. Workbench data
18.1.3. System properties
18.1.4. Trouble shooting
18.2. Quick Start
18.2.1. Add repository
18.2.2. Add project
18.2.3. Define Data Model
18.2.4. Define Rule
18.2.5. Build and Deploy
18.3. Administration
18.3.1. Administration overview
18.3.2. Organizational unit
18.3.3. Repositories
18.4. Configuration
18.4.1. Basic user management
18.4.2. Roles
18.4.3. Restricting access to repositories
18.4.4. Command line config tool
18.5. Introduction
18.5.1. Log in and log out
18.5.2. Home screen
18.5.3. Workbench concepts
18.5.4. Initial layout
18.6. Changing the layout
18.6.1. Resizing
18.6.2. Repositioning
18.7. Authoring (General)
18.7.1. Artifact Repository
18.7.2. Asset Editor
18.7.3. Tags Editor
18.7.4. Project Explorer
18.7.5. Project Editor
18.7.6. Validation
18.7.7. Data Modeller
18.7.8. Data Sets
18.8. User and group management
18.8.1. Introduction
18.8.2. Security management providers
18.8.3. Installation and setup
18.8.4. Usage
18.9. Embedding Workbench In Your Application
18.10. Asset Management
18.10.1. Asset Management Overview
18.10.2. Managed vs Unmanaged Repositories
18.10.3. Asset Management Processes
18.10.4. Usage Flow
18.10.5. Repository Structure
18.10.6. Managed Repositories Operations
18.11. Execution Server Management UI
18.11.1. Server Templates
18.11.2. Container
18.11.3. Remote Server
19. Authoring Rule Assets
19.1. Creating a package
19.1.1. Empty package
19.1.2. Copy, Rename and Delete Packages
19.2. Business rules with the guided editor
19.2.1. Parts of the Guided Rule Editor
19.2.2. The "WHEN" (left-hand side) of a Rule
19.2.3. The "THEN" (right-hand side) of a Rule
19.2.4. Optional attributes
19.2.5. Pattern/Action toolbar
19.2.6. User driven drop down lists
19.2.7. Augmenting with DSL sentences
19.2.8. A more complex example:
19.3. Templates of assets/rules
19.3.1. Creating a rule template
19.3.2. Define the template
19.3.3. Defining the template data
19.3.4. Generated DRL
19.4. Guided decision tables (web based)
19.4.1. Types of decision table
19.4.2. Main components/concepts
19.4.3. Defining a web based decision table
19.4.4. Rule definition
19.4.5. Audit Log
19.4.6. Real Time Validation and Verification
19.5. Guided Decision Trees
19.5.1. The initial editor layout
19.5.2. First steps
19.5.3. Editing Data Object nodes
19.5.4. Editing Field Constraint nodes
19.5.5. Editing Action nodes
19.5.6. Managing the tree
19.6. Spreadsheet decision tables
19.7. Scorecards
19.7.1. (a) Setup Parameters
19.7.2. (b) Characteristics
19.8. Test Scenario
19.8.1. Knowledge Session Selector
19.8.2. Given Section
19.8.3. Expect Section
19.8.4. Global Section
19.8.5. New Input Section
19.9. Functions
19.10. DSL editor
19.11. Data enumerations (drop down list configurations)
19.11.1. Advanced enumeration concepts
19.12. Technical rules (DRL)
20. Workbench Integration
20.1. REST
20.1.1. Job calls
20.1.2. Repository calls
20.1.3. Organizational unit calls
20.1.4. Maven calls
20.1.5. REST summary
20.2. Keycloak SSO integration
20.2.1. Scenario
20.2.2. Install and setup a Keycloak server
20.2.3. Create and setup the demo realm
20.2.4. Install and setup jBPM Workbench
20.2.5. Securing workbench remote services via Keycloak
20.2.6. Execution server
20.2.7. Consuming remote services
21. Workbench High Availability
21.1.
21.1.1. VFS clustering
21.1.2. jBPM clustering
VI. KIE Server
22. KIE Execution Server
22.1. Overview
22.1.1. Glossary
22.2. Installing the KIE Server
22.2.1. Bootstrap switches
22.2.2. Installation details for different containers
22.3. Kie Server setup
22.3.1. Managed Kie Server
22.3.2. Unmanaged KIE Execution Server
22.4. Creating a Kie Container
22.5. Managing Containers
22.5.1. Starting a Container
22.5.2. Stopping and Deleting a Container
22.5.3. Updating a Container
22.6. Kie Server REST API
22.6.1. [GET] /
22.6.2. [POST] /
22.6.3. [GET] /containers
22.6.4. [GET] /containers/{id}
22.6.5. [PUT] /containers/{id}
22.6.6. [DELETE] /containers/{id}
22.6.7. [POST] /containers/instances/{id}
22.6.8. [GET] /containers/{id}/release-id
22.6.9. [POST] /containers/{id}/release-id
22.6.10. [GET] /containers/{id}/scanner
22.6.11. [POST] /containers/{id}/scanner
22.6.12. Native REST client for Execution Server
22.7. OptaPlanner REST API
22.7.1. [GET] /containers/{containerId}/solvers
22.7.2. [PUT] /containers/{containerId}/solvers/{solverId}
22.7.3. [GET] /containers/{containerId}/solvers/{solverId}
22.7.4. [POST] /containers/{containerId}/solvers/{solverId}
22.7.5. [GET] /containers/{containerId}/solvers/{solverId}/bestsolution
22.7.6. [DELETE] /containers/{containerId}/solvers/{solverId}
22.8. Controller REST API
22.8.1. [GET] /management/servers
22.8.2. [GET] /management/server/{id}
22.8.3. [PUT] /management/server/{id}
22.8.4. [DELETE] /management/server/{id}
22.8.5. [GET] /management/server/{id}/containers
22.8.6. [GET] /management/server/{id}/containers/{containerId}
22.8.7. [PUT] /management/server/{id}/containers/{containerId}
22.8.8. [DELETE] /management/server/{id}/containers/{containerId}
22.8.9. [POST] /management/server/{id}/containers/{containerId}/status/started
22.8.10. [POST] /management/server/{id}/containers/{containerId}/status/stopped
22.9. Kie Server Java Client API
22.9.1. Maven Configuration
22.9.2. Client Configuration
22.9.3. Server Response
22.9.4. Server Capabilities
22.9.5. Kie Containers
22.9.6. Managing Containers
22.9.7. Available Clients for the Decision Server
22.9.8. Sending commands to the server
22.9.9. Listing available business processes
VII. Drools Examples
23. Examples
23.1. Getting the Examples
23.2. Hello World
23.3. State Example
23.3.1. Understanding the State Example
23.4. Fibonacci Example
23.5. Banking Tutorial
23.6. Pricing Rule Decision Table Example
23.6.1. Executing the example
23.6.2. The decision table
23.7. Pet Store Example
23.8. Honest Politician Example
23.9. Sudoku Example
23.9.1. Sudoku Overview
23.9.2. Running the Example
23.9.3. Java Source and Rules Overview
23.9.4. Sudoku Validator Rules (validate.drl)
23.9.5. Sudoku Solving Rules (sudoku.drl)
23.10. Number Guess
23.11. Conway's Game Of Life
23.12. Invaders
23.12.1. Invaders1Main
23.12.2. Invaders2Main
23.12.3. Invaders3Main
23.12.4. Invaders4Main
23.12.5. Invaders5Main
23.12.6. Invaders6Main
23.12.7. Invaders4Main
23.13. Adventures with Drools
23.13.1. Using the game.
23.13.2. The code
23.14. Pong
23.15. Wumpus World
23.16. Miss Manners and Benchmarking
23.16.1. Introduction
23.16.2. In depth Discussion
23.16.3. Output Summary
23.17. Backward-Chaining
23.17.1. Backward-Chaining Systems
23.17.2. Cloning Transitive Closures
23.17.3. Defining a Query
23.17.4. Transitive Closure Example
23.17.5. Reactive Transitive Queries
23.17.6. Queries with Unbound Arguments
23.17.7. Multiple Unbound Arguments

Welcome and Release Notes


It's been a busy year since the last 5.x series release and so much has changed.

One of the biggest complaints during the 5.x series was the lack of a defined methodology for deployment. The mechanism used by Drools and jBPM was very flexible, but it was too flexible. A big focus for 6.0 was streamlining the build, deploy and loading (utilization) aspects of the system. Building and deploying now align with Maven, and utilization is now convention and configuration oriented, instead of programmatic, with sane defaults to minimise the configuration.

The workbench has been rebuilt from the ground up, inspired by Eclipse, to provide a flexible and better integrated solution, with panels and perspectives via plugins. The base workbench has been spun off into a standalone project called UberFire, so that anyone can now build high quality web based workbenches. In the longer term it will facilitate user-customised Drools and jBPM installations.

Git replaces JCR as the content repository, offering fast and scalable back-end storage for content with strong tooling support. There has been a refocus on simplicity, away from databases, with the aim of storing everything as text files; even metadata is just a file. The database is just there to provide fast indexing and search via Lucene. This allows repositories to be synced and published with established infrastructure, like GitHub.

jBPM has been dramatically beefed up, thanks to the Polymita acquisition, with human tasks, form builders, class modellers, execution servers and runtime management, all fully integrated into the new workbench.

OptaPlanner is now a top level project and getting full time attention.

A new umbrella name, KIE (Knowledge Is Everything), has been introduced to bring our related technologies together under one roof. It also acts as the core shared between our projects. So expect to see it a lot.

We are often asked "How do I get involved?". Luckily the answer is simple: just write some code and submit it :) There are no hoops you have to jump through or secret handshakes. We have a very minimal "overhead" that we do request to allow for scalable project development. Below we provide a general overview of the tools and "workflow" we request, along with some general advice.

If you contribute some good work, don't forget to blog about it :)

Drools provides an Eclipse-based IDE (which is optional), but at its core only Java 1.5 (Java SE) is required.

A simple way to get started is to download and install the Eclipse plug-in - this will also require the Eclipse GEF framework to be installed (see below, if you don't have it installed already). This will provide you with all the dependencies you need to get going: you can simply create a new rule project and everything will be done for you. Refer to the chapter on the Rule Workbench and IDE for detailed instructions on this. Installing the Eclipse plug-in is generally as simple as unzipping a file into your Eclipse plug-in directory.

Use of the Eclipse plug-in is not required. Rule files are just textual input (or spreadsheets as the case may be) and the IDE (also known as the Rule Workbench) is just a convenience. People have integrated the rule engine in many ways; there is no "one size fits all".

Alternatively, you can download the binary distribution, and include the relevant JARs in your project's classpath.

Drools is broken down into a few modules, some required during rule development/compiling, and some required at runtime. In many cases, people will simply want to include all the dependencies at runtime, and this is fine. It allows you to have the most flexibility. However, some may prefer to have their "runtime" stripped down to the bare minimum, as they will be deploying rules in binary form - this is also possible. The core runtime engine can be quite compact, and only requires a few hundred kilobytes across 3 JAR files.

The following is a description of the important libraries that make up JBoss Drools.

There are quite a few other dependencies which the above components require, most of which are for the drools-compiler, drools-jsr94 or drools-decisiontables module. Some key ones to note are "POI" which provides the spreadsheet parsing ability, and "antlr" which provides the parsing for the rule language itself.

NOTE: If you are using Drools in J2EE or servlet containers and you come across classpath issues with "JDT", you can switch to the Janino compiler by setting the system property "drools.compiler", for example: -Ddrools.compiler=JANINO.

For up to date info on dependencies in a release, consult the released POMs, which can be found on the Maven repository.

The JARs are also available in the central Maven repository (and also in the JBoss Maven repository).

If you use Maven, add KIE and Drools dependencies in your project's pom.xml like this:

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.drools</groupId>
        <artifactId>drools-bom</artifactId>
        <type>pom</type>
        <version>...</version>
        <scope>import</scope>
      </dependency>
      ...
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.kie</groupId>
      <artifactId>kie-api</artifactId>
    </dependency>
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-compiler</artifactId>
      <scope>runtime</scope>
    </dependency>
    ...
  </dependencies>

This is similar for Gradle, Ivy and Buildr. To identify the latest version, check the Maven repository.

If you're still using Ant (without Ivy), copy all the JARs from the download zip's binaries directory and manually verify that your classpath doesn't contain duplicate JARs.

The rule workbench (for Eclipse) requires that you have Eclipse 3.4 or greater, as well as Eclipse GEF 3.4 or greater. You can install it either by downloading the plug-in or using the update site.

Another option is to use the JBoss IDE, which comes with all the plug-in requirements pre-packaged, as well as a choice of other tools separate from rules. You can choose to install just the rules tooling from the "bundle" that JBoss IDE ships with.

Download the Drools Eclipse IDE plugin from the link below. Unzip the downloaded file in your main Eclipse folder (do not just copy the file there; extract it so that the feature and plugin JARs end up in the features and plugins directories of Eclipse) and (re)start Eclipse.

http://www.drools.org/download/download.html

To check that the installation was successful, try opening the Drools perspective: click the 'Open Perspective' button in the top right corner of your Eclipse window, select 'Other...' and pick the Drools perspective. If you cannot find the Drools perspective as one of the possible perspectives, the installation probably was unsuccessful. Check whether you executed each of the required steps correctly: Do you have the right version of Eclipse (3.4.x)? Do you have Eclipse GEF installed (check whether the org.eclipse.gef_3.4.*.jar exists in the plugins directory in your Eclipse root folder)? Did you extract the Drools Eclipse plugin correctly (check whether the org.drools.eclipse_*.jar exists in the plugins directory in your Eclipse root folder)? If you cannot find the problem, try contacting us (e.g. on IRC or on the user mailing list); more info can be found on our homepage:

http://www.drools.org/

A Drools runtime is a collection of JARs on your file system that represent one specific release of the Drools project JARs. To create a runtime, you must point the IDE to the release of your choice. If you want to create a new runtime based on the latest Drools project JARs included in the plugin itself, you can also easily do that. You are required to specify a default Drools runtime for your Eclipse workspace, but each individual project can override the default and select the appropriate runtime for that project specifically.

You are required to define one or more Drools runtimes using the Eclipse preferences view. To open up your preferences, in the menu Window select the Preferences menu item. A new preferences dialog should show all your preferences. On the left side of this dialog, under the Drools category, select "Installed Drools runtimes". The panel on the right should then show the currently defined Drools runtimes. If you have not yet defined any runtimes, it should look something like the figure below.

To define a new Drools runtime, click on the add button. A dialog as shown below should pop up, requiring the name for your runtime and the location on your file system where it can be found.

In general, you have two options: point the runtime to a folder on your file system that contains the JARs of the Drools release you want to use, or create a new runtime based on the Drools project JARs included in the plugin itself.

After clicking the OK button, the runtime should show up in your table of installed Drools runtimes, as shown below. Click the checkbox in front of the newly created runtime to make it the default Drools runtime. The default Drools runtime will be used as the runtime of all your Drools projects that have not selected a project-specific runtime.

You can add as many Drools runtimes as you need. For example, the screenshot below shows a configuration where three runtimes have been defined: a Drools 4.0.7 runtime, a Drools 5.0.0 runtime and a Drools 5.0.0.SNAPSHOT runtime. The Drools 5.0.0 runtime is selected as the default one.

Note that you will need to restart Eclipse if you changed the default runtime and you want to make sure that all the projects that are using the default runtime update their classpath accordingly.

Whenever you create a Drools project (using the New Drools Project wizard or by converting an existing Java project to a Drools project using the "Convert to Drools Project" action that is shown when you are in the Drools perspective and you right-click an existing Java project), the plugin will automatically add all the required JARs to the classpath of your project.

When creating a new Drools project, the plugin will automatically use the default Drools runtime for that project, unless you specify a project-specific one. You can do this in the final step of the New Drools Project wizard, as shown below, by deselecting the "Use default Drools runtime" checkbox and selecting the appropriate runtime in the drop-down box. If you click the "Configure workspace settings ..." link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there.

You can change the runtime of a Drools project at any time by opening the project properties (right-click the project and select Properties) and selecting the Drools category, as shown below. Check the "Enable project specific settings" checkbox and select the appropriate runtime from the drop-down box. If you click the "Configure workspace settings ..." link, the workspace preferences showing the currently installed Drools runtimes will be opened, so you can add new runtimes there. If you deselect the "Enable project specific settings" checkbox, it will use the default runtime as defined in your global preferences.


The general look and feel of the entire workbench has been updated to adopt PatternFly. The update brings a cleaner, lightweight and more consistent user experience throughout every screen, allowing users to focus on the data and their tasks by removing all unnecessary visual elements. Interactions and behaviors remain mostly unchanged, limiting the scope of this change to visual updates.


When the field of a fact is a collection it is possible to bind and reason over all the items in that collection one by one using the from keyword. Nevertheless, when it is required to browse a graph of objects, the extensive use of the from conditional element may result in a verbose and cumbersome syntax like in the following example:
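
A sketch of what such a rule might look like, assuming the Student/Plan/Exam/Grade model described below (the rule name and bindings are illustrative):

rule "Big Data exam grades" when
    $student : Student( $plan : plan )
    $exam    : Exam( course == "Big Data" ) from $plan.exams
    $grade   : Grade() from $exam.grades
then
    // process $grade
end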


This example assumes a domain model consisting of a Student who has a Plan of study: a Plan can have zero or more Exams and an Exam zero or more Grades. Note that only the root object of the graph (the Student in this case) needs to be in the working memory to make this work.

By borrowing ideas from XPath, this syntax can be made more succinct, as XPath has a compact notation for navigating through related elements while handling collections and filtering constraints. This XPath-inspired notation has been called OOPath since it is explicitly intended to browse graphs of objects. Using this notation the former example can be rewritten as follows:
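
A sketch of the same rule rewritten with OOPath (again, the rule name is illustrative):

rule "Big Data exam grades" when
    Student( $grade : /plan/exams{course == "Big Data"}/grades )
then
    // process $grade
end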


Formally, the core grammar of an OOPath expression can be defined in EBNF notation in this way.

OOPExpr = "/" OOPSegment { ( "/" | "." ) OOPSegment } ;
OOPSegment = [ID ( ":" | ":=" )] ID ["[" Number "]"] ["{" Constraints "}"];

In practice an OOPath expression has the following features:

  • It has to start with /.

  • It can dereference a single property of an object with the . operator

  • It can dereference a property of an object having multiple values (i.e. a collection) using the / operator. If a collection is returned, it will iterate over the values in the collection

  • While traversing referenced objects it can filter away those not satisfying one or more constraints, written as predicate expressions between curly brackets like in:

    Student( $grade: /plan/exams{course == "Big Data"}/grades )
  • Items can also be accessed by their index by putting it between square brackets like in:

    Student( $grade: /plan/exams[0]/grades )

    To adhere to Java conventions, OOPath indexes are 0-based, whereas XPath indexes are 1-based

The introduction of PHREAK as the default algorithm for the Drools engine made rule evaluation lazy. This new lazy behavior enabled a relevant performance boost but, in some very specific cases, breaks the semantics of a few Drools features.

More precisely, in some circumstances it is necessary to propagate the insertion of a new fact into the session immediately. For instance Drools allows a query to be executed in pull only (or passive) mode by prepending a '?' symbol to its invocation as in the following example:
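
A sketch of such a query and rule, assuming an illustrative query Q that joins an Integer with its String representation (this is the rule R discussed below):

query Q (Integer i)
    String( this == i.toString() )
end

rule R when
    $i : Integer()
    ?Q( $i; )
then
    System.out.println( $i );
end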


In this case, since the query is passive, it shouldn't react to the insertion of a String matching the join condition in the query itself. In other words, this sequence of commands

KieSession ksession = ...
ksession.insert(1);
ksession.insert("1");
ksession.fireAllRules();

shouldn't cause rule R to fire, because the String satisfying the query condition was inserted after the Integer and the passive query shouldn't react to this insertion. Conversely the rule should fire if the insertion sequence is inverted, because the insertion of the Integer, when the passive query can already be satisfied by an existing String, will trigger it.

Unfortunately the lazy nature of PHREAK doesn't allow the engine to make any distinction regarding the insertion sequence of the two facts, so the rule will fire in both cases. In circumstances like this it is necessary to evaluate the rule eagerly, as the old RETEOO-based engine did.

In other cases it is required that the propagation is eager, meaning that it is not immediate, but has to happen before the engine/agenda starts scheduled evaluations. For instance this is necessary when a rule has the no-loop or the lock-on-active attribute; in fact when this happens this propagation mode is automatically enforced by the engine.

To cover these use cases, and in all other situations where an immediate or eager rule evaluation is required, it is possible to declaratively specify so by annotating the rule itself with @Propagation(Propagation.Type), where Propagation.Type is an enumeration with 3 possible values:

  • IMMEDIATE means that the propagation is performed immediately.

  • EAGER means that the propagation is performed lazily, but is evaluated before scheduled evaluations.

  • LAZY means that the propagation is totally lazy; this is the default PHREAK behaviour.

This means that the following DRL:
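
// A sketch (not the original listing): query Q and rule R as above,
// with the @Propagation annotation added
rule R @Propagation(IMMEDIATE) when
    $i : Integer()
    ?Q( $i; )
then
    System.out.println( $i );
end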


will make rule R fire if and only if the Integer is inserted after the String, thus behaving in accordance with the semantics of the passive query.

A new KIE Execution Server was created with the goal of supporting the deployment of kjars and the automatic creation of REST endpoints for remote rules execution. This initial implementation supports provisioning and execution of kjars via REST without any glue code.

A user interface was also integrated into the workbench for remote provisioning. See the workbench's New&Noteworthy for details.

Figure 2.47. Kie Server interface

@Path("/server")
public interface KieServer {
    
    @GET
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getInfo();
    
    @POST
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response execute( CommandScript command );
    
    @GET
    @Path("containers")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response listContainers();
    
    @GET
    @Path("containers/{id}")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getContainerInfo( @PathParam("id") String id );
    
    @PUT
    @Path("containers/{id}")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response createContainer( @PathParam("id") String id, KieContainerResource container );
    
    @DELETE
    @Path("containers/{id}")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response disposeContainer( @PathParam("id") String id );
    
    @POST
    @Path("containers/{id}")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response execute( @PathParam("id") String id, String cmdPayload );
    
    @GET
    @Path("containers/{id}/release-id")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getReleaseId( @PathParam("id") String id);

    @POST
    @Path("containers/{id}/release-id")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response updateReleaseId( @PathParam("id") String id, ReleaseId releaseId );
    
    @GET
    @Path("containers/{id}/scanner")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response getScannerInfo( @PathParam("id") String id );
    
    @POST
    @Path("containers/{id}/scanner")
    @Consumes({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Response updateScanner( @PathParam("id") String id, KieScannerResource resource );
    
} 

It is now possible to define both the delay and interval of an interval timer as an expression instead of a fixed value. To do that it is necessary to declare the timer as an expression timer (indicated by "expr:") as in the following example:
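
A sketch of such a rule, using an illustrative declared Bean type that carries the delay and period values:

declare Bean
    delay  : String = "30s"
    period : long = 60000
end

rule "Expression timer example"
    timer ( expr: $d, $p )
when
    Bean( $d : delay, $p : period )
then
end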


The expressions, $d and $p in this case, can use any variable defined in the pattern matching part of the rule and can be any String that can be parsed into a time duration or any numeric value that will be internally converted into a long representing a duration expressed in milliseconds.

Both interval and expression timers can have 3 optional parameters named "start", "end" and "repeat-limit". When one or more of these parameters are used, the first part of the timer definition must be followed by a semicolon ';' and the parameters have to be separated by a comma ',' as in the following example:
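
For example (the dates and the limit are illustrative):

timer ( int: 30s 10s; start=3-JAN-2010, end=5-JAN-2010, repeat-limit=50 )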


The value for the start and end parameters can be a Date, a String representing a Date, a long or, more in general, any Number that will be transformed into a Java Date by applying the following conversion:

new Date( ((Number) n).longValue() )

Conversely the repeat-limit can only be an integer and it defines the maximum number of repetitions allowed by the timer. If both the end and the repeat-limit parameters are set, the timer will stop when the first of the two is reached.

Using the start parameter implies the definition of a phase for the timer, where the beginning of the phase is given by the start itself plus the delay, if any. In other words in this case the timed rule will be scheduled at times:

start + delay + n*period

for up to repeat-limit times and no later than the end timestamp (whichever comes first). For instance a rule having the following interval timer

timer ( int: 30s 1m; start="3-JAN-2010" )

will be scheduled at the 30th second of every minute after midnight of 3-JAN-2010. This also means that if, for example, you turn the system on at midnight of 3-FEB-2010 it won't be scheduled immediately, but will preserve the phase defined by the timer and so will be scheduled for the first time 30 seconds after midnight. If for some reason the system is paused (e.g. the session is serialized and then deserialized after a while) the rule will be scheduled only once to recover from missed activations (regardless of how many activations were missed) and subsequently will be scheduled again in phase with the timer.

The workbench has had a big overhaul using a new base project called UberFire. UberFire is inspired by Eclipse and provides a clean, extensible and flexible framework for the workbench. The end result is not only a richer experience for our end users, but we can now develop more rapidly with a clean component based architecture. If you like the Workbench experience you can use UberFire today to build your own web based dashboard and console efforts.

As well as the move to UberFire, the other biggest change is the move from JCR to Git; there is a utility project to help with migration. Git is the most scalable and powerful source repository bar none. JGit provides a solid OSS implementation for Git. This addresses the continued performance problems with the various JCR implementations, which would slow down once the number of files and versions became too high. There has been a big "low tech" drive to remove complexity. Everything is now stored as a file, including metadata. The database is only there to provide fast indexing and search. So importing and exporting is all standard Git, and external sites, like GitHub, can be used to exchange repositories.

In 5.x developers would work with their own source repository and then push to JCR via the team provider. This team provider was not full featured and not available outside Eclipse. Git enables our repository to work with any existing Git tool or team provider. While not yet supported in the UI (this will be added over time), it is possible to connect to the repository to tag, branch and restore things.


The Guvnor brand leaked too much from its intended role; for example authoring metaphors, like Decision Tables, were considered Guvnor components instead of Drools components. This wasn't helped by the monolithic project structure used in 5.x for Guvnor. In 6.0 Guvnor's focus has been narrowed to encapsulate the set of UberFire plugins that provide the basis for building a web based IDE, such as Maven integration for building and deploying, management of Maven repositories and activity notifications via inboxes. Drools and jBPM build workbench distributions using UberFire as the base, including a set of plugins, such as Guvnor, along with their own plugins for things like decision tables, guided editors, BPMN2 designer and human tasks.

The "Model Structure" diagram outlines the new project anatomy. The Drools workbench is called KIE-Drools-WB. KIE-WB is the uber workbench that combines all the Guvnor, Drools and jBPM plugins. The jBPM-WB is ghosted out, as it doesn't actually exist, being made redundant by KIE-WB.


Important

KIE Drools Workbench and KIE Workbench share a common set of components for generic workbench functionality such as Project navigation, Project definitions, Maven based Projects, Maven Artifact Repository. These common features are described in more detail throughout this documentation.

The two primary distributions consist of:

  • KIE Drools Workbench

    • Drools Editors, for rules and supporting assets.

    • jBPM Designer, for Rule Flow and supporting assets.

  • KIE Workbench

    • Drools Editors, for rules and supporting assets.

    • jBPM Designer, for BPMN2 and supporting assets.

    • jBPM Console, runtime and Human Task support.

    • jBPM Form Builder.

    • BAM.

Workbench highlights:

  • New flexible Workbench environment, with perspectives and panels.

  • New packaging and build system following KIE API.

    • Maven based projects.

    • Maven Artifact Repository replaces Global Area, with full dependency support.

  • New Data Modeller replaces the declarative Fact Model Editor, bringing authoring of Java classes to the authoring environment. Java classes are packaged into the project and can be used within rules, processes, etc., and externally in your own applications.

  • Virtual File System replaces JCR with a default Git based implementation.

    • Default Git based implementation supports remote operations.

    • External modifications appear within the Workbench.

  • Incremental Build system showing near real-time validation results of your project and assets.

The editors themselves are largely unchanged; however, of note, imports have moved from the package definition to individual editors, so you need only import the types used by an asset and not those for the package as a whole.

The process of researching an integration knowledge solution for Drools and jBPM had simply used the "droolsjbpm" group name. This name permeates GitHub accounts and Maven POMs. As the scope broadened and new projects were spun off, KIE, an acronym for Knowledge Is Everything, was chosen as the new group name. The KIE name is also used for the shared aspects of the system, such as the unified build, deploy and utilization.

KIE currently consists of the following subprojects:


OptaPlanner, a local search and optimization tool, has been spun off from Drools Planner and is now a top level project alongside Drools and jBPM. This was a natural evolution as OptaPlanner, while having strong Drools integration, has long been independent of Drools.

From the Polymita acquisition, along with other things, comes the Dashboard Builder, which provides powerful reporting capabilities. Dashboard Builder is currently a temporary name and after the 6.0 release a new name will be chosen. Dashboard Builder is completely independent of Drools and jBPM and will be used by many projects at JBoss, and hopefully outside of JBoss :)

UberFire is the new base workbench project, spun off from the ground up rewrite. UberFire provides Eclipse-like workbench capabilities, with panels and perspectives from plugins. The project is independent of Drools and jBPM and anyone can use it as a basis for building flexible and powerful workbenches. UberFire will be used for console and workbench development throughout JBoss.

It was determined that the Guvnor brand leaked too much from its intended role; for example authoring metaphors, like Decision Tables, were considered Guvnor components instead of Drools components. This wasn't helped by the monolithic project structure used in 5.x for Guvnor. In 6.0 Guvnor's focus has been narrowed to encapsulate the set of UberFire plugins that provide the basis for building a web based IDE, such as Maven integration for building and deploying, management of Maven repositories and activity notifications via inboxes. Drools and jBPM build workbench distributions using UberFire as the base, including a set of plugins, such as Guvnor, along with their own plugins for things like decision tables, guided editors, BPMN2 designer and human tasks. The Drools workbench is called Drools-WB. KIE-WB is the uber workbench that combines all the Guvnor, Drools and jBPM plugins. The jBPM-WB is ghosted out, as it doesn't actually exist, being made redundant by KIE-WB.

6.0 introduces a new configuration and convention approach to building knowledge bases, instead of using the programmatic builder approach in 5.x. The builder is still available to fall back on, as it's used for the tooling integration.

Building now uses Maven, and aligns with Maven practices. A KIE project or module is simply a Maven Java project or module with an additional metadata file, META-INF/kmodule.xml. The kmodule.xml file is the descriptor that selects resources into knowledge bases and configures those knowledge bases and sessions. There is also alternative XML support via Spring and OSGi BluePrints.

While standard Maven can build and package KIE resources, it will not provide validation at build time. There is a Maven plugin which is recommended to get build time validation. The plugin also generates many classes, making the runtime loading faster too.

The example project layout and Maven POM descriptor are illustrated in the screenshot below.


KIE uses defaults to minimise the amount of configuration, with an empty kmodule.xml being the simplest configuration. There must always be a kmodule.xml file, even if empty, as it is used for discovery of the JAR and its contents.

Maven can either 'mvn install' to deploy a KieModule to the local machine, where all other applications on the local machine can use it, or 'mvn deploy' to push the KieModule to a remote Maven repository. Building the application will pull in the KieModule and populate the local Maven repository in the process.


JARs can be deployed in one of two ways: either added to the classpath, like any other JAR in a Maven dependency listing, or dynamically loaded at runtime. KIE will scan the classpath to find all the JARs with a kmodule.xml in them. Each found JAR is represented by the KieModule interface. The terms classpath KieModule and dynamic KieModule are used to refer to the two loading approaches. While dynamic modules support side by side versioning, classpath modules do not. Further, once a module is on the classpath, no other version may be loaded dynamically.

Detailed references for the API are included in the next sections; the impatient can jump straight to the examples section, which is fairly self-explanatory on the different use cases.


A Kie project has the structure of a normal Maven project with the only peculiarity of including a kmodule.xml file defining in a declarative way the KieBases and KieSessions that can be created from it. This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Kie artifacts, such as DRL or Excel files, must be stored in the resources folder or in any other subfolder under it.

Since meaningful defaults have been provided for all configuration aspects, the simplest kmodule.xml file can contain just an empty kmodule tag like the following:
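
A minimal kmodule.xml can look like this:

<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>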


In this way the kmodule will contain one single default KieBase. All Kie assets stored under the resources folder, or any of its subfolders, will be compiled and added to it. To trigger the building of these artifacts it is enough to create a KieContainer for them.


For this simple case it is enough to create a KieContainer that reads the files to be built from the classpath:
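
A minimal sketch using the KIE API:

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();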


KieServices is the interface from which it is possible to access all the Kie building and runtime facilities:


In this way all the Java sources and the Kie resources are compiled and deployed into the KieContainer, which makes its contents available for use at runtime.

As explained in the former section, the kmodule.xml file is the place where it is possible to declaratively configure the KieBase(s) and KieSession(s) that can be created from a KIE project.

In particular a KieBase is a repository of all the application's knowledge definitions. It will contain rules, processes, functions, and type models. The KieBase itself does not contain data; instead, sessions are created from the KieBase into which data can be inserted and from which process instances may be started. Creating the KieBase can be heavy, whereas session creation is very light, so it is recommended that KieBase be cached where possible to allow for repeated session creation. However end-users usually shouldn't worry about it, because this caching mechanism is already automatically provided by the KieContainer.


Conversely the KieSession stores and executes on the runtime data. It is created from the KieBase or, more easily, can be created directly from the KieContainer if it has been defined in the kmodule.xml file.


The kmodule.xml allows you to define and configure one or more KieBases and, for each KieBase, all the different KieSessions that can be created from it, as shown by the following example:
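
A sketch of such a file, consistent with the names used in the rest of this section (the package names are illustrative):

<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <configuration>
    <property key="drools.evaluator.supersetOf"
              value="org.mycompany.SupersetOfEvaluatorDefinition"/>
  </configuration>
  <kbase name="KBase1" default="true" eventProcessingMode="cloud" packages="org.mycompany.pkg1">
    <ksession name="KSession2_1" type="stateful" default="true"/>
    <ksession name="KSession2_2" type="stateless" default="false"/>
  </kbase>
  <kbase name="KBase2" default="false" eventProcessingMode="stream" includes="KBase1" packages="org.mycompany.pkg2">
    <ksession name="KSession3_1" type="stateful"/>
  </kbase>
</kmodule>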


Here the <configuration> tag contains a list of key-value pairs that are the optional properties used to configure the KieBases building process. For instance this sample kmodule.xml file defines an additional custom operator named supersetOf and implemented by the org.mycompany.SupersetOfEvaluatorDefinition class.

Here 2 KieBases have been defined and it is possible to instantiate 2 different types of KieSessions from the first one, while only one from the second. A list of the attributes that can be defined on the kbase tag, together with their meaning and default values, follows:


Similarly all attributes of the ksession tag (except of course the name) have meaningful defaults. They are listed and described in the following table:


As outlined in the former kmodule.xml sample, it is also possible to declaratively create on each KieSession a file (or console) logger, one or more WorkItemHandlers, and listeners of 3 different types: ruleRuntimeEventListener, agendaEventListener and processEventListener.

Having defined a kmodule.xml like the one in the former sample, it is now possible to simply retrieve the KieBases and KieSessions from the KieContainer using their names.
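
A sketch of retrieving them by name:

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();

KieBase kBase1 = kContainer.getKieBase("KBase1");
KieSession kieSession1 = kContainer.newKieSession("KSession2_1");
StatelessKieSession kieSession2 = kContainer.newStatelessKieSession("KSession2_2");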


It has to be noted that, since KSession2_1 and KSession2_2 are of 2 different types (the first is stateful, while the second is stateless), it is necessary to invoke 2 different methods on the KieContainer according to their declared type. If the type of the KieSession requested from the KieContainer doesn't correspond with the one declared in the kmodule.xml file, the KieContainer will throw a RuntimeException. Also, since a KieBase and a KieSession have been flagged as default, it is possible to get them from the KieContainer without passing any name.
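
For example, a minimal sketch:

KieBase kBase1 = kContainer.getKieBase();
KieSession kieSession1 = kContainer.newKieSession();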


Since a Kie project is also a Maven project the groupId, artifactId and version declared in the pom.xml file are used to generate a ReleaseId that uniquely identifies this project inside your application. This allows creation of a new KieContainer from the project by simply passing its ReleaseId to the KieServices.
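
A sketch, with an illustrative GAV:

KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId("org.acme", "myartifact", "1.0.0");
KieContainer kieContainer = kieServices.newKieContainer(releaseId);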


It is also possible to define the KieBases and KieSessions belonging to a KieModule programmatically instead of via the declarative definition in the kmodule.xml file. The same programmatic API also allows explicitly adding the files containing the Kie artifacts instead of automatically reading them from the resources folder of your project. To do that it is necessary to create a KieFileSystem, a sort of virtual file system, and add all the resources contained in your project to it.


Like all other Kie core components you can obtain an instance of the KieFileSystem from the KieServices. The kmodule.xml configuration file must be added to the filesystem. This is a mandatory step. Kie also provides a convenient fluent API, implemented by the KieModuleModel, to programmatically create this file.


To do this in practice it is necessary to create a KieModuleModel from the KieServices, configure it with the desired KieBases and KieSessions, convert it to XML and add the XML to the KieFileSystem. This process is shown by the following example:
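A sketch of this process, using illustrative names and options:

KieServices kieServices = KieServices.Factory.get();
KieModuleModel kieModuleModel = kieServices.newKieModuleModel();

KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel("KBase1")
        .setDefault(true)
        .setEqualsBehavior(EqualityBehaviorOption.EQUALITY)
        .setEventProcessingMode(EventProcessingOption.STREAM);

KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel("KSession1")
        .setDefault(true)
        .setType(KieSessionModel.KieSessionType.STATEFUL)
        .setClockType(ClockTypeOption.get("realtime"));

KieFileSystem kfs = kieServices.newKieFileSystem();
kfs.writeKModuleXML(kieModuleModel.toXML());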


At this point it is also necessary to add to the KieFileSystem, through its fluent API, all the other Kie artifacts composing your project. These artifacts have to be added at the same location where a corresponding usual Maven project would keep them.
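For example (stringContainingAValidDRL and dtableFileStream are placeholders for a DRL String and a spreadsheet InputStream):

KieFileSystem kfs = kieServices.newKieFileSystem();
kfs.write("src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL)
   .write("src/main/resources/dtable.xls",
          kieServices.getResources().newInputStreamResource(dtableFileStream));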


This example shows that it is possible to add the Kie artifacts both as plain Strings and as Resources. In the latter case the Resources can be created by the KieResources factory, also provided by the KieServices. The KieResources provides many convenient factory methods to convert an InputStream, a URL, a File, or a String representing a path of your file system to a Resource that can be managed by the KieFileSystem.


Normally the type of a Resource can be inferred from the extension of the name used to add it to the KieFileSystem. However it is also possible to not follow the Kie conventions about file extensions and explicitly assign a specific ResourceType to a Resource, as shown below:
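For instance, a DRL stored in a file with a non-standard extension (drlStream is a placeholder for an InputStream):

kfs.write("src/main/resources/myDrl.txt",
          kieServices.getResources().newInputStreamResource(drlStream)
                     .setResourceType(ResourceType.DRL));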


Add all the resources to the KieFileSystem and build it by passing the KieFileSystem to a KieBuilder.
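In its simplest form (kfs obtained as above):

KieServices kieServices = KieServices.Factory.get();
KieBuilder kieBuilder = kieServices.newKieBuilder(kfs).buildAll();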


When the contents of a KieFileSystem are successfully built, the resulting KieModule is automatically added to the KieRepository. The KieRepository is a singleton acting as a repository for all the available KieModules.


After this it is possible to create through the KieServices a new KieContainer for that KieModule using its ReleaseId. However, since in this case the KieFileSystem doesn't contain any pom.xml file (it is possible to add one using the KieFileSystem.writePomXML method), Kie cannot determine the ReleaseId of the KieModule and assigns a default one to it. This default ReleaseId can be obtained from the KieRepository and used to identify the KieModule inside the KieRepository itself. The following example shows this whole process.
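A sketch of this process:

KieServices kieServices = KieServices.Factory.get();
kieServices.newKieBuilder(kfs).buildAll(); // adds the KieModule to the KieRepository

KieContainer kieContainer = kieServices.newKieContainer(
        kieServices.getRepository().getDefaultReleaseId());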


At this point it is possible to get KieBases and create new KieSessions from this KieContainer exactly in the same way as in the case of a KieContainer created directly from the classpath.

It is a best practice to check the compilation results. The KieBuilder reports compilation results of 3 different severities: ERROR, WARNING and INFO. An ERROR indicates that the compilation of the project failed; in that case no KieModule is produced and nothing is added to the KieRepository. WARNING and INFO results can be ignored, but are available for inspection.
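A typical check could look like this sketch:

KieBuilder kieBuilder = kieServices.newKieBuilder(kfs).buildAll();
Results results = kieBuilder.getResults();
if (results.hasMessages(Message.Level.ERROR)) {
    throw new IllegalStateException("Build errors:\n" + results.getMessages());
}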


The KieScanner allows continuous monitoring of your Maven repository to check whether a new release of a Kie project has been installed. When a new release is found, it can be automatically deployed in the KieContainer wrapping that project. The use of the KieScanner requires kie-ci.jar to be on the classpath.


A KieScanner can be registered on a KieContainer as in the following example.
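A sketch of registering and starting a KieScanner (the GAV is illustrative):

KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId("org.acme", "myartifact", "1.0-SNAPSHOT");
KieContainer kContainer = kieServices.newKieContainer(releaseId);
KieScanner kScanner = kieServices.newKieScanner(kContainer);

// Start the KieScanner polling the Maven repository every 10 seconds
kScanner.start(10000L);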


In this example the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it. If the KieScanner finds, in the Maven repository, an updated version of the Kie project used by that KieContainer it automatically downloads the new version and triggers an incremental build of the new project. At this point, existing KieBases and KieSessions under the control of the KieContainer will get automatically upgraded with it - specifically, those KieBases obtained with getKieBase() along with their related KieSessions, and any KieSession obtained directly with KieContainer.newKieSession() thus referencing the default KieBase. Additionally, from this moment on, all the new KieBases and KieSessions created from that KieContainer will use the new project version. Please notice however that any existing KieBase which was obtained via newKieBase() before the KieScanner upgrade, and any of its related KieSessions, will not get automatically upgraded; this is because KieBases obtained via newKieBase() are not under the direct control of the KieContainer.

The KieScanner will only pick up changes to deployed jars if it is using a SNAPSHOT, a version range, the LATEST, or the RELEASE setting. Fixed versions will not automatically update at runtime.

Maven supports a number of mechanisms to manage versioning and dependencies within applications. Modules can be published with specific version numbers, or they can use the SNAPSHOT suffix. Dependencies can specify version ranges to consume, or take advantage of the SNAPSHOT mechanism.

StackOverflow provides a very good description for this, which is reproduced below.

http://stackoverflow.com/questions/30571/how-do-i-tell-maven-to-use-the-latest-version-of-a-dependency

If you always want to use the newest version, Maven has two keywords you can use as an alternative to version ranges. You should use these options with care as you are no longer in control of the plugins/dependencies you are using.

When you depend on a plugin or a dependency, you can use a version value of LATEST or RELEASE. LATEST refers to the latest released or snapshot version of a particular artifact, the most recently deployed artifact in a particular repository. RELEASE refers to the last non-snapshot release in the repository. In general, it is not a best practice to design software which depends on a non-specific version of an artifact. If you are developing software, you might want to use RELEASE or LATEST as a convenience so that you don't have to update version numbers when a new release of a third-party library is released. When you release software, you should always make sure that your project depends on specific versions to reduce the chances of your build or your project being affected by a software release not under your control. Use LATEST and RELEASE with caution, if at all.

See the POM Syntax section of the Maven book for more details.

http://books.sonatype.com/mvnref-book/reference/pom-relationships-sect-pom-syntax.html

http://books.sonatype.com/mvnref-book/reference/pom-relationships-sect-project-dependencies.html

Here's an example illustrating the various options. In the Maven repository, com.foo:my-foo has the following metadata:

<metadata>
  <groupId>com.foo</groupId>
  <artifactId>my-foo</artifactId>
  <version>2.0.0</version>
  <versioning>
    <release>1.1.1</release>
    <versions>
      <version>1.0</version>
      <version>1.0.1</version>
      <version>1.1</version>
      <version>1.1.1</version>
      <version>2.0.0</version>
    </versions>
    <lastUpdated>20090722140000</lastUpdated>
  </versioning>
</metadata>

If a dependency on that artifact is required, you have the following options (other version ranges can be specified of course, just showing the relevant ones here): Declare an exact version (will always resolve to 1.0.1):

<version>[1.0.1]</version>

Declare an explicit version (will always resolve to 1.0.1 unless a collision occurs, when Maven will select a matching version):

<version>1.0.1</version>

Declare a version range for all 1.x (will currently resolve to 1.1.1):

<version>[1.0.0,2.0.0)</version>

Declare an open-ended version range (will resolve to 2.0.0):

<version>[1.0.0,)</version>

Declare the version as LATEST (will resolve to 2.0.0):

<version>LATEST</version>

Declare the version as RELEASE (will resolve to 1.1.1):

<version>RELEASE</version>

Note that by default your own deployments will update the "latest" entry in the Maven metadata, but to update the "release" entry, you need to activate the "release-profile" from the Maven super POM. You can do this with either "-Prelease-profile" or "-DperformRelease=true".

The maven settings.xml is used to configure Maven execution. Detailed instructions can be found at the Maven website:

http://maven.apache.org/settings.html

The settings.xml file can be located in 3 locations; the actual settings used are a merge of those 3 locations.

  • The Maven install: $M2_HOME/conf/settings.xml

  • A user's install: ${user.home}/.m2/settings.xml

  • Folder location specified by the system property kie.maven.settings.custom

The settings.xml is used to specify the location of remote repositories. It is important that you activate the profile that specifies the remote repository, typically this can be done using "activeByDefault":

<profiles>
  <profile>
    <id>profile-1</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    ...
  </profile>
</profiles>

Maven provides detailed documentation on using multiple remote repositories:

http://maven.apache.org/guides/mini/guide-multiple-repositories.html

The event package provides means to be notified of rule engine events, including rules firing, objects being asserted, etc. This allows separation of logging and auditing activities from the main part of your application (and the rules).

The KieRuntimeEventManager interface is implemented by the KieRuntime which provides two interfaces, RuleRuntimeEventManager and ProcessEventManager. We will only cover the RuleRuntimeEventManager here.


The RuleRuntimeEventManager allows for listeners to be added and removed, so that events for the working memory and the agenda can be listened to.


The following code snippet shows how a simple agenda listener is declared and attached to a session. It will print matches after they have fired.
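A sketch of such a listener:

ksession.addEventListener(new DefaultAgendaEventListener() {
    public void afterMatchFired(AfterMatchFiredEvent event) {
        super.afterMatchFired(event);
        System.out.println(event);
    }
});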


Drools also provides DebugRuleRuntimeEventListener and DebugAgendaEventListener which implement each method with a debug print statement. To print all Working Memory events, you add a listener like this:
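For example:

ksession.addEventListener(new DebugRuleRuntimeEventListener());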


All emitted events implement the KieRuntimeEvent interface which can be used to retrieve the actual KieRuntime the event originated from.


The events currently supported are:

  • MatchCreatedEvent

  • MatchCancelledEvent

  • BeforeMatchFiredEvent

  • AfterMatchFiredEvent

  • AgendaGroupPushedEvent

  • AgendaGroupPoppedEvent

  • ObjectInsertEvent

  • ObjectDeletedEvent

  • ObjectUpdatedEvent

  • ProcessCompletedEvent

  • ProcessNodeLeftEvent

  • ProcessNodeTriggeredEvent

  • ProcessStartEvent

KIE has the concept of stateful or stateless sessions. Stateful sessions have already been covered, which use the standard KieRuntime, and can be worked with iteratively over time. Stateless is a one-off execution of a KieRuntime with a provided data set. It may return some results, with the session being disposed at the end, prohibiting further iterative interactions. You can think of stateless as treating an engine like a function call with optional return results.

The foundation for this is the CommandExecutor interface, which both the stateful and stateless interfaces extend. Its execute method returns an ExecutionResults:
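The two interfaces have roughly the following shape (simplified sketch):

public interface CommandExecutor {
    <T> T execute(Command<T> command);
}

public interface ExecutionResults {
    Collection<String> getIdentifiers();

    Object getValue(String identifier);

    Object getFactHandle(String identifier);
}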



The CommandExecutor allows for commands to be executed on those sessions, the only difference being that the StatelessKieSession executes fireAllRules() at the end before disposing the session. The commands can be created using the CommandFactory. The Javadocs provide the full list of the allowed commands.

setGlobal and getGlobal are two commands relevant to both Drools and jBPM.

Set Global calls setGlobal underneath. The optional boolean indicates whether the command should return the global's value as part of the ExecutionResults. If true it uses the same name as the global name. A String can be used instead of the boolean, if an alternative name is desired.
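A sketch, assuming a simple Cheese fact class:

StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults = ksession.execute(
        CommandFactory.newSetGlobal("stilton", new Cheese("stilton"), true));
Cheese stilton = (Cheese) bresults.getValue("stilton");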


The getGlobal command allows an existing global to be returned. The second optional String argument allows for an alternative return name.
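A sketch, returning the global under an alternative name:

StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults = ksession.execute(
        CommandFactory.newGetGlobal("stilton", "cheese"));
Cheese stilton = (Cheese) bresults.getValue("cheese");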


All the above examples execute single commands. The BatchExecution represents a composite command, created from a list of commands. It will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful.

The StatelessKieSession will execute fireAllRules() automatically at the end. However the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.

Any command, in the batch, that has an out identifier set will add its results to the returned ExecutionResults instance. Let's look at a simple example to see how this works. The example presented includes commands from both Drools and jBPM, for the sake of illustration. They are covered in more detail in the Drools and jBPM specific sections.
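A sketch of a batch mixing Drools and jBPM commands (the Cheese fact class, process id and query name are illustrative):

StatelessKieSession ksession = kbase.newStatelessKieSession();

List<Command> cmds = new ArrayList<Command>();
cmds.add(CommandFactory.newInsert(new Cheese("stilton", 1), "stilton"));
cmds.add(CommandFactory.newStartProcess("process cheeses"));
cmds.add(CommandFactory.newQuery("cheeses", "cheeses"));
ExecutionResults bresults = ksession.execute(CommandFactory.newBatchExecution(cmds));

Cheese stilton = (Cheese) bresults.getValue("stilton");
QueryResults qresults = (QueryResults) bresults.getValue("cheeses");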


In the above example multiple commands are executed, two of which populate the ExecutionResults. The query command defaults to use the same identifier as the query name, but it can also be mapped to a different identifier.

All commands support XML and JSON marshalling using XStream, as well as JAXB marshalling. This is covered in section Commands API.

The StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertions or calling fireAllRules() from Java code; the act of calling execute() is a single-shot method that will internally instantiate a KieSession, add all the user data, execute user commands, call fireAllRules(), and then call dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are talked about in detail in their own section.


Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn.
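For example, assuming kbase and a fact collection are already in scope:

StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute(collection); // inserts each element, then fires all rules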


If this was done as a single Command it would be as follows:
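A sketch of the equivalent single command:

ksession.execute(CommandFactory.newInsertElements(collection));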


If you wanted to insert the collection itself, and the collection's individual elements, then CommandFactory.newInsert(collection) would do the job.

Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults.

StatelessKieSession supports globals, scoped in a number of ways. We cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.

The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals and query results can all be returned.
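A sketch combining the three kinds of "out" parameters (the Person class and the "Get People" query are illustrative):

StatelessKieSession ksession = kbase.newStatelessKieSession();

List<Command> cmds = new ArrayList<Command>();
cmds.add(CommandFactory.newSetGlobal("list1", new ArrayList(), true));
cmds.add(CommandFactory.newInsert(new Person("jon", 102), "person"));
cmds.add(CommandFactory.newQuery("Get People", "Get People"));

ExecutionResults results = ksession.execute(CommandFactory.newBatchExecution(cmds));
results.getValue("list1");      // returns the ArrayList
results.getValue("person");     // returns the inserted fact Person
results.getValue("Get People"); // returns the query as a QueryResults instance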


The KieMarshallers are used to marshal and unmarshal KieSessions.


An instance of the KieMarshallers can be retrieved from the KieServices. A simple example is shown below:
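A minimal sketch, with kbase and ksession already in scope:

ByteArrayOutputStream baos = new ByteArrayOutputStream();
Marshaller marshaller = KieServices.Factory.get().getMarshallers().newMarshaller(kbase);
marshaller.marshall(baos, ksession);
baos.close();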


However, with marshalling, you will need more flexibility when dealing with referenced user data. To achieve this use the ObjectMarshallingStrategy interface. Two implementations are provided, but users can implement their own. The two supplied strategies are IdentityMarshallingStrategy and SerializeMarshallingStrategy. SerializeMarshallingStrategy is the default, as shown in the example above, and it just calls the Serializable or Externalizable methods on a user instance. IdentityMarshallingStrategy creates an integer id for each user object and stores them in a Map, while the id is written to the stream. When unmarshalling it accesses the IdentityMarshallingStrategy map to retrieve the instance. This means that if you use the IdentityMarshallingStrategy, it is stateful for the life of the Marshaller instance and will create ids and keep references to all objects that it attempts to marshal. Below is the code to use an Identity Marshalling Strategy.
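A sketch of using the identity strategy:

ByteArrayOutputStream baos = new ByteArrayOutputStream();
KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();
ObjectMarshallingStrategy oms = kMarshallers.newIdentityMarshallingStrategy();
Marshaller marshaller =
        kMarshallers.newMarshaller(kbase, new ObjectMarshallingStrategy[]{ oms });
marshaller.marshall(baos, ksession);
baos.close();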


In most cases, a single strategy is insufficient. For added flexibility, the ObjectMarshallingStrategyAcceptor interface can be used. This Marshaller has a chain of strategies, and while reading or writing a user object it iterates the strategies asking if they accept responsibility for marshalling the user object. One of the provided implementations is ClassFilterAcceptor. This allows strings and wild cards to be used to match class names. The default is "*.*", so in the above example the Identity Marshalling Strategy is used which has a default "*.*" acceptor.

Assuming that we want to serialize all classes except for one given package, where we will use identity lookup, we could do the following:
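A sketch, where org.domain.pkg1 is the illustrative package using identity lookup:

KieMarshallers kMarshallers = KieServices.Factory.get().getMarshallers();

ObjectMarshallingStrategyAcceptor identityAcceptor =
        kMarshallers.newClassFilterAcceptor(new String[]{ "org.domain.pkg1.*" });
ObjectMarshallingStrategy identityStrategy =
        kMarshallers.newIdentityMarshallingStrategy(identityAcceptor);

Marshaller marshaller = kMarshallers.newMarshaller(kbase,
        new ObjectMarshallingStrategy[]{ identityStrategy,
                                         kMarshallers.newSerializeMarshallingStrategy() });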


Note that the acceptance checking order is in the natural order of the supplied elements.

Also note that if you are using scheduled matches (i.e. some of your rules use timers or calendars) they are marshallable only if, before you use it, you configure your KieSession to use a trackable timer job factory manager as follows:
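A sketch of this configuration:

KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption(TimerJobFactoryOption.get("trackable"));
KieSession ksession = kbase.newKieSession(ksconf, null);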


Long-term, out-of-the-box persistence with the Java Persistence API (JPA) is possible with Drools. It is necessary to have some implementation of the Java Transaction API (JTA) installed. For development purposes the Bitronix Transaction Manager is suggested, as it's simple to set up and works embedded, but for production use JBoss Transactions is recommended.


To use JPA, the Environment must be set with both the EntityManagerFactory and the TransactionManager. If a rollback occurs the ksession state is also rolled back, hence it is possible to continue to use it after a rollback. To load a previously persisted KieSession you'll need the id, as shown below:
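A sketch, using Bitronix for the TransactionManager (the persistence unit name is illustrative):

KieServices kieServices = KieServices.Factory.get();

Environment env = kieServices.newEnvironment();
env.set(EnvironmentName.ENTITY_MANAGER_FACTORY,
        Persistence.createEntityManagerFactory("emf-name"));
env.set(EnvironmentName.TRANSACTION_MANAGER,
        TransactionManagerServices.getTransactionManager());

// create a persistent KieSession and remember its id
KieSession ksession = kieServices.getStoreServices().newKieSession(kbase, null, env);
int sessionId = ksession.getId();

// ... later, reload the persisted session by id
KieSession loaded =
        kieServices.getStoreServices().loadKieSession(sessionId, kbase, null, env);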


To enable persistence several classes must be added to your persistence.xml, as in the example below:
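A sketch of the relevant persistence.xml fragment; the dialect, provider and data source name are illustrative:

<persistence-unit name="org.drools.persistence.jpa" transaction-type="JTA">
  <provider>org.hibernate.ejb.HibernatePersistence</provider>
  <jta-data-source>jdbc/BitronixJTADataSource</jta-data-source>
  <class>org.drools.persistence.info.SessionInfo</class>
  <class>org.drools.persistence.info.WorkItemInfo</class>
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
    <property name="hibernate.transaction.manager_lookup_class"
              value="org.hibernate.transaction.BTMTransactionManagerLookup"/>
  </properties>
</persistence-unit>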


The JDBC JTA data source would have to be configured first. Bitronix provides a number of ways of doing this, and its documentation should be consulted for details. For a quick start, here is the programmatic approach:
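A sketch using an in-memory H2 database (the unique name must match the jta-data-source above):

PoolingDataSource ds = new PoolingDataSource();
ds.setUniqueName("jdbc/BitronixJTADataSource");
ds.setClassName("org.h2.jdbcx.JdbcDataSource");
ds.setMaxPoolSize(3);
ds.getDriverProperties().put("URL", "jdbc:h2:mem:mydb");
ds.getDriverProperties().put("user", "sa");
ds.getDriverProperties().put("password", "");
ds.init();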


Bitronix also provides a simple embedded JNDI service, ideal for testing. To use it, add a jndi.properties file to your META-INF folder and add the following line to it:
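java.naming.factory.initial=bitronix.tm.jndi.BitronixInitialContextFactory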


The best way to learn the new build system is by example. The source project "drools-examples-api" contains a number of examples, and can be found at GitHub:

https://github.com/droolsjbpm/drools/tree/6.0.x/drools-examples-api

Each example is described below; the order starts with the simplest (most of the options are defaulted) and works its way up to more complex use cases.

The Deploy use cases shown below all involve mvn install. Remote deployment of JARs in Maven is well covered in Maven literature. Utilize refers to the initial act of loading the resources and providing access to the KIE runtimes, whereas Run refers to the act of interacting with those runtimes.

The kmodule.xml will produce a single named KieBase, 'kbase2', that includes all files found under the resources path, be it DRL, BPMN2, XLS etc. Further it will include all the resources found from the KieBase 'kbase1', due to the use of the 'includes' attribute. KieSession 'ksession2' is associated with that KieBase and can be created by name.
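A kmodule.xml for this could be as simple as the following sketch:

<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <kbase name="kbase2" includes="kbase1">
    <ksession name="ksession2"/>
  </kbase>
</kmodule>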


This example requires that the previous example, 'named-kiesession', is built and installed to the local Maven repository first. Once installed it can be included as a dependency, using the standard Maven <dependencies> element.
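For example, along these lines (the version is illustrative):

<dependency>
  <groupId>org.drools</groupId>
  <artifactId>named-kiesession</artifactId>
  <version>6.0.0-SNAPSHOT</version>
</dependency>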


Once 'named-kiesession' is built and installed, this example can be built and installed as normal. Again, the act of installing will force the unit tests to run, demonstrating the use case.


ks.getKieClasspathContainer() returns the KieContainer that contains the KieBases deployed onto the environment classpath. This time the KieSession uses the name 'ksession2'. You do not need to look up the KieBase first, as it knows which KieBase 'ksession2' is associated with. Notice two rules fire this time, showing that KieBase 'kbase2' has included the resources from the dependency KieBase 'kbase1'.
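A sketch of this run (the out global and the Message class come from the example project):

KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();

KieSession kSession = kContainer.newKieSession("ksession2");
kSession.setGlobal("out", out);

kSession.insert(new Message("Dave", "Hello, HAL. Do you read me, HAL?"));
kSession.fireAllRules();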


The kmodule.xml produces 6 different named KieBases. 'kbase1' includes all resources from the KieModule. The other KieBases include resources from other selected folders, via the 'packages' attribute. Note the use of the wildcard '*' to select a package and all packages below it.
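As a sketch, the 'packages' attribute can be used along these lines (the package names are illustrative; consult the kbase attribute table for the exact wildcard semantics):

<kbase name="kbase2" packages="org.some.pkg"/>
<kbase name="kbase3" packages="org.some.pkg, org.other.pkg"/>
<kbase name="kbase4" packages="org.some.pkg.*"/>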



Only part of the example is included below, as there is a test method per KieSession, but each one is a repetition of the other, with different list expectations.


The pom.xml must include kie-ci as a dependency, to ensure Maven is available at runtime. As this uses Maven under the hood you can also use the standard Maven settings.xml file.
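For example (using this documentation's version):

<dependency>
  <groupId>org.kie</groupId>
  <artifactId>kie-ci</artifactId>
  <version>6.4.0.Final</version>
</dependency>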



In the previous examples the classpath KieContainer was used. This example creates a dynamic KieContainer as specified by the ReleaseId. The ReleaseId uses Maven conventions for group id, artifact id and version. It also obeys LATEST and SNAPSHOT for versions.
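A sketch (the GAV refers to the earlier example project):

KieServices ks = KieServices.Factory.get();

// Resolve the KieModule from the Maven repository via its GAV
ReleaseId rid = ks.newReleaseId("org.drools", "named-kiesession", "6.0.0-SNAPSHOT");
KieContainer kContainer = ks.newKieContainer(rid);

KieSession kSession = kContainer.newKieSession("ksession1");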



This programmatically builds a KieModule. It populates the model that represents the ReleaseId and kmodule.xml, as well as adding the relevant resources. A pom.xml is generated from the ReleaseId.

Example 4.62. Utilize and Run - Java

KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();

Resource ex1Res = ks.getResources().newFileSystemResource(getFile("named-kiesession"));
Resource ex2Res = ks.getResources().newFileSystemResource(getFile("kiebase-inclusion"));

ReleaseId rid = ks.newReleaseId("org.drools", "kiemodulemodel-example", "6.0.0-SNAPSHOT");
kfs.generateAndWritePomXML(rid);

KieModuleModel kModuleModel = ks.newKieModuleModel();
kModuleModel.newKieBaseModel("kiemodulemodel")
            .addInclude("kiebase1")
            .addInclude("kiebase2")
            .newKieSessionModel("ksession6");

kfs.writeKModuleXML(kModuleModel.toXML());
kfs.write("src/main/resources/kiemodulemodel/HAL6.drl", getRule());

KieBuilder kb = ks.newKieBuilder(kfs);
kb.setDependencies(ex1Res, ex2Res);
kb.buildAll(); // kieModule is automatically deployed to KieRepository if successfully built.
if (kb.getResults().hasMessages(Level.ERROR)) {
    throw new RuntimeException("Build Errors:\n" + kb.getResults().toString());
}

KieContainer kContainer = ks.newKieContainer(rid);

KieSession kSession = kContainer.newKieSession("ksession6");
kSession.setGlobal("out", out);

Object msg1 = createMessage(kContainer, "Dave", "Hello, HAL. Do you read me, HAL?");
kSession.insert(msg1);
kSession.fireAllRules();

Object msg2 = createMessage(kContainer, "Dave", "Open the pod bay doors, HAL.");
kSession.insert(msg2);
kSession.fireAllRules();

Object msg3 = createMessage(kContainer, "Dave", "What's the problem?");
kSession.insert(msg3);
kSession.fireAllRules();

The KIE engine is a platform for the modelling and execution of business behavior, using a multitude of declarative abstractions and metaphors, such as rules, processes and decision tables.

Many times, the authoring of these metaphors is done by third party groups, be it a different group inside the same company, a group from a partner company, or even anonymous third parties on the internet.

Rules and Processes are designed to execute arbitrary code in order to do their job, but in such cases it might be necessary to constrain what they can do. For instance, it is unlikely a rule should be allowed to create a classloader (which could open the system to attack) and certainly it should not be allowed to make a call to System.exit().

The Java Platform provides a very comprehensive and well defined security framework that allows users to define policies for what a system can do. The KIE platform leverages that framework and allows application developers to define a specific policy to be applied to any execution of user provided code, be it in rules, processes, work item handlers, etc.

Rules and processes can run with very restricted permissions, but the engine itself needs to perform many complex operations in order to work. For example, it needs to create classloaders, read system properties, and access the file system.

Once a security manager is installed, though, it will apply restrictions to all the code executing in the JVM according to the defined policy. For that reason, KIE allows the user to define two different policy files: one for the engine itself and one for the assets deployed into and executed by the engine.

One easy way to set up the environment is to give the engine itself a very permissive policy, while providing a constrained policy for rules and processes.

Policy files follow the standard policy file syntax as described in the Java documentation. For more details, see:

http://docs.oracle.com/javase/6/docs/technotes/guides/security/PolicyFiles.html#FileSyntax

A permissive policy file for the engine can look like the following:
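For instance, a policy along these lines grants the engine everything:

grant {
    permission java.security.AllPermission;
};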


An example security policy for rules could be:
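A minimal sketch of a constrained policy for rules; real policies will usually need more permissions:

grant {
    permission java.util.PropertyPermission "*", "read";
    permission java.lang.RuntimePermission "accessDeclaredMembers";
};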


Please note that depending on what the rules and processes are supposed to do, many more permissions might need to be granted, like accessing files in the filesystem, databases, etc.

In order to use these policy files, all that is necessary is to execute the application with these files as parameters to the JVM. Three parameters are required:

  • -Djava.security.manager : enables the security manager

  • -Djava.security.policy=<policy file> : sets the global policy file, including the permissions for the engine itself

  • -Dkie.security.policy=<policy file> : sets the policy file applied to the assets (rules and processes) executed by the engine


For instance:

java -Djava.security.manager -Djava.security.policy=global.policy -Dkie.security.policy=rules.policy foo.bar.MyApp

Drools is a powerful Hybrid Reasoning System.

Table of Contents

5. Hybrid Reasoning
5.1. Artificial Intelligence
5.1.1. A Little History
5.1.2. Knowledge Representation and Reasoning
5.1.3. Rule Engines and Production Rule Systems (PRS)
5.1.4. Hybrid Reasoning Systems (HRS)
5.1.5. Expert Systems
5.1.6. Recommended Reading
5.2. Rete Algorithm
5.3. ReteOO Algorithm
5.4. PHREAK Algorithm
6. User Guide
6.1. The Basics
6.1.1. Stateless Knowledge Session
6.1.2. Stateful Knowledge Session
6.1.3. Methods versus Rules
6.1.4. Cross Products
6.2. Execution Control
6.2.1. Agenda
6.2.2. Rule Matches and Conflict Sets.
6.3. Inference
6.3.1. Bus Pass Example
6.4. Truth Maintenance with Logical Objects
6.4.1. Overview
6.5. Decision Tables in Spreadsheets
6.5.1. When to Use Decision Tables
6.5.2. Overview
6.5.3. How Decision Tables Work
6.5.4. Spreadsheet Syntax
6.5.5. Creating and integrating Spreadsheet based Decision Tables
6.5.6. Managing Business Rules in Decision Tables
6.5.7. Rule Templates
6.6. Logging
7. Running
7.1. KieRuntime
7.1.1. EntryPoint
7.1.2. RuleRuntime
7.1.3. StatefulRuleSession
7.2. Agenda
7.2.1. Conflict Resolution
7.2.2. AgendaGroup
7.2.3. ActivationGroup
7.2.4. RuleFlowGroup
7.3. Event Model
7.4. StatelessKieSession
7.4.1. Sequential Mode
7.5. Propagation modes
7.6. Commands and the CommandExecutor
8. Rule Language Reference
8.1. Overview
8.1.1. A rule file
8.1.2. What makes a rule
8.2. Keywords
8.3. Comments
8.3.1. Single line comment
8.3.2. Multi-line comment
8.4. Error Messages
8.4.1. Message format
8.4.2. Error Messages Description
8.4.3. Other Messages
8.5. Package
8.5.1. import
8.5.2. global
8.6. Function
8.7. Type Declaration
8.7.1. Declaring New Types
8.7.2. Declaring Metadata
8.7.3. Declaring Metadata for Existing Types
8.7.4. Parametrized constructors for declared types
8.7.5. Non Typesafe Classes
8.7.6. Accessing Declared Types from the Application Code
8.7.7. Type Declaration 'extends'
8.7.8. Traits
8.8. Rule
8.8.1. Rule Attributes
8.8.2. Timers and Calendars
8.8.3. Left Hand Side (when) syntax
8.8.4. The Right Hand Side (then)
8.8.5. Conditional named consequences
8.8.6. A Note on Auto-boxing and Primitive Types
8.9. Query
8.10. Domain Specific Languages
8.10.1. When to Use a DSL
8.10.2. DSL Basics
8.10.3. Adding Constraints to Facts
8.10.4. Developing a DSL
8.10.5. DSL and DSLR Reference
9. Complex Event Processing
9.1. Complex Event Processing
9.2. Drools Fusion
9.3. Event Semantics
9.4. Event Processing Modes
9.4.1. Cloud Mode
9.4.2. Stream Mode
9.5. Session Clock
9.5.1. Available Clock Implementations
9.6. Sliding Windows
9.6.1. Sliding Time Windows
9.6.2. Sliding Length Windows
9.6.3. Window Declaration
9.7. Streams Support
9.7.1. Declaring and Using Entry Points
9.8. Memory Management for Events
9.8.1. Explicit expiration offset
9.8.2. Inferred expiration offset
9.9. Temporal Reasoning
9.9.1. Temporal Operators
10. Experimental Features
10.1. Declarative Agenda
10.2. Browsing graphs of objects with OOPath
10.2.1. Reactive and Non-Reactive OOPath

Over the last few decades artificial intelligence (AI) became an unpopular term, with the well-known "AI Winter". There were large boasts from scientists and engineers looking for funding, which never lived up to expectations, resulting in many failed projects. Thinking Machines Corporation and the 5th Generation Computer (5GP) project probably exemplify best the problems at the time.

Thinking Machines Corporation was one of the leading AI firms in 1990; it had sales of nearly $65 million. Here is a quote from its brochure:

Some day we will build a thinking machine. It will be a truly intelligent machine. One that can see and hear and speak. A machine that will be proud of us.

Yet 5 years later it filed for bankruptcy protection under Chapter 11. The site inc.com has a fascinating article titled "The Rise and Fall of Thinking Machines". The article covers the growth of the industry and how a cosy relationship between Thinking Machines and DARPA over-heated the market, to the point of collapse. It explains how and why commerce moved away from AI and towards more practical number-crunching supercomputers.

The 5th Generation Computer project was a USD 400 million project in Japan to build a next generation computer. Valves (or tubes) were the first generation, transistors the second, integrated circuits the third and finally microprocessors the fourth. The fifth was intended to be a machine capable of effective Artificial Intelligence. This project spurred an "arms" race with the UK and USA, that caused much of the AI bubble. The 5GP would provide massive multi-cpu parallel processing hardware along with powerful knowledge representation and reasoning software via Prolog; a type of expert system. By 1992 the project was considered a failure and cancelled. It was the largest and most visible commercial venture for Prolog, and many of the failures are pinned on the problems of trying to run a logic based programming language concurrently on multi CPU hardware with effective results. Some believe that the failure of the 5GP project tainted Prolog and relegated it to academia, see "Whatever Happened to Prolog" by John C. Dvorak.

However while research funding dried up and the term AI became less used, many green shoots were planted and continued more quietly under discipline-specific names: cognitive systems, machine learning, intelligent systems, knowledge representation and reasoning. Offshoots of these then made their way into commercial systems, such as expert systems in the Business Rules Management System (BRMS) market.

Imperative, system-based languages such as C, C++, Java and C#/.Net have dominated the last 20 years, enabled by the practicality of the languages and their ability to run with good performance on commodity hardware. However many believe there is a renaissance underway in the field of AI, spurred by advances in hardware capabilities and AI research. In 2005 Heather Havenstein authored "Spring comes to AI winter", which outlines a case for this resurgence. Russell and Norvig dedicate several pages to what factors allowed the industry to overcome its problems and the research that came about as a result:

 

Recent years have seen a revolution in both the content and the methodology of work in artificial intelligence. It is now more common to build on existing theories than to propose brand-new ones, to base claims on rigorous theorems or hard experimental evidence rather than on intuition, and to show relevance to real-world applications rather than toy examples.

 
 --Artificial Intelligence: A Modern Approach

Computer vision, neural networks, machine learning and knowledge representation and reasoning (KRR) have made great strides towards becoming practical in commercial environments. For example, vision-based systems can now fully map out and navigate their environments with strong recognition skills. As a result we now have self-driving cars about to enter the commercial market. Ontological research, based around description logic, has provided very rich semantics to represent our world. Algorithms such as the tableaux algorithm have made it possible to use those rich semantics effectively in large complex ontologies. Early KRR systems, like Prolog in 5GP, were dogged by the limited semantic capabilities and memory restrictions on the size of those ontologies.

The section A Little History talks about AI as a broader subject and touches on Knowledge Representation and Reasoning (KRR) and also Expert Systems; I'll come back to Expert Systems later.

KRR is about how we represent our knowledge in symbolic form, i.e. how we describe something. Reasoning is about how we go about the act of thinking using this knowledge. System based object-oriented languages, like C++, Java and C#, have data definitions called classes for describing the composition and behaviour of modeled entities. In Java we call exemplars of these described things beans or instances. However those classification systems are limited to ensure computational efficiency. Over the years researchers have developed increasingly sophisticated ways to represent our world. Many of you may already have heard of OWL (Web Ontology Language). There is always a gap between what can be theoretically represented and what can be used computationally in a practically timely manner, which is why OWL has different sub-languages from Lite to Full. It is not believed that any reasoning system can support OWL Full. However, algorithmic advances continue to narrow that gap and improve the expressiveness available to reasoning engines.

There are also many approaches to how these systems go about thinking. You may have heard discussions comparing the merits of forward chaining, which is reactive and data driven, with backward chaining, which is passive and query driven. Many other types of reasoning techniques exist, each of which enlarges the scope of the problems we can tackle declaratively. To list just a few: imperfect reasoning (fuzzy logic, certainty factors), defeasible logic, belief systems, temporal reasoning and correlation. You don't need to understand all these terms to understand and use Drools. They are just there to give an idea of the range of scope of research topics, which is actually far more extensive, and continues to grow as researchers push new boundaries.

KRR is often referred to as the core of Artificial Intelligence. Even when using biological approaches like neural networks, which model the brain and are more about pattern recognition than thinking, they still build on KRR theory. My first endeavours with Drools were engineering oriented, as I had no formal training or understanding of KRR. Learning KRR has allowed me to get a much wider theoretical background, allowing me to better understand both what I've done and where I'm going, as it underpins nearly all of the theoretical side to our Drools R&D. It really is a vast and fascinating subject that will pay dividends for those who take the time to learn it. I know it did and still does for me. Brachman and Levesque have written a seminal piece of work, called "Knowledge Representation and Reasoning", that is a must read for anyone wanting to build strong foundations. I would also recommend the Russell and Norvig book "Artificial Intelligence, a modern approach", which also covers KRR.

We've now covered a brief history of AI and learnt that the core of AI is formed around KRR. We've shown that KRR is a vast and fascinating subject which forms the bulk of the theory driving Drools R&D.

The rule engine is the computer program that delivers KRR functionality to the developer. At a high level it has three components:

  • Ontology

  • Rules

  • Data

As previously mentioned the ontology is the representation model we use for our "things". It could use records or Java classes or full-blown OWL based ontologies. The rules perform the reasoning, i.e., they facilitate "thinking". The distinction between rules and ontologies blurs a little with OWL based ontologies, whose richness is rule based.

The term "rules engine" is quite ambiguous in that it can be any system that uses rules, in any form, that can be applied to data to produce outcomes. This includes simple systems like form validation and dynamic expression engines. The book "How to Build a Business Rules Engine" (2004) by Malcolm Chisholm exemplifies this ambiguity. The book is actually about how to build and alter a database schema to hold validation rules. The book then shows how to generate Visual Basic code from those validation rules to validate data entry. While perfectly valid, this is very different to what we are talking about.

Drools started life as a specific type of rule engine called a Production Rule System (PRS) and was based around the Rete algorithm (usually pronounced as two syllables, e.g., REH-te or RAY-tay). The Rete algorithm, developed by Charles Forgy in 1974, forms the brain of a Production Rule System and is able to scale to a large number of rules and facts. A Production Rule is a two-part structure: the engine matches facts and data against Production Rules - also called Productions or just Rules - to infer conclusions which result in actions.

when
    <conditions>
then
    <actions>;

The process of matching the new or existing facts against Production Rules is called pattern matching, which is performed by the inference engine. Actions execute in response to changes in data, like a database trigger; we say this is a data driven approach to reasoning. The actions themselves can change data, which in turn could match against other rules causing them to fire; this is referred to as forward chaining.

Drools 5.x implements and extends the Rete algorithm. This extended Rete algorithm is named ReteOO, signifying that Drools has an enhanced and optimized implementation of the Rete algorithm for object oriented systems. Other Rete based engines also have marketing terms for their proprietary enhancements to Rete, like RetePlus and Rete III. The most common enhancements are covered in "Production Matching for Large Learning Systems" (1995), Robert B. Doorenbos' thesis, which presents Rete/UL. Drools 6.x introduces a new lazy algorithm named PHREAK, which is covered in more detail in the PHREAK algorithm section.

The Rules are stored in the Production Memory and the facts that the Inference Engine matches against are kept in the Working Memory. Facts are asserted into the Working Memory where they may then be modified or retracted. A system with a large number of rules and facts may result in many rules being true for the same fact assertion; these rules are said to be in conflict. The Agenda manages the execution order of these conflicting rules using a Conflict Resolution strategy.


You may have read discussions comparing the merits of forward chaining (reactive and data driven) or backward chaining (passive query). Here is a quick explanation of these two main types of reasoning.

Forward chaining is "data-driven" and thus reactionary, with facts being asserted into working memory, which results in one or more rules being concurrently true and scheduled for execution by the Agenda. In short, we start with a fact, it propagates through the rules, and we end in a conclusion.


Backward chaining is "goal-driven", meaning that we start with a conclusion which the engine tries to satisfy. If it can't, then it searches for conclusions that it can satisfy. These are known as subgoals, that will help satisfy some unknown part of the current goal. It continues this process until either the initial conclusion is proven or there are no more subgoals. Prolog is an example of a Backward Chaining engine. Drools can also do backward chaining, which we refer to as derivation queries.


Historically you would have to make a choice between systems like OPS5 (forward) or Prolog (backward). Nowadays many modern systems provide both types of reasoning capabilities. There are also many other types of reasoning techniques, each of which enlarges the scope of the problems we can tackle declaratively. To list just a few: imperfect reasoning (fuzzy logic, certainty factors), defeasible logic, belief systems, temporal reasoning and correlation. Modern systems are merging these capabilities, and others not listed, to create hybrid reasoning systems (HRS).

While Drools started out as a PRS, 5.x introduced Prolog style backward chaining reasoning as well as some functional programming styles. For this reason we now prefer the term Hybrid Reasoning System when describing Drools.

Drools currently provides crisp reasoning, but imperfect reasoning is almost ready. Initially this will be imperfect reasoning with fuzzy logic; later we'll add support for other types of uncertainty. Work is also under way to bring OWL based ontological reasoning, which will integrate with our traits system. We also continue to improve our functional programming capabilities.

The Rete algorithm was invented by Dr. Charles Forgy and documented in his PhD thesis in 1978-79. A simplified version of the paper was published in 1982 (http://citeseer.ist.psu.edu/context/505087/0). The latin word "rete" means "net" or "network". The Rete algorithm can be broken into 2 parts: rule compilation and runtime execution.

The compilation algorithm describes how the Rules in the Production Memory are processed to generate an efficient discrimination network. In non-technical terms, a discrimination network is used to filter data as it propagates through the network. The nodes at the top of the network would have many matches, and as we go down the network, there would be fewer matches. At the very bottom of the network are the terminal nodes. In Dr. Forgy's 1982 paper, he described 4 basic nodes: root, 1-input, 2-input and terminal.


The root node is where all objects enter the network. From there, it immediately goes to the ObjectTypeNode. The purpose of the ObjectTypeNode is to make sure the engine doesn't do more work than it needs to. For example, say we have 2 objects: Account and Order. If the rule engine tried to evaluate every single node against every object, it would waste a lot of cycles. To make things efficient, the engine should only pass the object to the nodes that match the object type. The easiest way to do this is to create an ObjectTypeNode and have all 1-input and 2-input nodes descend from it. This way, if an application asserts a new Account, it won't propagate to the nodes for the Order object. In Drools when an object is asserted it retrieves a list of valid ObjectTypeNodes via a lookup in a HashMap from the object's Class; if this list doesn't exist it scans all the ObjectTypeNodes finding valid matches which it caches in the list. This enables Drools to match against any Class type that matches with an instanceof check.


ObjectTypeNodes can propagate to AlphaNodes, LeftInputAdapterNodes and BetaNodes. AlphaNodes are used to evaluate literal conditions. Although the 1982 paper only covers equality conditions, many RETE implementations support other operations. For example, Account.name == "Mr Trout" is a literal condition. When a rule has multiple literal conditions for a single object type, they are linked together. This means that if an application asserts an Account object, it must first satisfy the first literal condition before it can proceed to the next AlphaNode. In Dr. Forgy's paper, he refers to these as IntraElement conditions. The following diagram shows the AlphaNode combinations for Cheese( name == "cheddar", strength == "strong" ):


Drools extends Rete by optimizing the propagation from ObjectTypeNode to AlphaNode using hashing. Each time an AlphaNode is added to an ObjectTypeNode it adds the literal value as a key to the HashMap with the AlphaNode as the value. When a new instance enters the ObjectType node, rather than propagating to each AlphaNode, it can instead retrieve the correct AlphaNode from the HashMap, thereby avoiding unnecessary literal checks.

There are two two-input nodes, JoinNode and NotNode, and both are types of BetaNodes. BetaNodes are used to compare 2 objects, and their fields, to each other. The objects may be the same or different types. By convention we refer to the two inputs as left and right. The left input for a BetaNode is generally a list of objects; in Drools this is a Tuple. The right input is a single object. Two NotNodes can be used to implement 'exists' checks. BetaNodes also have memory. The left input is called the Beta Memory and remembers all incoming tuples. The right input is called the Alpha Memory and remembers all incoming objects. Drools extends Rete by performing indexing on the BetaNodes. For instance, if we know that a BetaNode is performing a check on a String field, as each object enters we can do a hash lookup on that String value. This means when facts enter from the opposite side, instead of iterating over all the facts to find valid joins, we do a lookup returning potentially valid candidates. At any point a valid join is found the Tuple is joined with the Object; which is referred to as a partial match; and then propagated to the next node.


To enable the first Object, in the above case Cheese, to enter the network we use a LeftInputAdapterNode - this takes an Object as an input and propagates a single Object Tuple.

Terminal nodes are used to indicate a single rule having matched all its conditions; at this point we say the rule has a full match. A rule with an 'or' conditional disjunctive connective results in subrule generation for each possible logical branch; thus one rule can have multiple terminal nodes.

Drools also performs node sharing. Many rules repeat the same patterns, and node sharing allows us to collapse those patterns so that they don't have to be re-evaluated for every single instance. The following two rules share the first pattern, but not the last:

rule
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese == $cheddar )
then
    System.out.println( $person.getName() + " likes cheddar" );
end
rule
when
    Cheese( $cheddar : name == "cheddar" )
    $person : Person( favouriteCheese != $cheddar )
then
    System.out.println( $person.getName() + " does not like cheddar" );
end

As you can see below, the compiled Rete network shows that the alpha node is shared, but the beta nodes are not. Each beta node has its own TerminalNode. Had the second pattern been the same it would have also been shared.


The ReteOO was developed throughout the 3, 4 and 5 series releases. It takes the RETE algorithm and applies well known enhancements, all of which are covered by existing academic literature:

Node sharing

Alpha indexing

Beta indexing

Tree based graphs

Modify-in-place

Property reactive

Sub-networks

Backward Chaining

Lazy Truth Maintenance

Heap based agenda

Dynamic Rules

Drools 6 introduces a new algorithm that attempts to address some of the core issues of RETE. The algorithm is not a rewrite from scratch and incorporates all of the existing code from ReteOO, and all its enhancements. While PHREAK is an evolution of the RETE algorithm, it is no longer classified as a RETE implementation, in the same way that once an animal evolves beyond a certain point and key characteristics are changed, it becomes classified as a new species. There are two key RETE characteristics that strongly identify any derivative strains, regardless of optimizations: it is an eager, data oriented algorithm, where all work is done during the insert, update or delete actions, eagerly producing all partial matches for all rules. PHREAK in contrast is characterised as a lazy, goal oriented algorithm, where partial matching is aggressively delayed.

This eagerness of RETE can lead to a lot of churn in large systems, and much wasted work, where wasted work is classified as matching effort that does not result in a rule firing.

PHREAK was heavily inspired by a number of algorithms, including (but not limited to) LEAPS, RETE/UL and Collection-Oriented Match. PHREAK has all the enhancements listed in the ReteOO section. In addition it adds the following set of enhancements, which are explained in more detail in the following paragraphs:

  • Three layers of contextual memory: node, segment and rule memories

  • Rule, segment and node based linking

  • Lazy (delayed) rule evaluation

  • Isolated rule evaluation

  • Set oriented propagations

  • Stack based evaluations, with pause and resume

When the PHREAK engine is started all rules are said to be unlinked, no rule evaluation can happen while rules are unlinked. The insert, update and deletes actions are queued before entering the beta network. A simple heuristic, based on the rule most likely to result in firings, is used to select the next rule for evaluation; this delays the evaluation and firing of the other rules. Only once a rule has all right inputs populated will the rule be considered linked in, although no work is yet done. Instead a goal is created, that represents the rule, and placed into a priority queue; which is ordered by salience. Each queue itself is associated with an AgendaGroup. Only the active AgendaGroup will inspect its queue, popping the goal for the rule with the highest salience and submitting it for evaluation. So the work done shifts from the insert, update, delete phase to the fireAllRules phase. Only the rule for which the goal was created is evaluated, other potential rule evaluations from those facts are delayed. While individual rules are evaluated, node sharing is still achieved through the process of segmentation, which is explained later.

Each successful join attempt in RETE produces a tuple (or token, or partial match) that will be propagated to the child nodes. For this reason it is characterised as a tuple oriented algorithm. For each child node that it reaches it will attempt to join with the other side of the node, again each successful join attempt will be propagated straight away. This creates a descent recursion effect. Thrashing the network of nodes as it ripples up and down, left and right from the point of entry into the beta network to all the reachable leaf nodes.

PHREAK propagation is set oriented (or collection-oriented), instead of tuple oriented. For the rule being evaluated it will visit the first node and process all queued insert, update and delete actions. The results are added to a set and the set is propagated to the child node. In the child node all queued insert, update and delete actions are processed, adding the results to the same set. Once finished, that set is propagated to the next child node, and so on until the terminal node is reached. This creates a single pass, pipeline type effect, that is isolated to the current rule being evaluated. This creates a batch process effect which can provide performance advantages for certain rule constructs, such as sub-networks with accumulates. In the future it will lend itself to exploiting multi-core machines in a number of ways.

The Linking and Unlinking uses a layered bit mask system, based on a network segmentation. When the rule network is built, segments are created for nodes that are shared by the same set of rules. A rule itself is made up from a path of segments, although if there is no sharing that will be a single segment. A bit-mask offset is assigned to each node in the segment. Also another bit mask (the layering) is assigned to each segment in the rule's path. When there is at least one input (data propagation) the node's bit is set to on. When each node has its bit set to on the segment's bit is also set to on. Conversely if any node's bit is set to off, the segment is then also set to off. If each segment in the rule's path is set to on, the rule is said to be linked in and a goal is created to schedule the rule for evaluation. The same bit-mask technique is also used to track dirty nodes, segments and rules; this allows a rule that is already linked in to be scheduled for evaluation if it's considered dirty since it was last evaluated.

This ensures that no rule will ever evaluate partial matches if it's impossible for it to result in rule instances because one of the joins has no data. This is possible in RETE, which will merrily churn away producing partial match attempts for all nodes, even if the last join is empty.

While the incremental rule evaluation always starts from the root node, the dirty bit masks are used to allow nodes and segments that are not dirty to be skipped.

Using the existence of at least one item of data per node is a fairly basic heuristic. Future work would attempt to delay the linking even further, using techniques such as arc consistency to determine whether or not matching will result in rule instance firings.

Whereas RETE has just a single unit of memory, the node memory, PHREAK has 3 levels of memory. This allows for a much more contextual understanding during evaluation of a rule.


Example 1 shows a single rule, with three patterns; A, B and C. It forms a single segment, with bits 1, 2 and 4 for the nodes. The single segment has a bit offset of 1.


Example 2 demonstrates what happens when another rule is added that shares the pattern A. A is placed in its own segment, resulting in two segments per rule. Those two segments form a path, for their respective rules. The first segment is shared by both paths. When A is linked the segment becomes linked; it then iterates each path the segment is shared by, setting bit 1 to on. If B and C are later turned on, the second segment for path R1 is linked in; this causes bit 2 to be turned on for R1. With bit 1 and bit 2 set to on for R1, the rule is now linked and a goal created to schedule the rule for later evaluation and firing.

When a rule is evaluated it is the segments that allow the results of matching to be shared. Each segment has a staging memory to queue all insert, update and delete actions for that segment. If R1 was to be evaluated it would process A and result in a set of tuples. The algorithm detects that there is a segmentation split and will create peered tuples for each insert, update and delete in the set and add them to R2's staging memory. Those tuples will be merged with any existing staged tuples and wait for R2 to eventually be evaluated.


Example 3 adds a third rule and demonstrates what happens when A and B are shared. Only the bits for the segments are shown this time, demonstrating that R4 has 3 segments, R3 has 3 segments and R1 has 2 segments. A and B are shared by R1, R3 and R4, while D is shared by R3 and R4.


Sub-networks are formed when a Not, Exists or Accumulate node contains more than one element. In Example 4 "B not( C )" forms the sub-network; note that "not(C)" alone is a single element and does not require a sub-network, being merged inside of the Not node.

The sub network gets its own segment. R1 still has a path of two segments. The sub network forms another "inner" path. When the sub network is linked in, it will link in the outer segment.


Example 5 shows that the sub-network nodes can be shared by a rule that does not have a sub-network. This results in the sub-network segment being split into two.


Not nodes with constraints and accumulate nodes have special behaviour and can never unlink a segment, and are always considered to have their bits on.

All rule evaluations are incremental, and will not waste work recomputing matches that have already been produced.

The evaluation algorithm is stack based, instead of using method recursion. Evaluation can be paused and resumed at any time, via the use of a StackEntry to represent the current node being evaluated.

When a rule evaluation reaches a sub-network a StackEntry is created for the outer path segment and the sub-network segment. The sub-network segment is evaluated first, when the set reaches the end of the sub-network path it is merged into a staging list for the outer node it feeds into. The previous StackEntry is then resumed where it can process the results of the sub network. This has the added benefit that all work is processed in a batch, before propagating to the child node; which is much more efficient for accumulate nodes.

The same stack system can be used for efficient backward chaining. When a rule evaluation reaches a query node it again pauses the current evaluation, by placing it on the stack. The query is then evaluated which produces a result set, which is saved in a memory location for the resumed StackEntry to pick up and propagate to the child node. If the query itself called other queries the process would repeat, with the current query being paused and a new evaluation setup for the current query node.

One final point on performance: a single rule in general will not evaluate any faster with PHREAK than it does with RETE. For a given rule and the same data set, when using a root context object to enable and disable matching, both attempt the same number of matches, produce the same number of rule instances, and take roughly the same time - except for the use case with subnetworks and accumulates.

PHREAK can however be considered more forgiving than RETE for poorly written rule bases, with a more graceful degradation of performance as the number of rules and the complexity increase.

RETE will also churn away producing partial matches for rules that do not have data in all of their joins, whereas PHREAK will avoid this.

So it's not that PHREAK is faster than RETE, it just won't slow down as much as your system grows :)

AgendaGroups did not help RETE performance, as all rules were evaluated at all times, regardless of the group; the same is true for salience. This is why root context objects are often used, to limit matching attempts. PHREAK only evaluates rules for the active AgendaGroup, and within that group it will attempt to avoid evaluating rules (via salience) that do not result in rule instance firings.

With PHREAK, AgendaGroups and salience now become useful performance tools. Root context objects are no longer needed and are potentially counterproductive to performance, as they force the flushing and recreation of matches for rules.

So where do we get started? There are so many use cases and so much functionality in a rule engine such as Drools that it becomes beguiling. Have no fear my intrepid adventurer, the complexity is layered and you can ease yourself in with simple use cases.

Stateless session, not utilising inference, forms the simplest use case. A stateless session can be called like a function, passing it some data and then receiving some results back. Common use cases for stateless sessions include validation, calculation, and routing and filtering.

So let's start with a very simple example using a driving license application.

public class Applicant {
    private String name;
    private int age;
    private boolean valid;
    // getter and setter methods here
}

Now that we have our data model we can write our first rule. We assume that the application uses rules to reject invalid applications. As this is a simple validation use case we will add a single rule to disqualify any applicant younger than 18.

package com.company.license

rule "Is of valid age"
when
    $a : Applicant( age < 18 )
then
    $a.setValid( false );
end

To make the engine aware of data, so it can be processed against the rules, we have to insert the data, much like with a database. When the Applicant instance is inserted into the engine it is evaluated against the constraints of the rules, in this case just two constraints for one rule. We say two because the type Applicant is the first object type constraint, and age < 18 is the second field constraint. An object type constraint plus its zero or more field constraints is referred to as a pattern. When an inserted instance satisfies both the object type constraint and all the field constraints, it is said to be matched. The $a is a binding variable which permits us to reference the matched object in the consequence. There its properties can be updated. The dollar character ('$') is optional, but it helps to differentiate variable names from field names. The process of matching patterns against the inserted data is, not surprisingly, often referred to as pattern matching.

To use this rule it is necessary to put it in a Drools file, just a plain text file with the .drl extension, short for "Drools Rule Language". Let's call this file licenseApplication.drl, and store it in a Kie Project. A Kie Project has the structure of a normal Maven project with an additional file (kmodule.xml) defining the KieBases and KieSessions that can be created. This file has to be placed in the resources/META-INF folder of the Maven project, while all the other Drools artifacts, such as the licenseApplication.drl containing the former rule, must be stored in the resources folder or in any other subfolder under it.

Since meaningful defaults have been provided for all configuration aspects, the simplest kmodule.xml file can contain just an empty kmodule tag like the following:

<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule"/>

At this point it is possible to create a KieContainer that reads the files to be built from the classpath.

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();

The above code snippet compiles all the DRL files found on the classpath and puts the result of this compilation, a KieModule, in the KieContainer. If there are no errors, we are now ready to create our session from the KieContainer and execute against some data:

StatelessKieSession ksession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant( "Mr John Smith", 16 );
assertTrue( applicant.isValid() );
ksession.execute( applicant );
assertFalse( applicant.isValid() );

The preceding code executes the data against the rules. Since the applicant is under the age of 18, the application is marked as invalid.

So far we've only used a single instance, but what if we want to use more than one? We can execute against any object implementing Iterable, such as a collection. Let's add another class called Application, which has the date of the application, and we'll also move the boolean valid field to the Application class.

public class Applicant {
    private String name;
    private int age;
    // getter and setter methods here
}

public class Application {
    private Date dateApplied;
    private boolean valid;
    // getter and setter methods here
}

We will also add another rule to validate that the application was made within a period of time.

package com.company.license

rule "Is of valid age"
when
    Applicant( age < 18 )
    $a : Application()     
then
    $a.setValid( false );
end

rule "Application was made this year"
when
    $a : Application( dateApplied > "01-jan-2009" )     
then
    $a.setValid( false );
end

Unfortunately a Java array does not implement the Iterable interface, so we have to use the JDK converter method Arrays.asList(...). The code shown below executes against an iterable list, where all collection elements are inserted before any matched rules are fired.

StatelessKieSession ksession = kContainer.newStatelessKieSession();
Applicant applicant = new Applicant( "Mr John Smith", 16 );
Application application = new Application();
assertTrue( application.isValid() );
ksession.execute( Arrays.asList( new Object[] { application, applicant } ) );
assertFalse( application.isValid() );

The two execute methods execute(Object object) and execute(Iterable objects) are actually convenience methods for the interface BatchExecutor's method execute(Command command).

The KieCommands commands factory, obtainable from the KieServices like all other factories of the KIE API, is used to create commands, so that the following is equivalent to execute(Iterable it):

ksession.execute( kieServices.getCommands().newInsertElements( Arrays.asList( new Object[] { application, applicant } ) ) );

Batch Executor and Command Factory are particularly useful when working with multiple Commands and with output identifiers for obtaining results.

KieCommands kieCommands = kieServices.getCommands();
List<Command> cmds = new ArrayList<Command>();
cmds.add( kieCommands.newInsert( new Person( "Mr John Smith" ), "mrSmith", true, null ) );
cmds.add( kieCommands.newInsert( new Person( "Mr John Doe" ), "mrDoe", true, null ) );
BatchExecutionResults results = ksession.execute( kieCommands.newBatchExecution( cmds ) );
assertEquals( new Person( "Mr John Smith" ), results.getValue( "mrSmith" ) );

CommandFactory supports many other Commands that can be used in the BatchExecutor like StartProcess, Query, and SetGlobal.

Stateful Sessions are long lived and allow iterative changes over time. Common use cases for Stateful Sessions include monitoring, diagnostics, logistics, and compliance.

In contrast to a Stateless Session, the dispose() method must be called afterwards to ensure there are no memory leaks, as the KieBase contains references to Stateful Knowledge Sessions when they are created. Since the Stateful Knowledge Session is the most commonly used session type, it is just named KieSession in the KIE API. KieSession also supports the BatchExecutor interface, like StatelessKieSession; the only difference is that the FireAllRules command is not automatically called at the end for a Stateful Session.
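
As a sketch of that lifecycle, reusing the Applicant class and the kContainer from the earlier stateless example, a typical pattern guards dispose() with a finally block:

KieSession ksession = kContainer.newKieSession();
try {
    ksession.insert( new Applicant( "Mr John Smith", 16 ) );
    ksession.fireAllRules();
} finally {
    ksession.dispose();  // releases the KieBase's reference to this session
}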

We illustrate the monitoring use case with an example for raising a fire alarm. Using just four classes, we represent rooms in a house, each of which has one sprinkler. If a fire starts in a room, we represent that with a single Fire instance.

public class Room {
    private String name;
    // getter and setter methods here
}
public class Sprinkler {
    private Room room;
    private boolean on;
    // getter and setter methods here
}
public class Fire {
    private Room room;
    // getter and setter methods here
}
public class Alarm {
}

In the previous section on Stateless Sessions the concepts of inserting and matching against data were introduced. That example assumed that only a single instance of each object type was ever inserted and thus only used literal constraints. However, a house has many rooms, so rules must express relationships between objects, such as a sprinkler being in a certain room. This is best done by using a binding variable as a constraint in a pattern. This "join" process results in what is called cross products, which are covered in the next section.

When a fire occurs an instance of the Fire class is created, for that room, and inserted into the session. The rule uses a binding on the room field of the Fire object to constrain matching to the sprinkler for that room, which is currently off. When this rule fires and the consequence is executed the sprinkler is turned on.

rule "When there is a fire turn on the sprinkler"
when
    Fire($room : room)
    $sprinkler : Sprinkler( room == $room, on == false )
then
    modify( $sprinkler ) { setOn( true ) };
    System.out.println( "Turn on the sprinkler for room " + $room.getName() );
end

Whereas the Stateless Session uses standard Java syntax to modify a field, in the above rule we use the modify statement, which acts as a sort of "with" statement. It may contain a series of comma separated Java expressions, i.e., calls to setters of the object selected by the modify statement's control expression. This modifies the data, and makes the engine aware of those changes so it can reason over them once more. This process is called inference, and it's essential for the working of a Stateful Session. Stateless Sessions typically do not use inference, so the engine does not need to be aware of changes to data. Inference can also be turned off explicitly by using the sequential mode.

So far we have rules that tell us when matching data exists, but what about when it does not exist? How do we determine that a fire has been extinguished, i.e., that there isn't a Fire object any more? Previously the constraints have been sentences according to Propositional Logic, where the engine is constraining against individual instances. Drools also has support for First Order Logic that allows you to look at sets of data. A pattern under the keyword not matches when something does not exist. The rule given below turns the sprinkler off as soon as the fire in that room has disappeared.

rule "When the fire is gone turn off the sprinkler"
when
    $room : Room( )
    $sprinkler : Sprinkler( room == $room, on == true )
    not Fire( room == $room )
then
    modify( $sprinkler ) { setOn( false ) };
    System.out.println( "Turn off the sprinkler for room " + $room.getName() );
end

While there is one sprinkler per room, there is just a single alarm for the building. An Alarm object is created when a fire occurs, but only one Alarm is needed for the entire building, no matter how many fires occur. Previously not was introduced to match the absence of a fact; now we use its complement exists which matches for one or more instances of some category.

rule "Raise the alarm when we have one or more fires"
when
    exists Fire()
then
    insert( new Alarm() );
    System.out.println( "Raise the alarm" );
end

Likewise, when there are no fires we want to remove the alarm, so the not keyword can be used again.

rule "Cancel the alarm when all the fires have gone"
when
    not Fire()
    $alarm : Alarm()
then
    delete( $alarm );
    System.out.println( "Cancel the alarm" );
end

Finally there is a general health status message that is printed when the application first starts and after the alarm is removed and all sprinklers have been turned off.

rule "Status output when things are ok"
when
    not Alarm()
    not Sprinkler( on == true ) 
then
    System.out.println( "Everything is ok" );
end

As we did in the Stateless Session example, the above rules should be placed in a single DRL file and saved into the resources folder of your Maven project or any of its subfolders. As before, we can then obtain a KieSession from the KieContainer. The only difference is that this time we create a Stateful Session, whereas before we created a Stateless Session.

KieServices kieServices = KieServices.Factory.get();
KieContainer kContainer = kieServices.getKieClasspathContainer();
KieSession ksession = kContainer.newKieSession();

With the session created it is now possible to iteratively work with it over time. Four Room objects are created and inserted, as well as one Sprinkler object for each room. At this point the engine has done all of its matching, but no rules have fired yet. Calling ksession.fireAllRules() allows the matched rules to fire, but without a fire that will just produce the health message.

String[] names = new String[]{"kitchen", "bedroom", "office", "livingroom"};
Map<String,Room> name2room = new HashMap<String,Room>();
for( String name: names ){
    Room room = new Room( name );
    name2room.put( name, room );
    ksession.insert( room );
    Sprinkler sprinkler = new Sprinkler( room );
    ksession.insert( sprinkler );
}

ksession.fireAllRules();
> Everything is ok

We now create two fires and insert them; this time a reference is kept for the returned FactHandle. A Fact Handle is an internal engine reference to the inserted instance and allows instances to be retracted or modified at a later point in time. With the fires now in the engine, once fireAllRules() is called, the alarm is raised and the respective sprinklers are turned on.

Fire kitchenFire = new Fire( name2room.get( "kitchen" ) );
Fire officeFire = new Fire( name2room.get( "office" ) );

FactHandle kitchenFireHandle = ksession.insert( kitchenFire );
FactHandle officeFireHandle = ksession.insert( officeFire );

ksession.fireAllRules();
> Raise the alarm
> Turn on the sprinkler for room kitchen
> Turn on the sprinkler for room office

After a while the fires will be put out and the Fire instances are retracted. This results in the sprinklers being turned off, the alarm being cancelled, and eventually the health message is printed again.

ksession.delete( kitchenFireHandle );
ksession.delete( officeFireHandle );

ksession.fireAllRules();
> Cancel the alarm
> Turn off the sprinkler for room office
> Turn off the sprinkler for room kitchen
> Everything is ok

Everyone still with me? That wasn't so hard and already I'm hoping you can start to see the value and power of a declarative rule system.

Earlier the term "cross product" was mentioned, which is the result of a join. Imagine for a moment that the data from the fire alarm example were used in combination with the following rule where there are no field constraints:

rule "Show Sprinklers" when
    $room : Room()
    $sprinkler : Sprinkler()
then
    System.out.println( "room:" + $room.getName() +
                        " sprinkler:" + $sprinkler.getRoom().getName() );
end

In SQL terms this would be like doing select * from Room, Sprinkler and every row in the Room table would be joined with every row in the Sprinkler table resulting in the following output:

room:office sprinkler:office
room:office sprinkler:kitchen
room:office sprinkler:livingroom
room:office sprinkler:bedroom
room:kitchen sprinkler:office
room:kitchen sprinkler:kitchen
room:kitchen sprinkler:livingroom
room:kitchen sprinkler:bedroom
room:livingroom sprinkler:office
room:livingroom sprinkler:kitchen
room:livingroom sprinkler:livingroom
room:livingroom sprinkler:bedroom
room:bedroom sprinkler:office
room:bedroom sprinkler:kitchen
room:bedroom sprinkler:livingroom
room:bedroom sprinkler:bedroom

These cross products can obviously become huge, and they may very well contain spurious data. The size of cross products is often the source of performance problems for new rule authors. From this it can be seen that it's always desirable to constrain the cross products, which is done with the variable constraint.

rule "Show Sprinkler for Room"
when
    $room : Room()
    $sprinkler : Sprinkler( room == $room )
then
    System.out.println( "room:" + $room.getName() +
                        " sprinkler:" + $sprinkler.getRoom().getName() );
end

This results in just four rows of data, with the correct Sprinkler for each Room. In SQL (actually HQL) the corresponding query would be select * from Room, Sprinkler where Room == Sprinkler.room.

room:office sprinkler:office
room:kitchen sprinkler:kitchen
room:livingroom sprinkler:livingroom
room:bedroom sprinkler:bedroom

So far the data and the matching process has been simple and small. To mix things up a bit a new example will be explored that handles cashflow calculations over date periods. The state of the engine will be illustratively shown at key stages to help get a better understanding of what is actually going on under the hood. Three classes will be used, as shown below. This will help us grow our understanding of pattern matching and joins further. We will then use this to illustrate different techniques for execution control.

public class CashFlow {
    private Date   date;
    private double amount;
    private int    type;
    private long   accountNo;
    // getter and setter methods here
}

public class Account {
    private long   accountNo;
    private double balance;
    // getter and setter methods here
}

public class AccountPeriod {
    private Date start;
    private Date end;
    // getter and setter methods here
}

By now you already know how to create KieBases and how to instantiate facts to populate the KieSession, so tables will be used to show the state of the inserted data, as it makes things clearer for illustration purposes. The tables below show that a single fact was inserted for the Account. Also inserted are a series of debits and credits as CashFlow objects for that account, extending over two quarters.
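
The tables are shown as figures in the original manual. As a rough sketch, the inserted facts could be built up as follows; the CashFlow constructor, the date() helper and the amounts are illustrative assumptions, with the amounts chosen to be consistent with the balances discussed below:

Account acc = new Account();
acc.setAccountNo( 1 );
acc.setBalance( 0 );
ksession.insert( acc );

// debits and credits for that account, extending over two quarters;
// CashFlow( date, amount, type, accountNo ) is an assumed constructor and
// CREDIT/DEBIT are the type constants referenced by the rules below
ksession.insert( new CashFlow( date( "2009-01-12" ), 100, CREDIT, 1 ) );  // Q1
ksession.insert( new CashFlow( date( "2009-02-09" ), 200, DEBIT,  1 ) );  // Q1
ksession.insert( new CashFlow( date( "2009-03-08" ),  75, CREDIT, 1 ) );  // Q1
ksession.insert( new CashFlow( date( "2009-05-18" ),  25, CREDIT, 1 ) );  // Q2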


Two rules can be used to determine the debit and credit for that quarter and update the Account balance. The two rules below constrain the cashflows for an account to a given time period. Notice the "&&", which uses shortcut syntax to avoid repeating the field name twice.

rule "increase balance for credits"
when
  ap : AccountPeriod()
  acc : Account( $accountNo : accountNo )
  CashFlow( type == CREDIT,
            accountNo == $accountNo,
            date >= ap.start && <= ap.end,
            $amount : amount )
then
  acc.balance  += $amount;
end
rule "decrease balance for debits" 
when 
  ap : AccountPeriod() 
  acc : Account( $accountNo : accountNo ) 
  CashFlow( type == DEBIT, 
            accountNo == $accountNo,
            date >= ap.start && <= ap.end, 
            $amount : amount ) 
then 
  acc.balance -= $amount; 
end

Earlier we showed how rules would equate to SQL, which can often help people with an SQL background to understand rules. The two rules above can be represented with two views and a trigger for each view, as below:

select * from Account acc,
              Cashflow cf,
              AccountPeriod ap      
where acc.accountNo == cf.accountNo and 
      cf.type == CREDIT and
      cf.date >= ap.start and 
      cf.date <= ap.end
select * from Account acc, 
              Cashflow cf,
              AccountPeriod ap 
where acc.accountNo == cf.accountNo and 
      cf.type == DEBIT and
      cf.date >= ap.start and 
      cf.date <= ap.end
trigger : acc.balance += cf.amount
trigger : acc.balance -= cf.amount

If the AccountPeriod is set to the first quarter we constrain the rule "increase balance for credits" to fire on two rows of data and "decrease balance for debits" to act on one row of data.


The two cashflow tables above represent the matched data for the two rules. The data is matched during the insertion stage and, as you discovered in the previous chapter, does not fire straight away, but only after fireAllRules() is called. Meanwhile, the rule plus its matched data is placed on the Agenda and referred to as a Rule Match or Rule Instance. The Agenda is a table of Rule Matches that are able to fire and have their consequences executed, as soon as fireAllRules() is called. Rule Matches on the Agenda are referred to as a conflict set and their execution is determined by a conflict resolution strategy. Notice that the order of execution so far is considered arbitrary.


After all of the above activations are fired, the account has a balance of -25.


If the AccountPeriod is updated to the second quarter, we have just a single matched row of data, and thus just a single Rule Match on the Agenda.

The firing of that Activation results in a balance of 25.



Drools also features the ruleflow-group attribute, which allows workflow diagrams to declaratively specify when rules are allowed to fire. The screenshot below is taken from Eclipse using the Drools plugin. It has two ruleflow-group nodes which ensure that the calculation rules are executed before the reporting rules.

The use of the ruleflow-group attribute in a rule is shown below.

rule "increase balance for credits"
  ruleflow-group "calculation"
when
  ap : AccountPeriod()
  acc : Account( $accountNo : accountNo )
  CashFlow( type == CREDIT,
            accountNo == $accountNo,
            date >= ap.start && <= ap.end,
            $amount : amount )
then
  acc.balance  += $amount;
end
rule "Print balance for AccountPeriod"
  ruleflow-group "report"
when
  ap : AccountPeriod()
  acc : Account()
then
  System.out.println( acc.accountNo +
                      " : " + acc.balance );    
end

Inference has a bad name these days, as something not relevant to business use cases and just too complicated to be useful. It is true that contrived and complicated examples occur with inference, but that should not detract from the fact that simple and useful ones exist too. More than this, correct use of inference can create more agile and less error prone business rules, which are easier to maintain.

So what is inference? Something is inferred when we gain knowledge of something from using previous knowledge. For example, given a Person fact with an age field and a rule that provides age policy control, we can infer whether a Person is an adult or a child and act on this.

rule "Infer Adult"
when
  $p : Person( age >= 18 )
then
  insert( new IsAdult( $p ) );
end

Due to the preceding rule, every Person who is 18 or over will have an instance of IsAdult inserted for them. This fact is special in that it is known as a relation. We can use this inferred relation in any rule:

$p : Person()
IsAdult( person == $p )
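
The IsAdult relation fact referenced by these rules is not shown in the original; a minimal sketch could be:

public class IsAdult {
    private Person person;

    public IsAdult( Person person ) {
        this.person = person;
    }

    public Person getPerson() {
        return person;
    }
}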

Now that we know what inference is and have a basic example, how does this facilitate good rule design and maintenance?

Let's take a government department that is responsible for issuing ID cards when children become adults, henceforth referred to as the ID department. They might have a decision table that includes logic like this, which says to issue the card when an adult living in London is 18 or over:

However the ID department does not set the policy on who an adult is. That's done at a central government level. If the central government were to change that age to 21, this would initiate a change management process. Someone would have to liaise with the ID department and make sure their systems are updated, in time for the law going live.

This change management process and communication between departments is not ideal for an agile environment, and change becomes costly and error prone. Also the ID department is managing more information than it needs to be aware of with its "monolithic" approach to rules management, which is "leaking" information better placed elsewhere. By this I mean that it doesn't care what explicit "age >= 18" information determines whether someone is an adult, only that they are an adult.

In contrast to this, let's pursue an approach where we split (de-couple) the authoring responsibilities, so that both the central government and the ID department maintain their own rules.

It's the central government's job to determine who is an adult. If they change the law they just update their central repository with the new rules, which others use:

The IsAdult fact, as discussed previously, is inferred from the policy rules. It encapsulates the seemingly arbitrary piece of logic "age >= 18" and provides semantic abstractions for its meaning. Now if anyone uses the above rules, they no longer need to be aware of explicit information that determines whether someone is an adult or not. They can just use the inferred fact:

While the example is very minimal and trivial it illustrates some important points. We started with a monolithic and leaky approach to our knowledge engineering. We created a single decision table that had all possible information in it and that leaks information from central government that the ID department did not care about and did not want to manage.

We first de-coupled the knowledge process so each department was responsible for only what it needed to know. We then encapsulated this leaky knowledge using an inferred fact IsAdult. The use of the term IsAdult also gave a semantic abstraction to the previously arbitrary logic "age >= 18".

So a general rule of thumb when doing your knowledge engineering: avoid monolithic and leaky rule bases; instead, de-couple knowledge responsibilities, encapsulate knowledge, and provide semantic abstractions for those encapsulations.

After regular inserts you have to retract facts explicitly. With logical assertions, the fact that was asserted will be automatically retracted when the conditions that asserted it in the first place are no longer true. Actually, it's even cleverer than that, because it will be retracted only if there isn't any single condition that supports the logical assertion.

Normal insertions are said to be stated, i.e., just like the intuitive meaning of "stating a fact" implies. Using a HashMap and a counter, we track how many times a particular equality is stated; this means we count how many different instances are equal.

When we logically insert an object during a RHS execution we are said to justify it, and it is considered to be justified by the firing rule. For each logical insertion there can only be one equal object, and each subsequent equal logical insertion increases the justification counter for this logical assertion. A justification is removed by the LHS of the creating rule becoming untrue, and the counter is decreased accordingly. As soon as we have no more justifications the logical object is automatically retracted.

If we try to logically insert an object when there is an equal stated object, this will fail and return null. If we state an object that has an existing equal object that is justified we override the fact; how this override works depends on the configuration setting WM_BEHAVIOR_PRESERVE. When the property is set to discard we use the existing handle and replace the existing instance with the new object, which is the default behavior; otherwise we override it to stated but create a new FactHandle.

This can be confusing on a first read, so hopefully the flow charts below help. When it says that it returns a new FactHandle, this also indicates the Object was propagated through the network.



The previous example issued ID cards to those over 18; in this example we now issue bus passes, either a child or adult pass.

rule "Issue Child Bus Pass" when
    $p : Person( age < 16 )
then
    insert(new ChildBusPass( $p ) );
end
 
rule "Issue Adult Bus Pass" when
    $p : Person( age >= 16 )
then
    insert(new AdultBusPass( $p ) );
end

As before the above example is considered monolithic, leaky and providing poor separation of concerns.

As before we can provide a more robust application with a separation of concerns using inference. Notice this time we don't just insert the inferred object, we use "insertLogical":

rule "Infer Child" when
    $p : Person( age < 16 )
then
    insertLogical( new IsChild( $p ) );
end
rule "Infer Adult" when
    $p : Person( age >= 16 )
then
    insertLogical( new IsAdult( $p ) );
end

A "insertLogical" is part of the Drools Truth Maintenance System (TMS). When a fact is logically inserted, this fact is dependant on the truth of the "when" clause. It means that when the rule becomes false the fact is automatically retracted. This works particularly well as the two rules are mutually exclusive. So in the above rules if the person is under 16 it inserts an IsChild fact, once the person is 16 or over the IsChild fact is automatically retracted and the IsAdult fact inserted.

Returning to the code to issue bus passes, these two rules can logically insert the ChildBusPass and AdultBusPass facts, as the TMS supports chaining of logical insertions for a cascading set of retracts.

rule "Issue Child Bus Pass" when
    $p : Person( )
         IsChild( person == $p )
then
    insertLogical(new ChildBusPass( $p ) );
end
 
rule "Issue Adult Bus Pass" when
    $p : Person( age >= 16 )
         IsAdult( person == $p )
then
    insertLogical(new AdultBusPass( $p ) );
end

Now when a person changes from being 15 to 16, not only is the IsChild fact automatically retracted, so is the person's ChildBusPass fact. For bonus points we can combine this with the 'not' conditional element to handle notifications, in this situation, a request for the returning of the pass. So when the TMS automatically retracts the ChildBusPass object, this rule triggers and sends a request to the person:

rule "Return ChildBusPass Request "when
    $p : Person( )
         not( ChildBusPass( person == $p ) )
then
    requestChildBusPass( $p );
end

Decision tables are a "precise yet compact" (ref. Wikipedia) way of representing conditional logic, and are well suited to business level rules.

Drools supports managing rules in a spreadsheet format. Supported formats are Excel (XLS), and CSV, which means that a variety of spreadsheet programs (such as Microsoft Excel, OpenOffice.org Calc amongst others) can be utilized. It is expected that web based decision table editors will be included in a near future release.

Decision tables are an old concept (in software terms) but have proven useful over the years. Very briefly speaking, in Drools decision tables are a way to generate rules driven from the data entered into a spreadsheet. All the usual features of a spreadsheet for data capture and manipulation can be taken advantage of.

Here are some examples of real world decision tables (slightly edited to protect the innocent).




In the above examples, the technical aspects of the decision table have been collapsed away (using a standard spreadsheet feature).

The rules start from row 17, with each row resulting in a rule. The conditions are in columns C, D, E, etc., the actions being off-screen. The values in the cells are quite simple, and their meaning is indicated by the headers in Row 16. Column B is just a description. It is customary to use color to make it obvious what the different areas of the table mean.

Note

Note that although the decision tables look like they process top down, this is not necessarily the case. Ideally, rules are authored without regard for the order of rows, simply because this makes maintenance easier, as rows will not need to be shifted around all the time.

As each row is a rule, the same principles apply. As the rule engine processes the facts, any rules that match may fire. (Some people are confused by this. It is possible to clear the agenda when a rule fires and simulate a very simple decision table where only the first match effects an action.) Also note that you can have multiple tables on one spreadsheet. This way, rules can be grouped where they share common templates, yet at the end of the day they are all combined into one rule package. Decision tables are essentially a tool to generate DRL rules automatically.


The key point to keep in mind is that in a decision table each row is a rule, and each column in that row is either a condition or action for that rule.


The spreadsheet looks for the RuleTable keyword to indicate the start of a rule table (both the starting row and column). Other keywords are also used to define other package level attributes (covered later). It is important to keep the keywords in one column. By convention the second column ("B") is used for this, but it can be any column (convention is to leave a margin on the left for notes). In the following diagram, C is actually the column where it starts. Everything to the left of this is ignored.

If we expand the hidden sections, it starts to make more sense how it works; note the keywords in column C.


Now the hidden magic which makes it work can be seen. The RuleSet keyword indicates the name to be used in the rule package that will encompass all the rules. This name is optional, using a default, but it must have the RuleSet keyword in the cell immediately to the right.

The other keywords visible in Column C are Import and Sequential which will be covered later. The RuleTable keyword is important as it indicates that a chunk of rules will follow, based on some rule templates. After the RuleTable keyword there is a name, used to prefix the names of the generated rules. The sheet name and row numbers are appended to guarantee unique rule names.

The column of RuleTable indicates the column in which the rules start; columns to the left are ignored.

Note

In general the keywords make up name-value pairs.

Referring to row 14 (the row immediately after RuleTable), the keywords CONDITION and ACTION indicate that the data in the columns below are for either the LHS or the RHS parts of a rule. There are other attributes on the rule which can also be optionally set this way.

Row 15 contains declarations of ObjectTypes. The content in this row is optional, but if this option is not in use, the row must be left blank; however this option is usually found to be quite useful. When using this row, the values in the cells below (row 16) become constraints on that object type. In the above case, it generates Person(age=="42") and Cheese(type=="stilton"), where 42 and "stilton" come from row 18. In the above example, the "==" is implicit; if just a field name is given the translator assumes that it is to generate an exact match.

Note

An ObjectType declaration can span columns (via merged cells), meaning that all columns below the merged range are to be combined into one set of constraints within a single pattern matching a single fact at a time, as opposed to non-merged cells containing the same ObjectType, but resulting in different patterns, potentially matching different or identical facts.

Row 16 contains the rule templates themselves. They can use the "$param" placeholder to indicate where data from the cells below should be interpolated. (For multiple insertions, use "$1", "$2", etc., indicating parameters from a comma-separated list in a cell below.) Row 17 is ignored; it may contain textual descriptions of the column's purpose.

Rows 18 and 19 show data, which will be combined (interpolated) with the templates in row 16, to generate rules. If a cell contains no data, then its template is ignored. (This would mean that some condition or action does not apply for that rule row.) Rule rows are read until there is a blank row. Multiple RuleTables can exist in a sheet. Row 20 contains another keyword, and a value. The row positions of keywords like this do not matter (most people put them at the top), but their column must be the same one where the RuleTable or RuleSet keywords appear. In our case column C has been chosen to be significant, but any other column could be used instead.

In the above example, rules would be rendered like the following (as it uses the "ObjectType" row):

//row 18
rule "Cheese_fans_18"
when
    Person(age=="42")
    Cheese(type=="stilton")
then
    list.add("Old man stilton");
end

Note

The constraints age=="42" and type=="stilton" are interpreted as single constraints, to be added to the respective ObjectType in the cell above. If the cells above were spanned, then there could be multiple constraints on one "column".

Warning

Very large decision tables may have very large memory requirements.

Entries in a Rule Set area may define DRL constructs (except rules), and specify rule attributes. While entries for constructs may be used repeatedly, each rule attribute may be given at most once, and it applies to all rules unless it is overruled by the same attribute being defined within the Rule Table area.

Entries must be given in a vertically stacked sequence of cell pairs. The first one contains a keyword and the one to its right the value, as shown in the table below. This sequence of cell pairs may be interrupted by blank rows or even a Rule Table, as long as the column marked by RuleSet is upheld as the one containing the keyword.


Warning

In some locales, MS Office, LibreOffice and OpenOffice will encode a double quote " differently, which will cause a compilation error. The difference is often hard to see; for example, “A” will fail, but "A" will work.

For defining rule attributes that apply to all rules in the generated DRL file you can use any of the entries in the following table. Notice, however, that the proper keyword must be used. Also, each of these attributes may be used only once.

Important

Rule attributes specified in a Rule Set area will affect all rule assets in the same package (not only in the spreadsheet). Unless you are sure that the spreadsheet is the only rule asset in the package, the recommendation is to specify rule attributes not in a Rule Set area but in Rule Table columns, for each rule instead.


All Rule Tables begin with a cell containing "RuleTable", optionally followed by a string within the same cell. The string is used as the initial part of the name for all rules derived from this Rule Table, with the row number appended for distinction. (This automatic naming can be overridden by using a NAME column.) All other cells defining rules of this Rule Table are below and to the right of this cell.

The next row defines the column type, with each column resulting in a part of the condition or the consequence, or providing some rule attribute, the rule name or a comment. The table below shows which column headers are available; additional columns may be used according to the table showing rule attribute entries given in the preceding section. Note that each attribute column may be used at most once. For a column header, either use the keyword or any other word beginning with the letter given in the "Initial" column of these tables.


Given a column headed CONDITION, the cells in successive lines result in a conditional element.

  • Text in the first cell below CONDITION develops into a pattern for the rule condition, with the snippet in the next line becoming a constraint. If the cell is merged with one or more neighbours, a single pattern with multiple constraints is formed: all constraints are combined into a parenthesized list and appended to the text in this cell. The cell may be left blank, which means that the code snippet in the next row must result in a valid conditional element on its own.

    To include a pattern without constraints, you can write the pattern in front of the text for another pattern.

    The pattern may be written with or without an empty pair of parentheses. A "from" clause may be appended to the pattern.

    If the pattern ends with "eval", code snippets are supposed to produce boolean expressions for inclusion into a pair of parentheses after "eval".

  • Text in the second cell below CONDITION is processed in two steps.

    1. The code snippet in this cell is modified by interpolating values from cells farther down in the column. If you want to create a constraint consisting of a comparison using "==" with the value from the cells below, the field selector alone is sufficient. Any other comparison operator must be specified as the last item within the snippet, and the value from the cells below is appended. For all other constraint forms, you must mark the position for including the contents of a cell with the symbol $param. Multiple insertions are possible by using the symbols $1, $2, etc., and a comma-separated list of values in the cells below.

      A text according to the pattern forall(delimiter){snippet} is expanded by repeating the snippet once for each of the values of the comma-separated list of values in each of the cells below, inserting the value in place of the symbol $ and by joining these expansions by the given delimiter. Note that the forall construct may be surrounded by other text.

    2. If the cell in the preceding row is not empty, the completed code snippet is added to the conditional element from that cell. A pair of parentheses is provided automatically, as well as a separating comma if multiple constraints are added to a pattern in a merged cell.

      If the cell above is empty, the interpolated result is used as is.

  • Text in the third cell below CONDITION is for documentation only. It should be used to indicate the column's purpose to a human reader.

  • From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the conditional element or constraint for this rule.

Given a column headed ACTION, the cells in successive lines result in an action statement.

  • Text in the first cell below ACTION is optional. If present, it is interpreted as an object reference.

  • Text in the second cell below ACTION is processed in two steps.

    1. The code snippet in this cell is modified by interpolating values from cells farther down in the column. For a singular insertion, mark the position for including the contents of a cell with the symbol $param. Multiple insertions are possible by using the symbols $1, $2, etc., and a comma-separated list of values in the cells below.

      A method call without interpolation can be achieved by a text without any marker symbols. In this case, use any non-blank entry in a row below to include the statement.

      The forall construct is available here, too.

    2. If the first cell is not empty, its text, followed by a period, the text in the second cell and a terminating semicolon are stringed together, resulting in a method call which is added as an action statement for the consequence.

      If the cell above is empty, the interpolated result is used as is.

  • Text in the third cell below ACTION is for documentation only. It should be used to indicate the column's purpose to a human reader.

  • From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the action statement for this rule.

Note

Using $1 instead of $param works in most cases, but it will fail if the replacement text contains a comma: then, only the part preceding the first comma is inserted. Use this "abbreviation" judiciously.

Given a column headed METADATA, the cells in successive lines result in a metadata annotation for the generated rules.

  • Text in the first cell below METADATA is ignored.

  • Text in the second cell below METADATA is subject to interpolation, as described above, using values from the cells in the rule rows. The metadata marker character @ is prefixed automatically, and thus it should not be included in the text for this cell.

  • Text in the third cell below METADATA is for documentation only. It should be used to indicate the column's purpose to a human reader.

  • From the fourth row on, non-blank entries provide data for interpolation as described above. A blank cell results in the omission of the metadata annotation for this rule.

The various interpolations are illustrated in the following example.


The next example demonstrates the joint effect of a cell defining the pattern type and the code snippet below it.

This spreadsheet section shows how the Person type declaration spans 2 columns, and thus both constraints will appear as Person(age == ..., type == ...). Since only the field names are present in the snippet, they imply an equality test.

In the following example the marker symbol $param is used.

The result of this column is the pattern Person(age == "42"). You may have noticed that the marker and the operator "==" are redundant.

The next example illustrates that a trailing insertion marker can be omitted.

Here, appending the value from the cell is implied, resulting in Person(age < "42").

You can provide the definition of a binding variable, as in the example below.

Here, the result is c: Cheese(type == "stilton"). Note that the quotes are provided automatically. Actually, anything can be placed in the object type row. Apart from the definition of a binding variable, it could also be an additional pattern that is to be inserted literally.

A simple construction of an action statement with the insertion of a single value is shown below.

The cell below the ACTION header is left blank. Using this style, anything can be placed in the consequence, not just a single method call. (The same technique is applicable within a CONDITION column as well.)

Below is a comprehensive example, showing the use of various column headers. It is not an error to have no value below a column header (as in the NO-LOOP column): here, the attribute will not be applied in any of the rules.


And, finally, here is an example of Import, Variables and Functions.


Multiple package names within the same cell must be separated by a comma. Also, the pairs of type and variable names must be comma-separated. Functions, however, must be written as they appear in a DRL file. This should appear in the same column as the "RuleSet" keyword; it could be above, between or below all the rule rows.

Related to decision tables (but not necessarily requiring a spreadsheet) are "Rule Templates" (in the drools-templates module). These use any tabular data source as a source of rule data, populating a template to generate many rules. This allows not only for more flexible spreadsheets, but also for rules driven from existing databases, for instance (at the cost of developing the template up front to generate the rules).

With Rule Templates the data is separated from the rule and there are no restrictions on which part of the rule is data-driven. So whilst you can do everything you could do in decision tables, you can also store your data in a database (or any other format), conditionally generate rules based on the values in the data, use data for any part of your rules (such as the condition operator, class name or property name), and run different templates over the same data.

As an example, a more classic decision table is shown, but without any hidden rows for the rule meta data (so the spreadsheet only contains the raw data to generate the rules).


See the ExampleCheese.xls in the examples download for the above spreadsheet.

If this was a regular decision table there would be hidden rows before row 1 and between rows 1 and 2 containing rule metadata. With rule templates the data is completely separate from the rules. This has two handy consequences - you can apply multiple rule templates to the same data and your data is not tied to your rules at all. So what does the template look like?


1  template header
2  age
3  type
4  log
5
6  package org.drools.examples.templates;
7
8  global java.util.List list;
9
10 template "cheesefans"
11
12 rule "Cheese fans_@{row.rowNumber}"
13 when
14    Person(age == @{age})
15    Cheese(type == "@{type}")
16 then
17    list.add("@{log}");
18 end
19
20 end template

Annotations to the preceding program listing:

  • Line 1: All rule templates start with template header.

  • Lines 2-4: Following the header is the list of columns in the order they appear in the data. In this case we are calling the first column age, the second type and the third log.

  • Line 5: An empty line signifies the end of the column definitions.

  • Lines 6-9: Standard rule header text. This is standard rule DRL and will appear at the top of the generated DRL. Put the package statement and any imports and global and function definitions into this section.

  • Line 10: The keyword template signals the start of a rule template. There can be more than one template in a template file, but each template should have a unique name.

  • Lines 11-18: The rule template - see below for details.

  • Line 20: The keywords end template signify the end of the template.

The rule templates rely on MVEL to do substitution using the syntax @{token_name}. There is currently one built-in expression, @{row.rowNumber} which gives a unique number for each row of data and enables you to generate unique rule names. For each row of data a rule will be generated with the values in the data substituted for the tokens in the template.

A rule template has to be included in a file with the .drt extension and associated with the corresponding decision table when defining the kbase in the kmodule.xml file, as in the following example:

<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://drools.org/xsd/kmodule">
  <kbase name="TemplatesKB" packages="org.drools.examples.templates">
    <ruleTemplate dtable="org/drools/examples/templates/ExampleCheese.xls"
                  template="org/drools/examples/templates/Cheese.drt"
                  row="2" col="2"/>
    <ksession name="TemplatesKS"/>
  </kbase>
</kmodule>

With the example data above the following rule file would be generated:


package org.drools.examples.templates;

global java.util.List list;

rule "Cheese fans_1"
when
  Person(age == 42)
  Cheese(type == "stilton")
then
  list.add("Old man stilton");
end

rule "Cheese fans_2"
when
  Person(age == 21)
  Cheese(type == "cheddar")
then
  list.add("Young man cheddar");
end

At this point the KieSession named "TemplatesKS" and containing the rules generated from the template can be simply created from the KieContainer and used as any other KieSession.


KieSession ksession = kContainer.newKieSession( "TemplatesKS" );

//now create some test data
ksession.insert( new Cheese( "stilton", 42 ) );
ksession.insert( new Person( "michael", "stilton", 42 ) );
final List<String> list = new ArrayList<String>();
ksession.setGlobal( "list", list );

ksession.fireAllRules();

One way to illuminate the black box that is a rule engine, is to play with the logging level.

Everything is logged to SLF4J, which is a simple logging facade that can delegate any log to Logback, Apache Commons Logging, Log4j or java.util.logging. Add a dependency to the logging adaptor for your logging framework of choice. If you're not using any logging framework yet, you can use Logback by adding this Maven dependency:

    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.x</version>
    </dependency>

Note

If you're developing for an ultra light environment, use slf4j-nop or slf4j-simple instead.

Configure the logging level on the package org.drools. For example:

In Logback, configure it in your logback.xml file:

<configuration>

    <logger name="org.drools" level="debug"/>

    ...

</configuration>

In Log4J, configure it in your log4j.xml file:

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <category name="org.drools">
      <priority value="debug" />
    </category>

    ...

</log4j:configuration>

This section extends the KIE Running section, which should be read first, with specifics for the Drools runtime.

The EntryPoint provides the methods around inserting, updating and deleting facts. The term "entry point" is related to the fact that we have multiple partitions in a Working Memory and you can choose which one you are inserting into. The use of multiple entry points is more common in event processing use cases, but they can be used by pure rule applications as well.
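
As a sketch, a named entry point is obtained from the session and then used like the default one; the entry point name and the Transaction fact class here are assumptions, and the name must match the entry point declared and used by the rules:

// obtain a named entry point, i.e. a specific working memory partition
EntryPoint atmStream = ksession.getEntryPoint( "ATM Stream" );
// Transaction is a hypothetical fact class for this sketch
FactHandle handle = atmStream.insert( new Transaction() );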

The KieRuntime interface provides the main interaction with the engine. It is available in rule consequences and process actions. In this manual the focus is on the methods and interfaces related to rules, and the methods pertaining to processes will be ignored for now. But you'll notice that the KieRuntime inherits methods from both the WorkingMemory and the ProcessRuntime, thereby providing a unified API to work with processes and rules. When working with rules, three interfaces form the KieRuntime: EntryPoint, WorkingMemory and the KieRuntime itself.


In order for a fact to be evaluated against the rules in a KieBase, it has to be inserted into the session. This is done by calling the method insert(yourObject). When a fact is inserted into the session, some of its properties might be immediately evaluated (eager evaluation) and some might be deferred for later evaluation (lazy evaluation). The exact behaviour depends on the rules engine algorithm being used.

When an Object is inserted it returns a FactHandle. This FactHandle is the token used to represent your inserted object within the WorkingMemory. It is also used for interactions with the WorkingMemory when you wish to delete or modify an object.

Cheese stilton = new Cheese("stilton");
FactHandle stiltonHandle = ksession.insert( stilton );      

As mentioned in the KieBase section, a Working Memory may operate in two assertion modes: either equality or identity. Identity is the default.

Identity means that the Working Memory uses an IdentityHashMap to store all asserted objects. New instance assertions always result in the return of a new FactHandle, but if an instance is asserted again then it returns the original fact handle, i.e., it ignores repeated insertions for the same object.

Equality means that the Working Memory uses a HashMap to store all asserted objects. An object instance assertion will only return a new FactHandle if the inserted object is not equal (according to its equals()/hashCode() methods) to an already existing fact.
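
Switching a KieBase to equality mode can be sketched as follows, using the EqualityBehaviorOption from the KIE API:

KieServices kieServices = KieServices.Factory.get();
KieBaseConfiguration kbConf = kieServices.newKieBaseConfiguration();
kbConf.setOption( EqualityBehaviorOption.EQUALITY );  // identity is the default
KieBase kbase = kContainer.newKieBase( kbConf );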

The RuleRuntime provides access to the Agenda, permits query executions, and lets you access named Entry Points.


The Agenda is a Rete feature. During actions on the WorkingMemory, rules may become fully matched and eligible for execution; a single Working Memory Action can result in multiple eligible rules. When a rule is fully matched a Match is created, referencing the rule and the matched facts, and placed onto the Agenda. The Agenda controls the execution order of these Matches using a Conflict Resolution strategy.

The engine cycles repeatedly through two phases: Working Memory Actions, where most of the work takes place (in rule consequences or in the main Java application), and Agenda Evaluation, which attempts to select a rule to fire.


The process repeats until the agenda is clear, in which case control returns to the calling application. When Working Memory Actions are taking place, no rules are being fired.



Agenda groups are a way to partition rules (matches, actually) on the agenda. At any one time, only one group has "focus", which means that only matches for rules in that group will take effect. You can also have rules with "auto focus", which means that the focus is taken for its agenda group when that rule's conditions are true.

Agenda groups are known as "modules" in CLIPS terminology. While it is best to design rules that do not need control flow, this is not always possible. Agenda groups provide a handy way to create a "flow" between grouped rules. You can switch the group which has focus either from within the rule engine, or via the API. If your rules have a clear need for multiple "phases" or "sequences" of processing, consider using agenda-groups for this purpose.

Each time setFocus() is called it pushes that Agenda Group onto a stack. When the focus group is empty it is popped from the stack and the focus group that is now on top evaluates. An Agenda Group can appear in multiple locations on the stack. The default Agenda Group is "MAIN", with all rules which do not specify an Agenda Group being in this group. It is also always the first group on the stack, given focus initially, by default.

ksession.getAgenda().getAgendaGroup( "Group A" ).setFocus();
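
Because each call pushes onto the stack, the last group given focus is the first to be evaluated; a sketch, with illustrative group names:

Agenda agenda = ksession.getAgenda();
agenda.getAgendaGroup( "calculation" ).setFocus();  // pushed first
agenda.getAgendaGroup( "report" ).setFocus();       // pushed last, so has focus
ksession.fireAllRules();  // "report" rules fire first, then "calculation", then "MAIN"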

The clear() method can be used to cancel all the activations generated by the rules belonging to a given Agenda Group before one has had a chance to fire.

ksession.getAgenda().getAgendaGroup( "Group A" ).clear();

Note that, due to the lazy nature of the phreak algorithm used by Drools, the activations are by default materialized only at firing time, but it is possible to anticipate the evaluation and then the activation of a given rule at the moment when a fact is inserted into the session by annotating it with @Propagation(IMMEDIATE) as explained in the Propagation modes section.

The StatelessKieSession wraps the KieSession, instead of extending it. Its main focus is on decision service type scenarios. It avoids the need to call dispose(). Stateless sessions do not support iterative insertions or calling fireAllRules() from Java code; instead, execute() is a single-shot method that will internally instantiate a KieSession, add all the user data, execute user commands, call fireAllRules(), and then call dispose(). While the main way to work with this class is via the BatchExecution (a subinterface of Command) as supported by the CommandExecutor interface, two convenience methods are provided for when simple object insertion is all that's required. The CommandExecutor and BatchExecution are discussed in detail in their own section.


Our simple example shows a stateless session executing a given collection of Java objects using the convenience API. It will iterate the collection, inserting each element in turn.
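
A minimal sketch (kbase and the collection of facts are assumed to be already available):

StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.execute( collection );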


If this was done as a single Command it would be as follows:
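
A sketch of the equivalent single-command form, using the insert-elements command from the CommandFactory:

ksession.execute( CommandFactory.newInsertElements( collection ) );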


If you wanted to insert the collection itself, and the collection's individual elements, then CommandFactory.newInsert(collection) would do the job.

Methods of the CommandFactory create the supported commands, all of which can be marshalled using XStream and the BatchExecutionHelper. BatchExecutionHelper provides details on the XML format as well as how to use Drools Pipeline to automate the marshalling of BatchExecution and ExecutionResults.

StatelessKieSession supports globals, scoped in a number of ways. I'll cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways.
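
For example, a global may be set directly on the session, where it is shared across all execution calls, or scoped to a single execution via a command; a minimal sketch (the identifier myGlobalList is illustrative):

// session-level global: shared across all execution calls
StatelessKieSession ksession = kbase.newStatelessKieSession();
ksession.setGlobal( "myGlobalList", new ArrayList() );

// command-scoped global: visible only within this execute(...) call;
// the boolean flag also requests it as an "out" result
ksession.execute( CommandFactory.newSetGlobal( "myGlobalList", new ArrayList(), true ) );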

The CommandExecutor interface also offers the ability to export data via "out" parameters. Inserted facts, globals and query results can all be returned.
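
A sketch of a batch returning several "out" results (the Person fact and the query name are illustrative):

List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );  // returned under "list1"
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) ); // returned under "person"
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );           // results returned under "Get People"

ExecutionResults results = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
results.getValue( "list1" );      // the ArrayList
results.getValue( "person" );     // the inserted Person fact
results.getValue( "Get People" ); // the QueryResults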


With Rete you have a stateful session where objects can be asserted and modified over time, and where rules can also be added and removed. Now what happens if we assume a stateless session, where after the initial data set no more data can be asserted or modified and rules cannot be added or removed? Certainly it won't be necessary to re-evaluate rules, and the engine will be able to operate in a simplified way.

The LeftInputAdapterNode no longer creates a Tuple, adds the Object, and then propagates the Tuple; instead a Command object is created and added to a list in the Working Memory. This Command object holds a reference to the LeftInputAdapterNode and the propagated object. This stops any left-input propagations at insertion time, so that we know that a right-input propagation will never need to attempt a join with the left-inputs (removing the need for left-input memory). All nodes have their memory turned off, including the left-input Tuple memory but excluding the right-input object memory, which means that the only node remembering an insertion propagation is the right-input object memory. Once all the assertions are finished and all right-input memories populated, we can then iterate the list of LeftInputAdapterNode Command objects, calling each in turn. They will propagate down the network attempting to join with the right-input objects, but they won't be remembered in the left input, as we know there will be no further object assertions and thus no further propagations into the right-input memory.

There is no longer an Agenda with a priority queue to schedule the Tuples; instead, there is simply an array whose length is the number of rules. The sequence number of the RuleTerminalNode indicates the element within the array where to place the Match. Once all Command objects have finished we can iterate our array, checking each element in turn, and firing the Matches if they exist. To improve performance, we remember the first and the last populated cell in the array. The network is constructed with each RuleTerminalNode being given a sequence number, based on a salience number and its order of being added to the network.

Typically the right-input node memories are Hash Maps, for fast object deletion; here, as we know there will be no object deletions, we can use a list when the values of the object are not indexed. For larger numbers of objects indexed Hash Maps provide a performance increase; if we know an object type has only a few instances, indexing is probably not advantageous, and a list can be used.

Sequential mode can only be used with a Stateless Session and is off by default. To turn it on, either call RuleBaseConfiguration.setSequential(true), or set the rulebase configuration property drools.sequential to true. Sequential mode can fall back to a dynamic agenda by calling setSequentialAgenda with SequentialAgenda.DYNAMIC. You may also set the "drools.sequential.agenda" property to "sequential" or "dynamic".
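
A minimal sketch of enabling sequential mode via the rulebase configuration property (kContainer is an already configured KieContainer):

KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kbConf = ks.newKieBaseConfiguration();
kbConf.setProperty( "drools.sequential", "true" );
// kbConf.setProperty( "drools.sequential.agenda", "dynamic" ); // optional fallback to a dynamic agenda
KieBase kbase = kContainer.newKieBase( kbConf );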

The introduction of PHREAK as the default algorithm for the Drools engine made rule evaluation lazy. This new lazy behavior allows a significant performance boost but, in some very specific cases, changes the semantics of a few Drools features.

More precisely, in some circumstances it is necessary to propagate the insertion of a new fact into the session immediately. For instance, Drools allows a query to be executed in pull-only (or passive) mode by prepending a '?' symbol to its invocation, as in the following example:
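
A sketch of such a passive query and a rule invoking it (the fact types are illustrative):

query Q( Integer i )
    String( this == i.toString() )
end

rule R when
    $i : Integer()
    ?Q( $i; )   // the '?' makes the query invocation passive
then
    System.out.println( $i );
end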


In this case, since the query is passive, it shouldn't react to the insertion of a String matching the join condition in the query itself. In other words this sequence of commands

KieSession ksession = ...
ksession.insert(1);
ksession.insert("1");
ksession.fireAllRules();

shouldn't cause rule R to fire, because the String satisfying the query condition was inserted after the Integer, and the passive query shouldn't react to that insertion. Conversely, the rule should fire if the insertion sequence is inverted, because when the Integer is inserted the passive query can already be satisfied by the presence of the existing String.

Unfortunately the lazy nature of PHREAK doesn't allow the engine to make any distinction regarding the insertion sequence of the two facts, so the rule will fire in both cases. In circumstances like this it is necessary to evaluate the rule eagerly as done by the old RETEOO-based engine.

In other cases it is required that the propagation is eager, meaning that it is not immediate, but has to happen before the engine/agenda starts scheduled evaluations. For instance, this is necessary when a rule has the no-loop or the lock-on-active attribute; in fact, when this happens, this propagation mode is automatically enforced by the engine.

To cover these use cases, and in all other situations where an immediate or eager rule evaluation is required, it is possible to declaratively specify so by annotating the rule itself with @Propagation(Propagation.Type), where Propagation.Type is an enumeration with 3 possible values:

  • IMMEDIATE means that the propagation is performed immediately.

  • EAGER means that the propagation is performed lazily but is eagerly evaluated before scheduled evaluations.

  • LAZY means that the propagation is totally lazy; this is the default PHREAK behaviour.

This means that the following drl:
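
A sketch of the same rule annotated for immediate propagation:

query Q( Integer i )
    String( this == i.toString() )
end

rule R @Propagation(IMMEDIATE) when
    $i : Integer()
    ?Q( $i; )
then
    System.out.println( $i );
end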


will make rule R fire if and only if the Integer is inserted after the String, thus behaving in accordance with the semantics of the passive query.

The CommandFactory allows for commands to be executed on those sessions, the only difference being that the Stateless Knowledge Session executes fireAllRules() at the end before disposing the session. The currently supported commands are:

InsertObject will insert a single object, with an optional "out" identifier. InsertElements will iterate an Iterable, inserting each of the elements. What this means is that a Stateless Knowledge Session is no longer limited to just inserting objects; it can now start processes or execute queries, and do this in any order.
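
A sketch of a single insert returning its result under the "stilton_id" identifier:

StatelessKieSession ksession = kbase.newStatelessKieSession();
ExecutionResults bresults = ksession.execute( CommandFactory.newInsert( new Cheese( "stilton" ), "stilton_id" ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton_id" );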


The execute method always returns an ExecutionResults instance, which allows access to any command results if they specify an out identifier such as the "stilton_id" above.


The execute method only allows for a single command. That's where BatchExecution comes in, which represents a composite command, created from a list of commands. Now, execute will iterate over the list and execute each command in turn. This means you can insert some objects, start a process, call fireAllRules and execute a query, all in a single execute(...) call, which is quite powerful.
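
A sketch of such a composite execution (the process and query names are illustrative):

List cmds = new ArrayList();
cmds.add( CommandFactory.newInsert( new Cheese( "stilton", 1 ), "stilton" ) );
cmds.add( CommandFactory.newStartProcess( "process cheeses" ) );
cmds.add( CommandFactory.newQuery( "cheeses", "cheesesQuery" ) );

ExecutionResults bresults = ksession.execute( CommandFactory.newBatchExecution( cmds ) );
Cheese stilton = (Cheese) bresults.getValue( "stilton" );
QueryResults qresults = (QueryResults) bresults.getValue( "cheeses" );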

As mentioned previously, the StatelessKieSession will execute fireAllRules() automatically at the end. However, the keen-eyed reader probably has already noticed the FireAllRules command and wondered how that works with a StatelessKieSession. The FireAllRules command is allowed, and using it will disable the automatic execution at the end; think of using it as a sort of manual override function.

A custom XStream marshaller can be used with the Drools Pipeline to achieve XML scripting, which is perfect for services. Here are two simple XML samples, one for the BatchExecution and one for the ExecutionResults.
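
A sketch of the two formats (org.domain.Cheese is an illustrative user object serialized by XStream):

<batch-execution>
  <insert out-identifier="stilton">
    <org.domain.Cheese>
      <type>stilton</type>
      <price>25</price>
    </org.domain.Cheese>
  </insert>
  <fire-all-rules/>
</batch-execution>

<!-- and the corresponding results payload -->
<execution-results>
  <result identifier="stilton">
    <org.domain.Cheese>
      <type>stilton</type>
      <price>30</price>
    </org.domain.Cheese>
  </result>
</execution-results>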



Spring and Camel, covered in the integrations book, facilitate declarative services.


The CommandExecutor returns an ExecutionResults, and this is handled by the pipeline code snippet as well. The output for the <batch-execution> XML sample above takes the form of the <execution-results> sample already shown.


The BatchExecutionHelper provides a configured XStream instance to support the marshalling of Batch Executions, where the resulting XML can be used as a message format, as shown above. Configured converters only exist for the commands supported via the Command Factory. The user may add other converters for their user objects. This is very useful for scripting stateless or stateful knowledge sessions, especially when services are involved.

There is currently no XML schema to support schema validation. The basic format is outlined here, and the drools-pipeline module has an illustrative unit test, XStreamBatchExecutionTest. The root element is <batch-execution> and it can contain zero or more command elements.


This contains a list of elements that represent commands; the supported commands are limited to those provided by the Command Factory. The most basic of these is the <insert> element, which inserts objects. The contents of the insert element is the user object, as dictated by XStream.


The insert element features an "out-identifier" attribute, indicating that the inserted object should also be returned as part of the result payload.
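
A sketch (org.domain.UserClass and its field are illustrative):

<insert out-identifier="userVar">
  <org.domain.UserClass>
    <name>bob</name>
  </org.domain.UserClass>
</insert>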


It's also possible to insert a collection of objects using the <insert-elements> element. This command does not support an out-identifier. The org.domain.UserClass is just an illustrative user object that XStream would serialize.
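
A sketch inserting two illustrative user objects:

<insert-elements>
  <org.domain.UserClass>
    <name>bob</name>
  </org.domain.UserClass>
  <org.domain.UserClass>
    <name>mark</name>
  </org.domain.UserClass>
</insert-elements>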


While the out attribute is useful in returning specific instances as a result payload, we often wish to run actual queries. Both parameter and parameterless queries are supported. The name attribute is the name of the query to be called, and the out-identifier is the identifier to be used for the query results in the <execution-results> payload.
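
A sketch of a parameterless and a parameterized query invocation (the query names are illustrative):

<query out-identifier="cheeses" name="cheeses"/>
<query out-identifier="cheeses2" name="cheesesWithParams">
  <string>stilton</string>
  <string>cheddar</string>
</query>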


Drools has a "native" rule language. This format is very light in terms of punctuation, and supports natural and domain specific languages via "expanders" that allow the language to morph to your problem domain. This chapter is mostly concerted with this native rule format. The diagrams used to present the syntax are known as "railroad" diagrams, and they are basically flow charts for the language terms. The technically very keen may also refer to DRL.g which is the Antlr3 grammar for the rule language. If you use the Rule Workbench, a lot of the rule structure is done for you with content assistance, for example, type "ru" and press ctrl+space, and it will build the rule structure for you.

Drools 5 introduces standardized error messages. This standardization aims to help users find and resolve problems in an easier and faster way. In this section you will learn how to identify and interpret those error messages, and you will also receive some tips on how to solve the problems associated with them.

A package is a collection of rules and other related constructs, such as imports and globals. The package members are typically related to each other - perhaps HR rules, for instance. A package represents a namespace, which ideally is kept unique for a given grouping of rules. The package name itself is the namespace, and is not related to files or folders in any way.

It is possible to assemble rules from multiple rule sources, and have one top level package configuration that all the rules are kept under (when the rules are assembled). Note that it is not possible to merge into the same package resources declared under different names. A single Rulebase may, however, contain multiple packages built upon it. A common structure is to have all the rules for a package in the same file as the package declaration (so that it is entirely self-contained).

The following railroad diagram shows all the components that may make up a package. Note that a package must have a namespace and be declared using standard Java conventions for package names; i.e., no spaces, unlike rule names which allow spaces. In terms of the order of elements, they can appear in any order in the rule file, with the exception of the package statement, which must be at the top of the file. In all cases, the semicolons are optional.


Notice that any rule attribute (as described in the section Rule Attributes) may also be written at package level, superseding the attribute's default value. The modified default may still be replaced by an attribute setting within a rule.


With global you define global variables. They are used to make application objects available to the rules. Typically, they are used to provide data or services that the rules use, especially application services used in rule consequences, and to return data from the rules, like logs or values added in rule consequences, or for the rules to interact with the application, doing callbacks. Globals are not inserted into the Working Memory, and therefore a global should never be used to establish conditions in rules except when it has a constant immutable value. The engine cannot be notified about value changes of globals and does not track their changes. Incorrect use of globals in constraints may yield surprising results - surprising in a bad way.

If multiple packages declare globals with the same identifier they must be of the same type and all of them will reference the same global value.

In order to use globals you must:

  1. Declare your global variable in your rules file and use it in rules. Example:

    global java.util.List myGlobalList;
    
    rule "Using a global"
    when
        eval( true )
    then
        myGlobalList.add( "Hello World" );
    end
    
  2. Set the global value on your working memory. It is a best practice to set all global values before asserting any fact to the working memory. Example:

    List list = new ArrayList();
    KieSession kieSession = kiebase.newKieSession();
    kieSession.setGlobal( "myGlobalList", list );
    

Note that these are just named instances of objects that you pass in from your application to the working memory. This means you can pass in any object you want: you could pass in a service locator, or perhaps a service itself. With the new from element it is now common to pass a Hibernate session as a global, to allow from to pull data from a named Hibernate query.

One example may be an instance of an email service. In your integration code that calls the rule engine, you obtain your emailService object and set it in the working memory. In the DRL, you declare that you have a global of type EmailService and give it the name "email". Then in your rule consequences, you can use things like email.sendSMS(number, message).

Globals are not designed to share data between rules and they should never be used for that purpose. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory.

Care must be taken when changing data held by globals because the rule engine is not aware of those changes, hence cannot react to them.



Type declarations have two main goals in the rules engine: to allow the declaration of new types, and to allow the declaration of metadata for types.

  • Declaring new types: Drools works out of the box with plain Java objects as facts. Sometimes, however, users may want to define the model directly in the rules engine, without worrying about creating models in a lower level language like Java. At other times, there is a domain model already built, but the user wants or needs to complement this model with additional entities that are used mainly during the reasoning process.

  • Declaring metadata: facts may have meta information associated to them. Examples of meta information include any kind of data that is not represented by the fact attributes and is consistent among all instances of that fact type. This meta information may be queried at runtime by the engine and used in the reasoning process.

To declare a new type, all you need to do is use the keyword declare, followed by the list of fields, and the keyword end. A new fact must have a list of fields, otherwise the engine will look for an existing fact class in the classpath and raise an error if not found.
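
A minimal sketch:

declare Address
    number : int
    streetName : String
    city : String
end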


The previous example declares a new fact type called Address. This fact type will have three attributes: number, streetName and city. Each attribute has a type that can be any valid Java type, including any other class created by the user or even other fact types previously declared.

For instance, we may want to declare another fact type Person:
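
A minimal sketch:

declare Person
    name : String
    dateOfBirth : java.util.Date
    address : Address
end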


As we can see on the previous example, dateOfBirth is of type java.util.Date, from the Java API, while address is of the previously defined fact type Address.

You may avoid having to write the fully qualified name of a class every time you write it by using the import clause, as previously discussed.
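
A sketch using an import to shorten the field type:

import java.util.Date

declare Person
    name : String
    dateOfBirth : Date
    address : Address
end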


When you declare a new fact type, Drools will, at compile time, generate bytecode that implements a Java class representing the fact type. The generated Java class will be a one-to-one Java Bean mapping of the type definition. So, for the previous example, the generated Java class would be:
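
A sketch of the shape of such a generated class (method bodies elided into comments):

public class Person implements Serializable {
    private String name;
    private java.util.Date dateOfBirth;
    private Address address;

    // empty constructor

    // getters and setters for each attribute

    // equals/hashCode

    // toString
}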


Since the generated class is a simple Java class, it can be used transparently in the rules, like any other fact.
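
A sketch of a rule creating and inserting an instance of the declared type:

rule "Using a declared type"
when
    $p : Person( name == "Bob" )
then
    // insert Mark, who is Bob's mate
    Person mark = new Person();
    mark.setName( "Mark" );
    insert( mark );
end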


Metadata may be assigned to several different constructions in Drools: fact types, fact attributes and rules. Drools uses the at sign ('@') to introduce metadata, and it always uses the form:

@metadata_key( metadata_value )

The parenthesized metadata_value is optional.

For instance, if you want to declare a metadata attribute like author, whose value is Bob, you could simply write:
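
A minimal sketch, attaching the metadata to a declared type:

declare Person
    @author( Bob )
    name : String
end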


Drools allows the declaration of any arbitrary metadata attribute; some will have special meaning to the engine, while others are simply available for querying at runtime. Drools allows the declaration of metadata both for fact types and for fact attributes. Any metadata that is declared before the attributes of a fact type is assigned to the fact type, while metadata declared after an attribute is assigned to that particular attribute.
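
A sketch matching the description below (the attribute values are illustrative):

import java.util.Date

declare Person
    @author( Bob )
    @dateOfCreation( 01-Feb-2009 )

    name : String @key @maxLength( 30 )
    dateOfBirth : Date
    address : Address
end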


In the previous example, there are two metadata items declared for the fact type (@author and @dateOfCreation) and two more defined for the name attribute (@key and @maxLength). Please note that the @key metadata has no required value, and so the parentheses and the value were omitted.

Some annotations have predefined semantics that are interpreted by the engine. The following is a list of some of these predefined annotations and their meaning.

As noted before, Drools also supports annotations in type attributes. Here is a list of predefined attribute annotations.

Patterns support positional arguments on type declarations.

Positional arguments are ones where you don't need to specify the field name, as the position maps to a known named field. i.e. Person( name == "mark" ) can be rewritten as Person( "mark"; ). The semicolon ';' is important so that the engine knows that everything before it is a positional argument. Otherwise we might assume it was a boolean expression, which is how it could be interpreted after the semicolon. You can mix positional and named arguments on a pattern by using the semicolon ';' to separate them. Any variables used in a positional that have not yet been bound will be bound to the field that maps to that position.

declare Cheese
    name : String
    shop : String
    price : int
end

The default order is the declared order, but this can be overridden using @position:

declare Cheese
    name : String @position(1)
    shop : String @position(2)
    price : int @position(0)
end

The @Position annotation, in the org.drools.definition.type package, can be used to annotate original pojos on the classpath. Currently only fields on classes can be annotated. Inheritance of classes is supported, but not interfaces or methods yet.

Example patterns, with two constraints and a binding. Remember semicolon ';' is used to differentiate the positional section from the named argument section. Variables, literals, and expressions using just literals are supported in positional arguments; expressions using variables are not.


Cheese( "stilton", "Cheese Shop", p; )
Cheese( "stilton", "Cheese Shop"; p : price )
Cheese( "stilton"; shop == "Cheese Shop", p : price )
Cheese( name == "stilton"; shop == "Cheese Shop", p : price )

@Position is inherited when beans extend each other; while not recommended, two fields may have the same @position value, and not all consecutive values need be declared. If a @position is repeated, the conflict is solved using inheritance (fields in the superclass have the precedence) and the declaration order. If a @position value is missing, the first field without an explicit @position (if any) is selected to fill the gap. As always, conflicts are resolved by inheritance and declaration order.

declare Cheese
    name : String 
    shop : String @position(2)
    price : int @position(0)
end

declare SeasonedCheese extends Cheese
    year : Date @position(0)
    origin : String @position(6)
    country : String    
end

In the example, the field order would be: price (@position 0 in the superclass), year (@position 0 in the subclass), name (first field with no @position), shop (@position 2), country (second field without @position), origin.

Declared types are usually used inside rule files, while Java models are used when sharing the model between rules and applications. Sometimes, however, the application may need to access and handle facts from the declared types, especially when the application is wrapping the rules engine and providing higher level, domain specific user interfaces for rules management.

In such cases, the generated classes can be handled as usual with the Java Reflection API, but, as we know, that usually requires a lot of work for small results. Therefore, Drools provides a simplified API for the most common fact handling the application may want to do.

The first important thing to realize is that a declared fact will belong to the package where it was declared. So, for instance, in the example below, Person will belong to the org.drools.examples package, and so the fully qualified name of the generated class will be org.drools.examples.Person.
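
A minimal sketch:

package org.drools.examples

declare Person
    name : String
    dateOfBirth : java.util.Date
    address : Address
end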


Declared types, as discussed previously, are generated at knowledge base compilation time, i.e., the application will only have access to them at application run time. Therefore, these classes are not available for direct reference from the application.

Drools then provides an interface through which users can handle declared types from the application code: org.drools.definition.type.FactType. Through this interface, the user can instantiate, read and write fields in the declared fact types.
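
A sketch of this API (assuming a KieBase containing the declared type and an existing session):

// get the declared FactType from the knowledge base
FactType personType = kbase.getFactType( "org.drools.examples", "Person" );

// create a new instance of the declared type
Object bob = personType.newInstance();

// set attribute values
personType.set( bob, "name", "Bob" );
personType.set( bob, "dateOfBirth", new java.util.Date() );

// insert the fact into a session and fire the rules
ksession.insert( bob );
ksession.fireAllRules();

// read an attribute back
String name = (String) personType.get( bob, "name" );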


The API also includes other helpful methods, like setting all the attributes at once, reading values from a Map, or reading all attributes at once, into a Map.

Although the API is similar to Java reflection (yet much simpler to use), it does not use reflection underneath, relying on much more performant accessors implemented with generated bytecode.

WARNING : this feature is still experimental and subject to changes

The same fact may have multiple dynamic types which do not fit naturally in a class hierarchy. Traits allow you to model this very common scenario. A trait is an interface that can be applied (and, if needed, later removed) to an individual object at runtime. To create a trait rather than a traditional bean, you have to declare it explicitly, as in the following example:
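
A sketch of a trait declaration (the fields are illustrative):

declare trait GoldenCustomer
    // trait fields will map to getters/setters on the proxy
    code       : String
    balance    : long
    discount   : int
    maxExpense : long
end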


At runtime, this declaration results in an interface, which can be used to write patterns, but can not be instantiated directly. In order to apply a trait to an object, we provide the new don keyword, which can be used as simply as this:
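
A minimal sketch, inside a rule consequence:

when
    $c : Customer()
then
    GoldenCustomer gc = don( $c, GoldenCustomer.class );
end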


When a core object dons a trait, a proxy class is created on the fly (one such class will be generated lazily for each core/trait class combination). The proxy instance, which wraps the core object and implements the trait interface, is inserted automatically and will possibly activate other rules. An immediate advantage of declaring and using interfaces, getting the implementation proxy for free from the engine, is that multiple inheritance hierarchies can be exploited when writing rules. The core classes, however, need not implement any of those interfaces statically, also facilitating the use of legacy classes as cores. In fact, any object can don a trait, provided that it is declared as @Traitable. Notice that this annotation used to be optional, but is now mandatory.


The only connection between core classes and trait interfaces is at the proxy level: a trait is not specifically tied to a core class. This means that the same trait can be applied to totally different objects. For this reason, the trait does not transparently expose the fields of its core object. So, when writing a rule using a trait interface, only the fields of the interface will be available, as usual. However, any field in the interface that corresponds to a core object field, will be mapped by the proxy class:
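
An illustrative sketch of a core class and a trait sharing hard fields (the Account type is hypothetical):

declare Customer
    @Traitable
    code    : String
    balance : long
    account : Account
end

declare trait GoldenCustomer
    code    : String   // hard field: same name and type as in the core class
    balance : long     // hard field
    account : Account  // hard field
end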


In this case, the code and balance would be read from the underlying Customer object. Likewise, calling setAccount will modify the underlying object, preserving strongly typed access to the data structures. A hard field must have the same name and type both in the core class and in all donned interfaces. The name is used to establish the mapping: if two fields have the same name, then they must also have the same declared type. The annotation @org.drools.core.factmodel.traits.Alias allows you to relax this restriction. If an @Alias is provided, its value string will be used to resolve mappings instead of the original field name. @Alias can be applied both to traits and core beans.


More work is being done on relaxing this constraint (see the experimental section on "logical" traits later). Now, one might wonder what happens when a core class does NOT provide the implementation for a field defined in an interface. We call hard fields those trait fields which are also core fields and thus readily available, while we define soft those fields which are NOT provided by the core class. Hidden fields, instead, are fields in the core class not exposed by the interface.

So, while hard field management is intuitive, there remains the problem of soft and hidden fields. Hidden fields are normally only accessible using the core class directly. However, the "fields" Map can be used on a trait interface to access a hidden field. If the field can't be resolved, null will be returned. Notice that this feature is likely to change in the future.


Soft fields, instead, are stored in a Map-like data structure that is specific to each core object and referenced by the proxy(es), so that they are effectively shared even when an object dons multiple traits.


A core object also holds a reference to all its proxies, so that it is possible to track which type(s) have been added to an object, using a sort of dynamic "instanceof" operator, which we called isA. The operator can accept a String, a class literal or a list of class literals. In the latter case, the constraint is satisfied only if all the traits have been donned.
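
A sketch of such constraints (the fields are illustrative):

$sc : GoldenCustomer( $c : code,                         // hard field getter
                      $p : maxExpense,                   // soft field getter
                      this isA "SeniorGoldenCustomer" )  // dynamic type check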


Eventually, the business logic may require that a trait is removed from a wrapped object. To this end, we provide two options. The first is a "logical don", which will result in a logical insertion of the proxy resulting from the traiting operation. The TMS will ensure that the trait is removed when its logical support is removed in the first place.
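
A sketch, where the third argument requests the logical insertion:

then
    don( $x,                   // core object
         GoldenCustomer.class, // trait class
         true );               // optional flag for logical insertion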


The second is the use of the "shed" keyword, which causes the removal of any type that is a subtype (or equivalent) of the one passed as an argument. Notice that, as of version 5.5, shed would only allow to remove a single specific trait.
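
A minimal sketch:

then
    Thing t = shed( $x, GoldenCustomer.class );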


This operation returns another proxy implementing the org.drools.core.factmodel.traits.Thing interface, where the getFields() and getCore() methods are defined. Internally, in fact, all declared traits are generated to extend this interface (in addition to any others specified). This preserves the wrapper with the soft fields, which would otherwise be lost.

A trait and its proxies are also correlated in another way. Starting from version 5.6, whenever a core object is "modified", its proxies are "modified" automatically as well, to allow trait-based patterns to react to potential changes in hard fields. Likewise, whenever a trait proxy (matched by a trait pattern) is modified, the modification is propagated to the core class and the other traits. Moreover, whenever a don operation is performed, the core object is also modified automatically, to reevaluate any "isA" operation which may be triggered.

Potentially, this may result in a high number of modifications, impacting performance (and correctness) heavily. So two solutions are currently implemented. First, whenever a core object is modified, only the most specific traits (in the sense of inheritance between trait interfaces) are updated and an internal blocking mechanism is in place to ensure that each potentially matching pattern is evaluated once and only once. So, in the following situation:

declare trait GoldenCustomer end
declare trait NationalGoldenCustomer extends GoldenCustomer end
declare trait SeniorGoldenCustomer extends GoldenCustomer end

a modification of an object that is both a GoldenCustomer, a NationalGoldenCustomer and a SeniorGoldenCustomer would cause only the latter two proxies to be actually modified. The first would match any pattern for GoldenCustomer and NationalGoldenCustomer; the latter would instead be prevented from rematching GoldenCustomer, but would be allowed to match SeniorGoldenCustomer patterns. It is not necessary, instead, to modify the GoldenCustomer proxy since it is already covered by at least one other more specific trait.

The second method, left up to the user, is to mark traits as @PropertyReactive. Property reactivity is trait-enabled and takes into account the trait field mappings, so as to block unnecessary propagations.

WARNING : This feature is extremely experimental and subject to changes

Normally, a hard field must be exposed with its original type by all traits donned by an object, to prevent situations such as the following:
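
An illustrative sketch of the conflict:

declare Person
    @Traitable
    name : String
    id   : String
end

declare trait Customer
    id : String
end

declare trait Patient
    id : long   // would clash with the hard field id, declared as String
end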


Should a Person don both Customer and Patient, the type of the hard field id would be ambiguous. However, consider the following example, where GoldenCustomers refer their best friends so that they become Customers as well:
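
A sketch of such a model (field and trait names are illustrative):

declare Person
    @Traitable( logical = true )
    bestFriend : Person
end

declare trait Customer end

declare trait GoldenCustomer extends Customer
    refers : Customer @Alias( "bestFriend" )
end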


Aside from the @Alias, a Person-as-GoldenCustomer's best friend might be compatible with the requirements of the trait GoldenCustomer, provided that they are some kind of Customer themselves. Marking a Person as "logically traitable" - i.e. adding the annotation @Traitable( logical = true ) - will instruct the engine to try and preserve the logical consistency rather than throwing an exception due to a hard field with different type declarations (Person vs Customer). The following operations would then work:
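
A sketch, assuming two Person facts p1 and p2, where p1's bestFriend is p2:

don( p2, Customer.class );        // p2 becomes a Customer first
don( p1, GoldenCustomer.class );  // legal, since p1's best friend p2 is now a Customer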


Notice that, by the time p1 becomes GoldenCustomer, p2 must have already become a Customer themselves, otherwise a runtime exception will be thrown since the very definition of GoldenCustomer would have been violated.

In some cases, however, one might want to infer, rather than verify, that p2 is a Customer by virtue of p1 being a GoldenCustomer. This modality can be enabled by marking Customer as "logical", using the annotation @org.drools.core.factmodel.traits.Trait( logical = true ). In this case, should p2 not be a Customer by the time that p1 becomes a GoldenCustomer, it will automatically don the trait Customer to preserve the logical integrity of the system.

Notice that the annotation on the core class enables the dynamic type management for its fields, whereas the annotation on the traits determines whether they will be enforced as integrity constraints or cascaded dynamically.



A rule specifies that when a particular set of conditions occur, specified in the Left Hand Side (LHS), then do what is specified as a list of actions in the Right Hand Side (RHS). A common question from users is "Why use when instead of if?" "When" was chosen over "if" because "if" is normally part of a procedural execution flow, where, at a specific point in time, a condition is to be checked. In contrast, "when" indicates that the condition evaluation is not tied to a specific evaluation sequence or point in time, but that it happens continually, at any time during the lifetime of the engine; whenever the condition is met, the actions are executed.

A rule must have a name, unique within its rule package. If you define a rule twice in the same DRL it produces an error while loading. If you add a DRL that includes a rule name already in the package, it replaces the previous rule. If a rule name is to have spaces, then it will need to be enclosed in double quotes (it is best to always use double quotes).

Attributes - described below - are optional. They are best written one per line.

The LHS of the rule follows the when keyword (ideally on a new line); similarly, the RHS follows the then keyword (again, ideally on a new line). The rule is terminated by the keyword end. Rules cannot be nested.
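
Schematically, a rule has the following structure:

rule "name"
    attributes
    when
        LHS
    then
        RHS
end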



Rule attributes provide a declarative way to influence the behavior of the rule. Some are quite simple, while others are part of complex subsystems such as ruleflow. To get the most from Drools you should make sure you have a proper understanding of each attribute.


no-loop

default value: false

type: Boolean

When a rule's consequence modifies a fact it may cause the rule to activate again, causing an infinite loop. Setting no-loop to true will skip the creation of another Activation for the rule with the current set of facts.

ruleflow-group

default value: N/A

type: String

Ruleflow is a Drools feature that lets you exercise control over the firing of rules. Rules that are assembled by the same ruleflow-group identifier fire only when their group is active.

lock-on-active

default value: false

type: Boolean

Whenever a ruleflow-group becomes active or an agenda-group receives the focus, any rule within that group that has lock-on-active set to true will not be activated any more; irrespective of the origin of the update, the activation of a matching rule is discarded. This is a stronger version of no-loop, because the change could now be caused not only by the rule itself. It's ideal for calculation rules where you have a number of rules that modify a fact and you don't want any rule re-matching and firing again. Only when the ruleflow-group is no longer active or the agenda-group loses the focus do those rules with lock-on-active set to true become eligible again for their activations to be placed onto the agenda.

salience

default value: 0

type: integer

Each rule has an integer salience attribute which defaults to zero and can be negative or positive. Salience is a form of priority where rules with higher salience values are given higher priority when ordered in the Activation queue.

Drools also supports dynamic salience where you can use an expression involving bound variables.
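
A sketch of a dynamic salience expression (the Element type and its rank field are illustrative):

rule "Fire in rank order 1,2,..."
    salience( -$rank )
when
    Element( $rank : rank )
then
    // fires in ascending rank order
end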


agenda-group

default value: MAIN

type: String

Agenda groups allow the user to partition the Agenda providing more execution control. Only rules in the agenda group that has acquired the focus are allowed to fire.

auto-focus

default value: false

type: Boolean

When a rule is activated where the auto-focus value is true and the rule's agenda group does not have focus yet, then it is given focus, allowing the rule to potentially fire.

activation-group

default value: N/A

type: String

Rules that belong to the same activation-group, identified by this attribute's string value, will only fire exclusively. More precisely, the first rule in an activation-group to fire will cancel all pending activations of all rules in the group, i.e., stop them from firing.

Note: This used to be called Xor group, but technically it's not quite an Xor. You may still hear people mention Xor group; just swap that term in your mind with activation-group.

dialect

default value: as specified by the package

type: String

possible values: "java" or "mvel"

The dialect specifies the language to be used for any code expressions in the LHS or the RHS code block. Currently two dialects are available, Java and MVEL. While the dialect can be specified at the package level, this attribute allows the package definition to be overridden for a rule.

date-effective

default value: N/A

type: String, containing a date and time definition

A rule can only activate if the current date and time is after the date-effective attribute.

date-expires

default value: N/A

type: String, containing a date and time definition

A rule cannot activate if the current date and time is after the date-expires attribute.

duration

default value: no default value

type: long

The duration dictates that the rule will fire after a specified duration, if it is still true.


Rules now support both interval and cron based timers, which replace the now deprecated duration attribute.
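
Schematically, a sketch of the two timer forms:

timer ( int: <initial delay> <repeat interval>? )
timer ( int: 30s )
timer ( int: 30s 5m )

timer ( cron: <cron expression> )
timer ( cron:* 0/15 * * * ? )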


Interval (indicated by "int:") timers follow the semantics of java.util.Timer objects, with an initial delay and an optional repeat interval. Cron (indicated by "cron:") timers follow standard Unix cron expressions:
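
A sketch of a cron timer rule (the Alarm fact, the channels registry and the Sms type are illustrative):

rule "Send SMS every 15 minutes"
    timer ( cron:* 0/15 * * * ? )
when
    $a : Alarm( on == true )
then
    channels[ "sms" ].insert( new Sms( $a.mobileNumber, "The alarm is still on" ) );
end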


A rule controlled by a timer becomes active when it matches, and once for each individual match. Its consequence is executed repeatedly, according to the timer's settings. This stops as soon as the condition doesn't match any more.

Consequences are executed even after control returns from a call to fireUntilHalt. Moreover, the Engine remains reactive to any changes made to the Working Memory. For instance, removing a fact that was involved in triggering the timer rule's execution causes the repeated execution to terminate, while inserting a fact so that some rule matches will cause that rule to fire. The Engine is not continually active, however; it becomes active only after a rule fires, for whatever reason. Thus, reactions to an insertion done asynchronously will not happen until the next execution of a timer-controlled rule. Disposing a session puts an end to all timer activity.

Conversely, when the rule engine runs in passive mode (i.e., using fireAllRules instead of fireUntilHalt), by default it doesn't fire consequences of timed rules unless fireAllRules is invoked again. However, it is possible to change this default behavior by configuring the KieSession with a TimedRuleExecutionOption, as shown in the following example.
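
A minimal sketch (assuming an existing KieBase):

KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( TimedRuleExecutionOption.YES );
KieSession ksession = kbase.newKieSession( ksconf, null );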


It is also possible to have finer grained control over which timed rules are automatically executed. To do this it is necessary to set a FILTERED TimedRuleExecutionOption, which allows you to define a callback to filter those rules, as done in the next example.
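
A sketch, filtering on an illustrative rule name:

KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration();
ksconf.setOption( new TimedRuleExecutionOption.FILTERED( new TimedRuleExecutionFilter() {
    public boolean accept( Rule[] rules ) {
        return rules[0].getName().equals( "MyRule" );
    }
} ) );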


As regards interval timers, it is also possible to define both the delay and the interval as an expression instead of a fixed value. To do that it is necessary to use an expression timer (indicated by "expr:"), as in the following example:
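
A sketch, reading both values from an illustrative Bean fact:

declare Bean
    delay  : String = "30s"
    period : long = 60000
end

rule "Expression timer"
    timer ( expr: $d, $p )
when
    Bean( $d : delay, $p : period )
then
end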


The expressions, $d and $p in this case, can use any variable defined in the pattern matching part of the rule, and can be any String that can be parsed into a time duration or any numeric value that will be internally converted into a long representing a duration expressed in milliseconds.

Both interval and expression timers can have 3 optional parameters named "start", "end" and "repeat-limit". When one or more of these parameters are used the first part of the timer definition must be followed by a semicolon ';' and the parameters have to be separated by a comma ',' as in the following example:
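
A sketch combining the three parameters (the dates and the limit are illustrative):

timer ( int: 30s 10s; start=3-JAN-2010, end=4-JAN-2010, repeat-limit=50 )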


The value for the start and end parameters can be a Date, a String representing a Date, or a long (more generally, any Number) that will be transformed into a Java Date by applying the following conversion:

new Date( ((Number) n).longValue() )

Conversely, the repeat-limit can only be an integer, and it defines the maximum number of repetitions allowed by the timer. If both the end and the repeat-limit parameters are set, the timer will stop when the first of the two is reached.

Using the start parameter implies the definition of a phase for the timer, where the beginning of the phase is given by the start itself plus the initial delay. In other words, in this case the timed rule will be scheduled at times:

start + delay + n*period

for up to repeat-limit times and no later than the end timestamp (whichever comes first). For instance, a rule having the following interval timer

timer ( int: 30s 1m; start="3-JAN-2010" )

will be scheduled at the 30th second of every minute after midnight of 3-JAN-2010. This also means that if, for example, you turn the system on at midnight of 3-FEB-2010, it won't be scheduled immediately but will preserve the phase defined by the timer, and so it will be scheduled for the first time 30 seconds after midnight. If for some reason the system is paused (e.g., the session is serialized and then deserialized after a while), the rule will be scheduled only once to recover from missed activations (regardless of how many activations were missed) and will subsequently be scheduled again in phase with the timer.

Calendars are used to control when rules can fire. The Calendar API is modelled on Quartz:
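
A sketch of adapting a Quartz calendar (quartzCal is an already built org.quartz.Calendar):

Calendar weekDayCal = QuartzHelper.quartzCalendarAdapter( quartzCal );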


Calendars are registered with the KieSession:
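
A minimal sketch:

ksession.getCalendars().set( "weekday", weekDayCal );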


They can be used in conjunction with normal rules and rules including timers. The rule attribute "calendars" may contain one or more comma-separated calendar names written as string literals.
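
A sketch of a rule using both a calendar and a timer (the Alarm fact and send() are illustrative):

rule "weekdays are high priority"
    calendars "weekday"
    timer ( int: 0 1h )
when
    Alarm()
then
    send( "priority high - we have an alarm" );
end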


Any bean property can be used directly. A bean property is exposed using a standard Java bean getter: a method getMyProperty() (or isMyProperty() for a primitive boolean) which takes no arguments and returns something. For example: the age property is written as age in DRL instead of the getter getAge():

Person( age == 50 )

// this is the same as:
Person( getAge() == 50 )

Drools uses the standard JDK Introspector class to do this mapping, so it follows the standard Java bean specification.

Nested property access is also supported:

Person( address.houseNumber == 50 )

// this is the same as:
Person( getAddress().getHouseNumber() == 50 )

Nested properties are also indexed.

You can use any Java expression that returns a boolean as a constraint inside the parentheses of a pattern. Java expressions can be mixed with other expression enhancements, such as property access:

Person( age == 50 )

It is possible to change the evaluation priority by using parentheses, as in any logic or mathematical expression:

Person( age > 100 && ( age % 10 == 0 ) )

It is possible to reuse Java methods:

Person( Math.round( weight / ( height * height ) ) < 25.0 )

Normal Java operator precedence applies, see the operator precedence list below.

Type coercion is always attempted if the field and the value are of different types; exceptions will be thrown if a bad coercion is attempted. For instance, if "ten" is provided as a string in a numeric evaluator, an exception is thrown, whereas "10" would coerce to a numeric 10. Coercion is always in favor of the field type and not the value type:

Person( age == "10" ) // "10" is coerced to 10

Coercion to the correct value for the evaluator and the field will be attempted.

Patterns now support positional arguments on type declarations.

Positional arguments are ones where you don't need to specify the field name, as the position maps to a known named field. i.e. Person( name == "mark" ) can be rewritten as Person( "mark"; ). The semicolon ';' is important so that the engine knows that everything before it is a positional argument. Otherwise we might assume it was a boolean expression, which is how it could be interpreted after the semicolon. You can mix positional and named arguments on a pattern by using the semicolon ';' to separate them. Any variables used in a positional that have not yet been bound will be bound to the field that maps to that position.

declare Cheese
    name : String
    shop : String
    price : int
end

Example patterns, with two constraints and a binding. Remember semicolon ';' is used to differentiate the positional section from the named argument section. Variables, literals, and expressions using just literals are supported in positional arguments; expressions using variables are not. Positional arguments are always resolved using unification.

Cheese( "stilton", "Cheese Shop", p; )
Cheese( "stilton", "Cheese Shop"; p : price )
Cheese( "stilton"; shop == "Cheese Shop", p : price )
Cheese( name == "stilton"; shop == "Cheese Shop", p : price )

Positional arguments that are given a previously declared binding will constrain against that using unification; these are referred to as input arguments. If the binding does not yet exist, it will create the declaration binding it to the field represented by the position argument; these are referred to as output arguments.

When you call modify() (see the modify statement section) on a given object it will trigger a re-evaluation of all patterns of the matching object type in the knowledge base. This can lead to unwanted and useless evaluations, and in the worst case to infinite recursions. The only workaround used to be to split up your objects into smaller ones having a one-to-one relationship with the original object.

This feature allows the pattern matching to only react to modification of properties actually constrained or bound inside of a given pattern. That will help with performance and recursion and avoid artificial object splitting.

By default this feature is off in order to make the behavior of the rule engine backward compatible with the former releases. When you want to activate it on a specific bean you have to annotate it with @propertyReactive. This annotation works both on DRL type declarations:

declare Person
@propertyReactive
    firstName : String
    lastName : String
end

and on Java classes:

@PropertyReactive
public static class Person {
    private String firstName;
    private String lastName;
}

In this way, for instance, if you have a rule like the following:

rule "Every person named Mario is a male" when
    $person : Person( firstName == "Mario" )
then
    modify ( $person )  { setMale( true ) }
end

you won't have to add the no-loop attribute to it in order to avoid an infinite recursion because the engine recognizes that the pattern matching is done on the 'firstName' property while the RHS of the rule modifies the 'male' one. Note that this feature does not work for update(), and this is one of the reasons why we promote modify() since it encapsulates the field changes within the statement. Moreover, on Java classes, you can also annotate any method to say that its invocation actually modifies other properties. For instance in the former Person class you could have a method like:

@Modifies( { "firstName", "lastName" } )
public void setName(String name) {
    String[] names = name.split("\\s");
    this.firstName = names[0];
    this.lastName = names[1];
}

That means that if a rule has a RHS like the following:

modify($person) { setName("Mario Fusco") }

it will correctly recognize that the values of both properties 'firstName' and 'lastName' could have potentially been modified and act accordingly, without missing the re-evaluation of the patterns constrained on them. At the moment the usage of @Modifies is not allowed on fields but only on methods. This is coherent with the most common scenario where @Modifies is used for methods that are not related to a single class field, as in the Person.setName() example above. Also note that @Modifies is not transitive, meaning that if another method internally invokes Person.setName() it won't be enough to annotate it with @Modifies( { "name" } ); it is necessary to use @Modifies( { "firstName", "lastName" } ) on it as well. Very likely @Modifies transitivity will be implemented in the next release.

As regards nested accessors, the engine will be notified only of top level field changes. In other words, a pattern match like:

Person ( address.city.name == "London" ) 

will be re-evaluated only for modifications of the 'address' property of a Person object. In the same way, the constraint analysis is currently strictly limited to what is inside a pattern. Another example may help to clarify this. An LHS like the following:

$p : Person( )
Car( owner == $p.name )

will not listen on modifications of the person's name, while this one will do:

Person( $name : name )
Car( owner == $name )

To overcome this problem it is possible to annotate a pattern with @watch, as follows:

$p : Person( ) @watch ( name )
Car( owner == $p.name )

Indeed, annotating a pattern with @watch allows you to modify the inferred set of properties for which that pattern will react. Note that the properties named in the @watch annotation are added to the ones automatically inferred. It is also possible to explicitly exclude one or more of them by prepending their name with a !, and to make the pattern listen for all or none of the properties of the type used in the pattern with the wildcards * and !* respectively. So, for example, you can annotate a pattern in the LHS of a rule like:

// listens for changes on both firstName (inferred) and lastName
Person( firstName == $expectedFirstName ) @watch( lastName )

// listens for all the properties of the Person bean
Person( firstName == $expectedFirstName ) @watch( * )

// listens for changes on lastName and explicitly exclude firstName
Person( firstName == $expectedFirstName ) @watch( lastName, !firstName )

// listens for changes on all the properties except the age one
Person( firstName == $expectedFirstName ) @watch( *, !age )

Since it doesn't make sense to use this annotation on a pattern using a type not annotated with @PropertyReactive, the rule compiler will raise a compilation error if you try to do so. Duplicated usage of the same property in @watch (for example: @watch( firstName, !firstName )) also results in a compilation error. In a future release we will make the automatic detection of the properties to be listened to smarter, by doing analysis even outside of the pattern.

It is also possible to enable this feature by default on all the types of your model, or to completely disallow it, by using an option of the KnowledgeBuilderConfiguration. In particular this PropertySpecificOption can have one of the following 3 values:

- DISABLED => the feature is turned off and all the other related annotations are just ignored
- ALLOWED => this is the default behavior: types are not property reactive unless they are annotated with @PropertyReactive
- ALWAYS => all types are property reactive by default

So, for example, to have a KnowledgeBuilder generating property reactive types by default you could do:

KnowledgeBuilderConfiguration config = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
config.setOption(PropertySpecificOption.ALWAYS);
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(config);

In this last case it will be possible to disable the property reactivity feature on a specific type by annotating it with @ClassReactive.

The Conditional Element or is used to group other Conditional Elements into a logical disjunction. Drools supports both prefix or and infix or.


Traditional infix or is supported:

//infixOr
Cheese( cheeseType : type ) or Person( favouriteCheese == cheeseType )

Explicit grouping with parentheses is also supported:

//infixOr with grouping
( Cheese( cheeseType : type ) or
  ( Person( favouriteCheese == cheeseType ) and
    Person( favouriteCheese == cheeseType ) ) )

Note

The symbol || (as an alternative to or) is deprecated, but it is still supported in the syntax for backwards compatibility.


Prefix or is also supported:

(or Person( sex == "f", age > 60 )
    Person( sex == "m", age > 65 ) )

Note

The behavior of the Conditional Element or is different from the connective || for constraints and restrictions in field constraints. The engine actually has no understanding of the Conditional Element or. Instead, via a number of different logic transformations, a rule with or is rewritten as a number of subrules. This process ultimately results in a rule that has a single or as the root node and one subrule for each of its CEs. Each subrule can activate and fire like any normal rule; there is no special behavior or interaction between these subrules. This can be most confusing to new rule authors.

The Conditional Element or also allows for optional pattern binding. This means that each resulting subrule will bind its pattern to the pattern binding. Each pattern must be bound separately, using eponymous variables:

pensioner : ( Person( sex == "f", age > 60 ) or Person( sex == "m", age > 65 ) )
(or pensioner : Person( sex == "f", age > 60 ) 
    pensioner : Person( sex == "m", age > 65 ) )

Since the conditional element or results in multiple subrule generation, one for each possible logical outcome, the example above would result in the internal generation of two rules. These two rules work independently within the Working Memory, which means both can match, activate and fire; there is no shortcutting.

The best way to think of the conditional element or is as a shortcut for generating two or more similar rules. When you think of it that way, it's clear that for a single rule there could be multiple activations if two or more terms of the disjunction are true.


The Conditional Element forall completes the First Order Logic support in Drools. The Conditional Element forall evaluates to true when all facts that match the first pattern match all the remaining patterns. Example:

rule "All English buses are red"
when
    forall( $bus : Bus( type == 'english') 
                   Bus( this == $bus, color == 'red' ) )
then
    // all English buses are red
end

In the above rule, we "select" all Bus objects whose type is "english". Then, for each fact that matches this pattern we evaluate the following patterns and if they match, the forall CE will evaluate to true.

To state that all facts of a given type in the working memory must match a set of constraints, forall can be written with a single pattern for simplicity. Example:
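
A sketch (the Bus fact is illustrative):

rule "All Buses are Red"
when
    forall( Bus( color == 'red' ) )
then
    // all Bus facts are red
end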


Another example shows multiple patterns inside the forall:
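
A sketch (the Employee, HealthCare and DentalCare facts are illustrative):

rule "All employees have health and dental care programs"
when
    forall( $emp : Employee()
            HealthCare( employee == $emp )
            DentalCare( employee == $emp ) )
then
    // all employees have health and dental care
end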


Forall can be nested inside other CEs. For instance, forall can be used inside a not CE. Note that only single patterns have optional parentheses, so that with a nested forall parentheses must be used:
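
A sketch, negating the previous forall:

rule "Not all employees have health and dental care"
when
    not ( forall( $emp : Employee()
                  HealthCare( employee == $emp )
                  DentalCare( employee == $emp ) ) )
then
    // not all employees have health and dental care
end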


As a side note, forall( p1 p2 p3...) is equivalent to writing:

not(p1 and not(and p2 p3...))

Also, it is important to note that forall is a scope delimiter. Therefore, it can use any previously bound variable, but no variable bound inside it will be available for use outside of it.


The Conditional Element from enables users to specify an arbitrary source for data to be matched by LHS patterns. This allows the engine to reason over data not in the Working Memory. The data source could be a sub-field on a bound variable or the results of a method call. It is a powerful construction that allows out of the box integration with other application components and frameworks. One common example is the integration with data retrieved on-demand from databases using Hibernate named queries.

The expression used to define the object source is any expression that follows regular MVEL syntax. Therefore, it allows you to easily use object property navigation, execute method calls and access maps and collections elements.

Here is a simple example of reasoning and binding on another pattern sub-field:

rule "validate zipcode"
when
    Person( $personAddress : address ) 
    Address( zipcode == "23920W") from $personAddress 
then
    // zip code is ok
end

With all the flexibility of the expressiveness in the Drools engine you can slice and dice this problem many ways. The following rule is equivalent, but shows how you can use a graph notation with from:

rule "validate zipcode"
when
    $p : Person( ) 
    $a : Address( zipcode == "23920W") from $p.address 
then
    // zip code is ok
end

Previous examples were evaluations using a single pattern. The CE from also supports object sources that return a collection of objects. In that case, from will iterate over all objects in the collection and try to match each of them individually. For instance, if we want a rule that applies a 10% discount to each item in an order, we could do:

rule "apply 10% discount to all items over US$ 100,00 in an order"
when
    $order : Order()
    $item  : OrderItem( value > 100 ) from $order.items
then
    // apply discount to $item
end

The above example will cause the rule to fire once for each item whose value is greater than 100 for each given order.

You must take caution, however, when using from, especially in conjunction with the lock-on-active rule attribute as it may produce unexpected results. Consider the example provided earlier, but now slightly modified as follows:

rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
    $p : Person( ) 
    $a : Address( state == "NC") from $p.address 
then
    modify ($p) {} // Assign person to sales region 1 in a modify block
end

rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
    $p : Person( ) 
    $a : Address( city == "Raleigh") from $p.address 
then
    modify ($p) {} // Apply discount to person in a modify block
end

In the above example, persons in Raleigh, NC should be assigned to sales region 1 and receive a discount; i.e., you would expect both rules to activate and fire. Instead you will find that only the second rule fires.

If you were to turn on the audit log, you would also see that when the second rule fires, it deactivates the first rule. Since the rule attribute lock-on-active prevents a rule from creating new activations when a set of facts changes, the first rule fails to reactivate. Though the set of facts has not changed, the use of from returns a new fact for all intents and purposes each time it is evaluated.

First, it's important to review why you would use the above pattern. You may have many rules across different rule-flow groups. When rules modify working memory and other rules downstream of your RuleFlow (in different rule-flow groups) need to be reevaluated, the use of modify is critical. You don't, however, want other rules in the same rule-flow group to place activations on one another recursively. In this case, the no-loop attribute is ineffective, as it would only prevent a rule from activating itself recursively. Hence, you resort to lock-on-active.

There are several ways to address this issue:

  • Avoid the use of from when you can assert all facts into working memory or use nested object references in your constraint expressions (shown below).

  • Place the variable used in the modify block as the last condition in your LHS.

  • Avoid the use of lock-on-active when you can explicitly manage how rules within the same rule-flow group place activations on one another (explained below).

The preferred solution is to minimize the use of from by asserting all your facts into working memory directly. In the example above, both the Person and Address instances can be asserted into working memory. In this case, because the graph is fairly simple, an even easier solution is to modify your rules as follows:

rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
    $p : Person(address.state == "NC" )  
then
    modify ($p) {} // Assign person to sales region 1 in a modify block
end

rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
    $p : Person(address.city == "Raleigh" )  
then
    modify ($p) {} //Apply discount to person in a modify block
end

Now, you will find that both rules fire as expected. However, it is not always possible to access nested facts as above. Consider an example where a Person holds one or more Addresses and you wish to use an existential quantifier to match people with at least one address that meets certain conditions. In this case, you would have to resort to the use of from to reason over the collection.

There are several ways to use from to achieve this and not all of them exhibit an issue with the use of lock-on-active. For example, the following use of from causes both rules to fire as expected:

rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
    $p : Person($addresses : addresses)
    exists (Address(state == "NC") from $addresses)  
then
    modify ($p) {} // Assign person to sales region 1 in a modify block
end

rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
    $p : Person($addresses : addresses)
    exists (Address(city == "Raleigh") from $addresses)  
then
    modify ($p) {} // Apply discount to person in a modify block
end

However, the following slightly different approach does exhibit the problem:

rule "Assign people in North Carolina (NC) to sales region 1"
ruleflow-group "test"
lock-on-active true
when
    $assessment : Assessment()
    $p : Person()
    $addresses : List() from $p.addresses
    exists (Address( state == "NC") from $addresses) 
then
    modify ($assessment) {} // Modify assessment in a modify block
end

rule "Apply a discount to people in the city of Raleigh"
ruleflow-group "test"
lock-on-active true
when
    $assessment : Assessment()
    $p : Person()
    $addresses : List() from $p.addresses 
    exists (Address( city == "Raleigh") from $addresses)
then
    modify ($assessment) {} // Modify assessment in a modify block
end

In the above example, the $addresses variable is returned from the use of from. The example also introduces a new object, assessment, to highlight one possible solution in this case. If the $assessment variable assigned in the condition (LHS) is moved to the last condition in each rule, both rules fire as expected.

Though the above examples demonstrate how to combine the use of from with lock-on-active where no loss of rule activations occurs, they carry the drawback of placing a dependency on the order of conditions on the LHS. In addition, the solutions present greater complexity for the rule author in terms of keeping track of which conditions may create issues.

A better alternative is to assert more facts into working memory. In this case, a person's addresses may be asserted into working memory and the use of from would not be necessary.

There are cases, however, where asserting all data into working memory is not practical and we need to find other solutions. Another option is to reevaluate the need for lock-on-active. An alternative to lock-on-active is to directly manage how rules within the same rule-flow group activate one another by including conditions in each rule that prevent rules from activating each other recursively when working memory is modified. For example, in the case above where a discount is applied to citizens of Raleigh, a condition may be added to the rule that checks whether the discount has already been applied. If so, the rule does not activate.


The Conditional Element collect allows rules to reason over a collection of objects obtained from the given source or from the working memory. In First Order Logic terms this is the cardinality quantifier. A simple example:

import java.util.ArrayList

rule "Raise priority if system has more than 3 pending alarms"
when
    $system : System()
    $alarms : ArrayList( size >= 3 )
              from collect( Alarm( system == $system, status == 'pending' ) )
then
    // Raise priority, because system $system has
    // 3 or more alarms pending. The pending alarms
    // are $alarms.
end

In the above example, the rule will look for all pending alarms in the working memory for each given system and group them in ArrayLists. If 3 or more alarms are found for a given system, the rule will fire.

The result pattern of collect can be any concrete class that implements the java.util.Collection interface and provides a default no-arg public constructor. This means that you can use Java collections like ArrayList, LinkedList and HashSet, or your own class, as long as it meets those two requirements.

Both source and result patterns can be constrained as any other pattern.

Variables bound before the collect CE are in the scope of both source and result patterns and therefore you can use them to constrain both your source and result patterns. But note that collect is a scope delimiter for bindings, so that any binding made inside of it is not available for use outside of it.

Collect accepts nested from CEs. The following example is a valid use of "collect":

import java.util.LinkedList;

rule "Send a message to all mothers"
when
    $town : Town( name == 'Paris' )
    $mothers : LinkedList() 
               from collect( Person( gender == 'F', children > 0 ) 
                             from $town.getPeople() 
                           )
then
    // send a message to all mothers
end

The Conditional Element accumulate is a more flexible and powerful form of collect, in the sense that it can be used to do what collect does and also achieve results that the CE collect is not capable of achieving. Accumulate allows a rule to iterate over a collection of objects, executing custom actions for each of the elements, and at the end, it returns a result object.

Accumulate supports both the use of pre-defined accumulate functions and the use of inline custom code. Inline custom code should be avoided though, as it is harder for rule authors to maintain, and frequently leads to code duplication. Accumulate functions are easier to test and reuse.

The Accumulate CE also supports multiple different syntaxes. The preferred syntax is the top level accumulate, as noted below, but all other syntaxes are supported for backward compatibility.

The top level accumulate syntax is the most compact and flexible syntax. The simplified syntax is as follows:

accumulate( <source pattern>; <functions> [;<constraints>] )

For instance, a rule to calculate the minimum, maximum and average temperature reading for a given sensor, raising an alarm if the minimum temperature is under 20°C and the average is over 70°C, could be written in the following way, using Accumulate:

rule "Raise alarm"
when
    $s : Sensor()
    accumulate( Reading( sensor == $s, $temp : temperature );
                $min : min( $temp ),
                $max : max( $temp ),
                $avg : average( $temp );
                $min < 20, $avg > 70 )
then
    // raise the alarm
end

In the above example, min, max and average are Accumulate Functions and will calculate the minimum, maximum and average temperature values over all the readings for each sensor.

Drools ships with several built-in accumulate functions, including:

  • average

  • min

  • max

  • count

  • sum

  • collectList

  • collectSet

These common functions accept any expression as input. For instance, if someone wants to calculate the average profit on all items of an order, a rule could be written using the average function:

rule "Average profit"
when
    $order : Order()
    accumulate( OrderItem( order == $order, $cost : cost, $price : price );
                $avgProfit : average( 1 - $cost / $price ) )
then
    // average profit for $order is $avgProfit
end

Accumulate Functions are all pluggable. That means that, if needed, custom, domain specific functions can easily be added to the engine and rules can start to use them without any restrictions. To implement a new Accumulate Function all one needs to do is to create a Java class that implements the org.kie.api.runtime.rule.AccumulateFunction interface. As an example of an Accumulate Function implementation, the following is the implementation of the average function:

/**
 * An implementation of an accumulator capable of calculating average values
 */
public class AverageAccumulateFunction implements org.kie.api.runtime.rule.AccumulateFunction {

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {

    }

    public void writeExternal(ObjectOutput out) throws IOException {

    }

    public static class AverageData implements Externalizable {
        public int    count = 0;
        public double total = 0;

        public AverageData() {}

        public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
            count   = in.readInt();
            total   = in.readDouble();
        }

        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeInt(count);
            out.writeDouble(total);
        }

    }

    /* (non-Javadoc)
     * @see org.drools.base.accumulators.AccumulateFunction#createContext()
     */
    public Serializable createContext() {
        return new AverageData();
    }

    /* (non-Javadoc)
     * @see org.drools.core.base.accumulators.AccumulateFunction#init(java.lang.Object)
     */
    public void init(Serializable context) throws Exception {
        AverageData data = (AverageData) context;
        data.count = 0;
        data.total = 0;
    }

    /* (non-Javadoc)
     * @see org.drools.core.base.accumulators.AccumulateFunction#accumulate(java.lang.Object, java.lang.Object)
     */
    public void accumulate(Serializable context,
                           Object value) {
        AverageData data = (AverageData) context;
        data.count++;
        data.total += ((Number) value).doubleValue();
    }

    /* (non-Javadoc)
     * @see org.drools.core.base.accumulators.AccumulateFunction#reverse(java.lang.Object, java.lang.Object)
     */
    public void reverse(Serializable context,
                        Object value) throws Exception {
        AverageData data = (AverageData) context;
        data.count--;
        data.total -= ((Number) value).doubleValue();
    }

    /* (non-Javadoc)
     * @see org.drools.core.base.accumulators.AccumulateFunction#getResult(java.lang.Object)
     */
    public Object getResult(Serializable context) throws Exception {
        AverageData data = (AverageData) context;
        return new Double( data.count == 0 ? 0 : data.total / data.count );
    }

    /* (non-Javadoc)
     * @see org.drools.core.base.accumulators.AccumulateFunction#supportsReverse()
     */
    public boolean supportsReverse() {
        return true;
    }

    /**
     * {@inheritDoc}
     */
    public Class< ? > getResultType() {
        return Number.class;
    }

}

The code for the function is very simple, as one would expect, since all the "dirty" integration work is done by the engine. Finally, to use the function in the rules, the author can import it using the "import accumulate" statement:

import accumulate <class_name> <function_name>

For instance, if one implements the class some.package.VarianceFunction, which computes the variance, and wants to use it in the rules, he would do the following:
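
import accumulate some.package.VarianceFunction variance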


Note

The built in functions (sum, average, etc) are imported automatically by the engine. Only user-defined custom accumulate functions need to be explicitly imported.

Note

For backward compatibility, Drools still supports the configuration of accumulate functions through configuration files and system properties, but this is a deprecated method. In order to configure the variance function from the previous example using the configuration file or system property, the user would set a property like this:

drools.accumulate.function.variance = some.package.VarianceFunction

Please note that "drools.accumulate.function." is a prefix that must always be used, "variance" is how the function will be used in the drl files, and "some.package.VarianceFunction" is the fully qualified name of the class that implements the function behavior.

Another possible syntax for the accumulate is to define inline custom code, instead of using accumulate functions. As noted previously, this is discouraged for the stated reasons.

The general syntax of the accumulate CE with inline custom code is:

<result pattern> from accumulate( <source pattern>,
                                  init( <init code> ),
                                  action( <action code> ),
                                  reverse( <reverse code> ),
                                  result( <result expression> ) )

The meaning of each of the elements is the following:

  • <source pattern>: the source pattern is a regular pattern that the engine will try to match against each of the source objects.

  • <init code>: this is a semantic block of code in the selected dialect that will be executed once for each tuple, before iterating over the source objects.

  • <action code>: this is a semantic block of code in the selected dialect that will be executed for each of the source objects.

  • <reverse code>: this is an optional semantic block of code in the selected dialect that, if present, will be executed for each source object removed from the set of matched objects, allowing the engine to do decremental calculation when objects are deleted or modified, greatly improving performance.

  • <result expression>: this is a semantic expression in the selected dialect that is executed after all source objects are iterated, producing the result object.

It is easier to understand if we look at an example:

rule "Apply 10% discount to orders over US$ 100,00"
when
    $order : Order()
    $total : Number( doubleValue > 100 ) 
             from accumulate( OrderItem( order == $order, $value : value ),
                              init( double total = 0; ),
                              action( total += $value; ),
                              reverse( total -= $value; ),
                              result( total ) )
then
    // apply discount to $order
end

In the above example, for each Order in the Working Memory, the engine will execute the init code initializing the total variable to zero. Then it will iterate over all OrderItem objects for that order, executing the action for each one (in the example, it will sum the value of all items into the total variable). After iterating over all OrderItem objects, it will return the value corresponding to the result expression (in the above example, the value of variable total). Finally, the engine will try to match the result with the Number pattern, and if the double value is greater than 100, the rule will fire.

The example used Java as the semantic dialect, and as such, note that the usage of the semicolon as statement delimiter is mandatory in the init, action and reverse code blocks. The result is an expression and, as such, it does not admit ';'. If the user uses any other dialect, he must comply with that dialect's specific syntax.

As mentioned before, the reverse code is optional, but it is strongly recommended that the user writes it in order to benefit from the improved performance on update and delete.

The accumulate CE can be used to execute any action on source objects. The following example instantiates and populates a custom object:

rule "Accumulate using custom objects"
when
    $person   : Person( $likes : likes )
    $cheesery : Cheesery( totalAmount > 100 )
                from accumulate( $cheese : Cheese( type == $likes ),
                                 init( Cheesery cheesery = new Cheesery(); ),
                                 action( cheesery.addCheese( $cheese ); ),
                                 reverse( cheesery.removeCheese( $cheese ); ),
                                 result( cheesery ) );
then
    // do something
end

The Right Hand Side (RHS) is a common name for the consequence or action part of the rule; this part should contain a list of actions to be executed. It is bad practice to use imperative or conditional code in the RHS of a rule, as a rule should be atomic in nature - "when this, then do this", not "when this, maybe do this". The RHS part of a rule should also be kept small, thus keeping it declarative and readable. If you find you need imperative and/or conditional code in the RHS, then maybe you should be breaking that rule down into multiple rules. The main purpose of the RHS is to insert, delete or modify working memory data. To assist with that there are a few convenience methods you can use to modify working memory, without having to first reference a working memory instance.

update(object, handle); will tell the engine that an object has changed (one that has been bound to something on the LHS) and rules may need to be reconsidered.

update(object); can also be used; here the Knowledge Helper will look up the facthandle for you, via an identity check, for the passed object. (Note that if you provide Property Change Listeners to your Java beans that you are inserting into the engine, you can avoid the need to call update() when the object changes.). After a fact's field values have changed you must call update before changing another fact, or you will cause problems with the indexing within the rule engine. The modify keyword avoids this problem.

insert(new Something()); will place a new object of your creation into the Working Memory.

insertLogical(new Something()); is similar to insert, but the object will be automatically deleted when there are no more facts to support the truth of the currently firing rule.

delete(handle); removes an object from Working Memory.

These convenience methods are basically macros that provide short cuts to the KnowledgeHelper instance that lets you access your Working Memory from rules files. The predefined variable drools of type KnowledgeHelper lets you call several other useful methods. (Refer to the KnowledgeHelper interface documentation for more advanced operations).

The full Knowledge Runtime API is exposed through another predefined variable, kcontext, of type KieContext. Its method getKieRuntime() delivers an object of type KieRuntime, which, in turn, provides access to a wealth of methods, many of which are quite useful for coding RHS logic.
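As an illustration, the following consequence combines these convenience methods; the Request and Notification fact types are assumed purely for the example:

rule "process pending request"
when
    $req : Request( status == "pending" )
    $old : Request( this != $req, status == "expired" )
then
    modify( $req ) { setStatus( "processed" ) } // tell the engine that $req changed
    insert( new Notification( $req.getId() ) ); // add a new fact to working memory
    delete( $old );                             // remove the expired request from working memory
end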

Sometimes the constraint of having one single consequence for each rule can be somewhat limiting and leads to verbose and difficult-to-maintain repetitions like in the following example:

rule "Give 10% discount to customers older than 60"
when
    $customer : Customer( age > 60 )
then
    modify($customer) { setDiscount( 0.1 ) };
end

rule "Give free parking to customers older than 60"
when
    $customer : Customer( age > 60 )
    $car : Car ( owner == $customer )
then
    modify($car) { setFreeParking( true ) };
end

It is already possible to partially overcome this problem by making the second rule extend the first one, as in:

rule "Give 10% discount to customers older than 60"
when
    $customer : Customer( age > 60 )
then
    modify($customer) { setDiscount( 0.1 ) };
end

rule "Give free parking to customers older than 60"
    extends "Give 10% discount to customers older than 60"
when
    $car : Car ( owner == $customer )
then
    modify($car) { setFreeParking( true ) };
end

However, this feature makes it possible to define additional labelled consequences beyond the default one in a single rule, so, for example, the two former rules can be compacted into one as follows:

rule "Give 10% discount and free parking to customers older than 60"
when
    $customer : Customer( age > 60 )
    do[giveDiscount]
    $car : Car ( owner == $customer )
then
    modify($car) { setFreeParking( true ) };
then[giveDiscount]
    modify($customer) { setDiscount( 0.1 ) };
end

This last rule has two consequences: the usual default one, plus another one named "giveDiscount" that is activated, using the keyword do, as soon as a customer older than 60 is found in the knowledge base, regardless of whether or not they own a car. The activation of a named consequence can also be guarded by an additional condition, as in this further example:

rule "Give free parking to customers older than 60 and 10% discount to golden ones among them"
when
    $customer : Customer( age > 60 )
    if ( type == "Golden" ) do[giveDiscount]
    $car : Car ( owner == $customer )
then
    modify($car) { setFreeParking( true ) };
then[giveDiscount]
    modify($customer) { setDiscount( 0.1 ) };
end

The condition in the if statement is always evaluated on the pattern immediately preceding it. Finally, this last, slightly more complicated example shows how it is possible to switch over different conditions using a nested if/else statement:

rule "Give free parking and 10% discount to over 60 Golden customer and 5% to Silver ones"
when
    $customer : Customer( age > 60 )
    if ( type == "Golden" ) do[giveDiscount10]
    else if ( type == "Silver" ) break[giveDiscount5]
    $car : Car ( owner == $customer )
then
    modify($car) { setFreeParking( true ) };
then[giveDiscount10]
    modify($customer) { setDiscount( 0.1 ) };
then[giveDiscount5]
    modify($customer) { setDiscount( 0.05 ) };
end

Here the purpose is to give a 10% discount AND free parking to Golden customers over 60, but only a 5% discount (without free parking) to the Silver ones. This result is achieved by activating the consequence named "giveDiscount5" using the keyword break instead of do. In fact do just schedules a consequence in the agenda, allowing the remaining part of the LHS to continue being evaluated as normal, while break also blocks any further pattern matching evaluation. Note, of course, that activating a named consequence with break when it is not guarded by any condition doesn't make sense (and generates a compile time error), since otherwise the LHS part following it would never be reachable.


A query is a simple way to search the working memory for facts that match the stated conditions. Therefore, it contains only the structure of the LHS of a rule, so that you specify neither "when" nor "then". A query has an optional set of parameters, each of which can be optionally typed. If the type is not given, the type Object is assumed. The engine will attempt to coerce the values as needed. Query names are global to the KieBase, so do not add queries of the same name to different packages for the same KieBase.

To return the results use ksession.getQueryResults("name"), where "name" is the query's name. This returns a list of query results, which allow you to retrieve the objects that matched the query.

The first example presents a simple query for all the people over the age of 30. The second one, using parameters, combines the age limit with a location.
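
query "people over the age of 30"
    person : Person( age > 30 )
end

query "people over the age of x" (int x, String y)
    person : Person( age > x, location == y )
end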



We iterate over the returned QueryResults using a standard "for" loop. Each element is a QueryResultsRow which we can use to access each of the columns in the tuple. These columns can be accessed by bound declaration name or index position.
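
QueryResults results = ksession.getQueryResults( "people over the age of 30" );
System.out.println( "we have " + results.size() + " people over the age of 30" );

for ( QueryResultsRow row : results ) {
    Person person = (Person) row.get( "person" ); // access by bound declaration name
    System.out.println( person.getName() );
}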


Support for positional syntax has been added for more compact code. By default the declared type order in the type declaration matches the argument position. But it is possible to override these using the @position annotation. This allows patterns to be used with positional arguments, instead of the more verbose named arguments.

declare Cheese
    name : String @position(1)
    shop : String @position(2)
    price : int @position(0)
end

The @Position annotation, in the org.drools.definition.type package, can be used to annotate original pojos on the classpath. Currently only fields on classes can be annotated. Inheritance of classes is supported, but not interfaces or methods. The isContainedIn query below demonstrates the use of positional arguments in a pattern; Location(x, y;) instead of Location( thing == x, location == y).

Queries can now call other queries; combined with optional query arguments, this provides derivation query style backward chaining. Positional and named syntax is supported for arguments. It is also possible to mix both positional and named, but positional must come first, separated by a semicolon. Literal expressions can be passed as query arguments, but at this stage you cannot mix expressions with variables. Here is an example of a query that calls another query. Note that 'z' here will always be an 'out' variable. The '?' symbol means the query is pull only; once the results are returned you will not receive further results as the underlying data changes.

declare Location
    thing : String 
    location : String 
end

query isContainedIn( String x, String y ) 
    Location(x, y;)
    or 
    ( Location(z, y;) and ?isContainedIn(x, z;) )
end

As previously mentioned you can use live "open" queries to reactively receive changes over time from the query results, as the underlying data it queries against changes. Notice the "look" rule calls the query without using '?'.

query isContainedIn( String x, String y ) 
    Location(x, y;)
    or 
    ( Location(z, y;) and isContainedIn(x, z;) )
end

rule look when 
    Person( $l : likes ) 
    isContainedIn( $l, 'office'; )
then
   insertLogical( $l + ' is in the office' );
end 

Drools supports unification for derivation queries; in short this means that arguments are optional. It is possible to call queries from Java leaving arguments unspecified using the static field org.drools.core.runtime.rule.Variable.v - note you must use 'v' and not an alternative instance of Variable. These are referred to as 'out' arguments. Note that the query itself does not declare at compile time whether an argument is an 'in' or an 'out'; this can be defined purely at runtime on each use. The following example will return all objects contained in the office.

QueryResults results = ksession.getQueryResults( "isContainedIn", new Object[] { Variable.v, "office" } );
List<List<String>> l = new ArrayList<List<String>>();
for ( QueryResultsRow r : results ) {
    l.add( Arrays.asList( new String[] { (String) r.get( "x" ), (String) r.get( "y" ) } ) );
}  

The algorithm uses stacks to handle recursion, so the method stack will not blow up.

The following is not yet supported:

  • List and Map unification

  • Variables for the fields of facts

  • Expression unification - pred( X, X + 1, X * Y / 7 )

Domain Specific Languages (or DSLs) are a way of creating a rule language that is dedicated to your problem domain. A set of DSL definitions consists of transformations from DSL "sentences" to DRL constructs, which lets you use all the underlying rule language and engine features. Given a DSL, you write rules in DSL rule (or DSLR) files, which will be translated into DRL files.

DSL and DSLR files are plain text files, and you can use any text editor to create and modify them. But there are also DSL and DSLR editors, both in the IDE as well as in the web based BRMS, and you can use those as well, although they may not provide you with the full DSL functionality.

The Drools DSL mechanism allows you to customise conditional expressions and consequence actions. A global substitution mechanism ("keyword") is also available.
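
For example, a simple DSL definition for a condition could look like this:

[when]Something is {colour}=Something(colour=="{colour}")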


In the preceding example, [when] indicates the scope of the expression, i.e., whether it is valid for the LHS or the RHS of a rule. The part after the bracketed keyword is the expression that you use in the rule; typically a natural language expression, but it doesn't have to be. The part to the right of the equal sign ("=") is the mapping of the expression into the rule language. The form of this string depends on its destination, RHS or LHS. If it is for the LHS, then it ought to be a term according to the regular LHS syntax; if it is for the RHS then it might be a Java statement.

Whenever the DSL parser matches a line from the rule file written in the DSL with an expression in the DSL definition, it performs three steps of string manipulation. First, it extracts the string values appearing where the expression contains variable names in braces (here: {colour}). Then, the values obtained from these captures are interpolated wherever that name, again enclosed in braces, occurs on the right hand side of the mapping. Finally, the interpolated string replaces whatever was matched by the entire expression in the line of the DSL rule file.

Note that the expressions (i.e., the strings on the left hand side of the equal sign) are used as regular expressions in a pattern matching operation against a line of the DSL rule file, matching all or part of a line. This means you can use (for instance) a '?' to indicate that the preceding character is optional. One good reason to use this is to overcome variations in natural language phrases of your DSL. But, given that these expressions are regular expression patterns, this also means that all "magic" characters of Java's pattern syntax have to be escaped with a preceding backslash ('\').

It is important to note that the compiler transforms DSL rule files line by line. In the above example, all the text after "Something is " to the end of the line is captured as the replacement value for "{colour}", and this is used for interpolating the target string. This may not be exactly what you want. For instance, when you intend to merge different DSL expressions to generate a composite DRL pattern, you need to transform a DSLR line in several independent operations. The best way to achieve this is to ensure that the captures are surrounded by characteristic text - words or even single characters. As a result, the matching operation done by the parser plucks out a substring from somewhere within the line. In the example below, quotes are used as distinctive characters. Note that the characters that surround the capture are not included during interpolation, just the contents between them.

As a rule of thumb, use quotes for textual data that a rule editor may want to enter. You can also enclose the capture with words to ensure that the text is correctly matched. Both are illustrated by the following example. Note that a single line such as Something is "green" and another solid thing is now correctly expanded.
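
# illustrative DSL definitions
[when]Something is "{colour}"=Something(colour=="{colour}")
[when]another {state} thing=OtherThing(state=="{state}")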


It is a good idea to avoid punctuation (other than quotes or apostrophes) in your DSL expressions as much as possible. The main reason is that punctuation is easy to forget for rule authors using your DSL. Another reason is that parentheses, the period and the question mark are magic characters, requiring escaping in the DSL definition.

In a DSL mapping, the braces "{" and "}" should only be used to enclose a variable definition or reference, resulting in a capture. If they should occur literally, either in the expression or within the replacement text on the right hand side, they must be escaped with a preceding backslash ("\"):

[then]do something= if (foo) \{ doSomething(); \}
    

Note

If braces "{" and "}" should appear in the replacement string of a DSL definition, escape them with a backslash ('\').


Given the above DSL examples, the following examples show the expansion of various DSLR snippets:
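
For instance, with the definitions above, the DSLR line

Something is "green"

is expanded to:

Something(colour=="green")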


Note

Don't forget that if you are capturing plain text from a DSL rule line and want to use it as a string literal in the expansion, you must provide the quotes on the right hand side of the mapping.

You can chain DSL expressions together on one line, as long as it is clear to the parser where one ends and the next one begins and where the text representing a parameter ends. (Otherwise you risk getting all the text until the end of the line as a parameter value.) The DSL expressions are tried, one after the other, according to their order in the DSL definition file. After any match, all remaining DSL expressions are investigated, too.

The resulting DRL text may consist of more than one line. Line ends in the replacement text are written as \n.

A common requirement when writing rule conditions is to be able to add an arbitrary combination of constraints to a pattern. Given that a fact type may have many fields, having to provide an individual DSL statement for each combination would be plain folly.

The DSL facility allows you to add constraints to a pattern by a simple convention: if your DSL expression starts with a hyphen (minus character, "-") it is assumed to be a field constraint and, consequently, it is added to the last pattern line preceding it.

As an example, let's take a look at the class Cheese, with the following fields: type, price, age and country. We can express some LHS condition in normal DRL like the following:

Cheese(age < 5, price == 20, type=="stilton", country=="ch")

The DSL definitions given below result in three DSL phrases which may be used to create any combination of constraint involving these fields.

[when]There is a Cheese with=Cheese()
[when]- age is less than {age}=age<{age}
[when]- type is '{type}'=type=='{type}'
[when]- country equal to '{country}'=country=='{country}'

You can then write rules with conditions like the following:

There is a Cheese with
        - age is less than 42
        - type is 'stilton'

The parser will pick up a line beginning with "-" and add it as a constraint to the preceding pattern, inserting a comma when it is required. For the preceding example, the resulting DRL is:

Cheese(age<42, type=='stilton')

Combining all numeric fields with all relational operators (according to the DSL expression "age is less than..." in the preceding example) produces an unwieldy number of DSL entries. But you can define DSL phrases for the various operators and even a generic expression that handles any field constraint, as shown below. (Notice that the expression definition contains a regular expression in addition to the variable name.)

[when][]is less than or equal to=<=
[when][]is less than=<
[when][]is greater than or equal to=>=
[when][]is greater than=>
[when][]is equal to===
[when][]equals===
[when][]There is a Cheese with=Cheese()
[when][]- {field:\w*} {operator} {value:\d*}={field} {operator} {value} 

Given these DSL definitions, you can write rules with conditions such as:

There is a Cheese with
   - age is less than 42
   - rating is greater than 50
   - type equals 'stilton'

In this specific case, a phrase such as "is less than" is replaced by <, and then the line matches the last DSL entry. This removes the hyphen, but the final result is still added as a constraint to the preceding pattern. After processing all of the lines, the resulting DRL text is:

Cheese(age<42, rating > 50, type=='stilton')

A good way to get started is to write representative samples of the rules your application requires, and to test them as you develop. This will provide you with a stable framework of conditional elements and their constraints. Rules, both in DRL and in DSLR, refer to entities according to the data model representing the application data that should be subject to the reasoning process defined in rules. Notice that writing rules is generally easier if most of the data model's types are facts.

Given an initial set of rules, it should be possible to identify recurring or similar code snippets and to mark variable parts as parameters. This provides reliable leads as to what might be a handy DSL entry. Also, make sure you have a full grasp of the jargon the domain experts are using, and base your DSL phrases on this vocabulary.

You may postpone implementation decisions concerning conditions and actions during this first design phase by leaving certain conditional elements and actions in their DRL form by prefixing a line with a greater sign (">"). (This is also handy for inserting debugging statements.)

During the next development phase, you should find that the DSL configuration stabilizes pretty quickly. New rules can be written by reusing the existing DSL definitions, or by adding a parameter to an existing condition or consequence entry.

Try to keep the number of DSL entries small. Using parameters lets you apply the same DSL sentence for similar rule patterns or constraints. But do not exaggerate: authors using the DSL should still be able to identify DSL phrases by some fixed text.

A DSL file is a text file in a line-oriented format. Its entries are used for transforming a DSLR file into a file according to DRL syntax.

A DSL entry consists of the following four parts:

  • A scope definition, written as one of the keywords "when" or "condition", "then" or "consequence", "*" or "keyword", enclosed in square brackets. This indicates whether the entry is valid for the condition or the consequence of a rule, for both, or for a global substitution that is recognized anywhere in a DSLR file.

  • A type definition, written as a fully qualified Java class name, enclosed in square brackets. This part is optional; if it is omitted, the brackets must still be written, but may be empty.

  • A DSL expression, i.e., a (Java) regular expression with any number of embedded variable definitions, terminated by an equal sign ("=").

  • The replacement text, i.e., the text the expression is mapped to, which may contain variable references.

Debugging of DSL expansion can be turned on, selectively, by using a comment line starting with "#/" which may contain one or more words from the table presented below. The resulting output is written to standard output.


Below are some sample DSL definitions, with comments describing the language features they illustrate.

# Comment: DSL examples

#/ debug: display result and usage

# keyword definition: replaces "regula" by "rule"
[keyword][]regula=rule

# conditional element: "T" or "t", "a" or "an", convert matched word
[when][][Tt]here is an? {entity:\w+}=
        ${entity!lc}: {entity!ucfirst} ()

# consequence statement: convert matched word, literal braces
[then][]update {entity:\w+}=modify( ${entity!lc} )\{ \}

The transformation of a DSLR file proceeds as follows:

  1. The text is read into memory.

  2. Each of the "keyword" entries is applied to the entire text. First, the regular expression from the keyword definition is modified by replacing white space sequences with a pattern matching any number of white space characters, and by replacing variable definitions with a capture made from the regular expression provided with the definition, or with the default (".*?"). Then, the DSLR text is searched exhaustively for occurrences of strings matching the modified regular expression. Substrings of a matching string corresponding to variable captures are extracted and replace variable references in the corresponding replacement text, and this text replaces the matching string in the DSLR text.

  3. Sections of the DSLR text between "when" and "then", and "then" and "end", respectively, are located and processed in a uniform manner, line by line, as described below.

    For a line, each DSL entry pertaining to the line's section is taken in turn, in the order it appears in the DSL file. Its regular expression part is modified: white space is replaced by a pattern matching any number of white space characters; variable definitions with a regular expression are replaced by a capture with this regular expression, its default being ".*?". If the resulting regular expression matches all or part of the line, the matched part is replaced by the suitably modified replacement text.

    Modification of the replacement text is done by replacing variable references with the text corresponding to the regular expression capture. This text may be modified according to the string transformation function given in the variable reference; see below for details.

    If there is a variable reference naming a variable that is not defined in the same entry, the expander substitutes a value bound to a variable of that name, provided it was defined in one of the preceding lines of the current rule.

  4. If a DSLR line in a condition is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a pattern CE, i.e., a type name followed by a pair of parentheses. If this pair is empty, the expanded line (which should contain a valid constraint) is simply inserted, otherwise a comma (",") is inserted beforehand.

    If a DSLR line in a consequence is written with a leading hyphen, the expanded result is inserted into the last line, which should contain a "modify" statement, ending in a pair of braces ("{" and "}"). If this pair is empty, the expanded line (which should contain a valid method call) is simply inserted, otherwise a comma (",") is inserted beforehand.

Note

It is currently not possible to use a line with a leading hyphen to insert text into other conditional element forms (e.g., "accumulate") or it may only work for the first insertion (e.g., "eval").

The available string transformation functions are:

  • uc: converts all letters to upper case.

  • lc: converts all letters to lower case.

  • ucfirst: converts the first letter to upper case and all other letters to lower case.

  • num: extracts all digits and "-" from the string, so that the result can be used as a number.

  • a?b/c?d/...: compares the captured value with a; if they are equal, the result is b, otherwise the comparison continues with c, and so on; the last entry (without "?") is used as the default.


The following DSL examples show how to use string transformation functions.

# definitions for conditions
[when][]There is an? {entity}=${entity!lc}: {entity!ucfirst}()
[when][]- with an? {attr} greater than {amount}={attr} > {amount!num}
[when][]- with a {what} {attr}={attr} {what!positive?>0/negative?<0/zero?==0/ERROR}

A file containing a DSL definition has to be put under the resources folder or any of its subfolders, like any other Drools artifact. It must have the extension .dsl, or alternatively be marked with type ResourceType.DSL when programmatically added to a KieFileSystem. A file using the DSL should have the extension .dslr, or be added to a KieFileSystem with type ResourceType.DSLR.

For parsing and expanding a DSLR file the DSL configuration is read and supplied to the parser. Thus, the parser can "recognize" the DSL expressions and transform them into native rule language expressions.

There is no broadly accepted definition of the term Complex Event Processing. The term Event by itself is frequently overloaded and used to refer to several different things, depending on the context in which it is used. Defining terms is not the goal of this guide, so let's adopt a loose definition that, although not formal, will allow us to proceed with a common understanding.

So, in the scope of this guide, an event is a record of a significant change of state in the application domain at a given point in time.

For instance, on a Stock Broker application, when a sale operation is executed, it causes a change of state in the domain. This change of state can be observed on several entities in the domain, like the price of the securities that changed to match the value of the operation, the ownership of the traded assets that changed from the seller to the buyer, the balance of the accounts from both seller and buyer that are credited and debited, etc. Depending on how the domain is modelled, this change of state may be represented by a single event, multiple atomic events or even hierarchies of correlated events. In any case, in the context of this guide, Event is the record of the change of a particular piece of data in the domain.

Events have been processed by computer systems since they were invented, and throughout history the systems responsible for that were given different names and employed different methodologies. It wasn't until the 90's, though, that more focused work started on EDA (Event Driven Architecture), with a more formal definition of the requirements and goals for event processing. Old messaging systems started to change to address such requirements and new systems started to be developed with the single purpose of event processing. Two trends were born under the names of Event Stream Processing and Complex Event Processing.

In the very beginning, Event Stream Processing was focused on the capabilities of processing streams of events in (near) real time, while the main focus of Complex Event Processing was on the correlation and composition of atomic events into complex (compound) events. An important (maybe the most important) milestone was the publishing of Dr. David Luckham's book "The Power of Events" in 2002. In the book, Dr. Luckham introduces the concept of Complex Event Processing and how it can be used to enhance systems that deal with events. Over the years, both trends converged to a common understanding and today these systems are all referred to as CEP systems.

This is a very simplistic explanation of a really complex and fertile field of research, but it sets a high level and common understanding of the concepts that this guide will introduce.

The current understanding of what Complex Event Processing is may be briefly described as the following quote from Wikipedia:

In other words, CEP is about detecting and selecting the interesting events (and only them) from an event cloud, finding their relationships and inferring new data from them and their relationships.

Event Processing use cases, in general, share several requirements and goals with Business Rules use cases. These overlaps happen both on the business side and on the technical side.

On the Business side:

From a technical perspective:

Even though they share requirements and goals, historically the two fields grew up apart, and although the industry evolved and one can find good products on the market, they focus either on event processing or on business rules management. That is due not only to historical reasons but also because, even though they overlap in part, the use cases do have some different requirements.

In this context, Drools Fusion is the module responsible for adding event processing capabilities into the platform.

Supporting Complex Event Processing, though, is much more than simply understanding what an event is. CEP scenarios share several common and distinguishing characteristics:

Based on these general common characteristics, Drools Fusion defined a set of goals to be achieved in order to support Complex Event Processing appropriately:

The above list of goals is based on the requirements not covered by Drools Expert itself, since in a unified platform, all features of one module are leveraged by the other modules. This way, Drools Fusion is born with enterprise grade features like Pattern Matching, which is paramount to a CEP product, but which is already provided by Drools Expert. In the same way, all features provided by Drools Fusion are leveraged by Drools Flow (and vice-versa), making process management aware of event processing and vice-versa.

For the remainder of this guide, we will go through each of the features Drools Fusion adds to the platform. All these features are available to support different use cases in the CEP world, and the user is free to select and use the ones that best model the business use case.

An event is a fact that presents a few distinguishing characteristics:

Drools supports the declaration and usage of events with both semantics: point-in-time events and interval-based events.

Rules engines in general have a well known way of processing data and rules and providing the application with the results. Also, there are not many requirements on how facts should be presented to the rules engine, especially because, in general, the processing itself is time independent. That is a good assumption for most scenarios, but not for all of them. When the requirements include the processing of real time or near real time events, time becomes an important variable of the reasoning process.

The following sections will explain the impact of time on rules reasoning and the two modes provided by Drools for the reasoning process.

The CLOUD processing mode is the default processing mode. Users of rules engines are familiar with this mode because it behaves in exactly the same way as any pure forward chaining rules engine, including previous versions of Drools.

When running in CLOUD mode, the engine sees all facts in the working memory as a whole, no matter whether they are regular facts or events. There is no notion of flow of time, although events have a timestamp as usual. In other words, although the engine knows that a given event was created, for instance, on January 1st 2009, at 09:35:40.767, it is not possible for the engine to determine how "old" the event is, because there is no concept of "now".

In this mode, the engine will apply its usual many-to-many pattern matching algorithm, using the rules constraints to find the matching tuples, activate and fire rules as usual.

This mode does not impose any kind of additional requirements on facts. So for instance:
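
  • There is no requirement for clock synchronization, since there is no notion of flow of time;

  • There is no requirement on event ordering, since the engine looks at the events as an unordered cloud against which it tries to match rules.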

On the other hand, since there are no requirements, some benefits are not available either. For instance, in CLOUD mode, it is not possible to use sliding windows, because sliding windows are based on the concept of "now" and there is no concept of "now" in CLOUD mode.

Since there is no ordering requirement on events, it is not possible for the engine to determine when events can no longer match, and so there is no automatic life-cycle management for events. I.e., the application must explicitly delete events when they are no longer necessary, in the same way the application does with regular facts.

Cloud mode is the default execution mode for Drools, but in any case, as with any other configuration in Drools, it is possible to change this behavior either by setting a system property, using configuration property files or using the API:

KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration();
config.setOption( EventProcessingOption.CLOUD );

The equivalent property is:

drools.eventProcessingMode = cloud

The STREAM processing mode is the mode of choice when the application needs to process streams of events. It adds a few common requirements to the regular processing, but enables a whole lot of features that make stream event processing a lot simpler.

The main requirements to use STREAM mode are:
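
  • Events in each stream must be time-ordered, i.e., inside a given stream, events that happened first must be inserted first into the engine;

  • The engine forces synchronization between streams through the use of the session clock, so even when the application feeds events from multiple streams, the engine keeps a coherent notion of time.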

Given that the above requirements are met, the application may enable the STREAM mode using the following API:

KieBaseConfiguration config = KieServices.Factory.get().newKieBaseConfiguration();
config.setOption( EventProcessingOption.STREAM );

Or, the equivalent property:

drools.eventProcessingMode = stream

When using STREAM mode, the engine knows the concept of flow of time and the concept of "now", i.e., the engine understands how old events are based on the current timestamp read from the Session Clock. This characteristic allows the engine to provide the following additional features to the application:
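
  • Sliding Window support;

  • Automatic Event Lifecycle Management;

  • Automatic Rule Delaying when using Negative Patterns.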

All these features are explained in the following sections.

Negative patterns behave differently in STREAM mode when compared to CLOUD mode. In CLOUD mode, the engine assumes that all facts and events are known in advance (there is no concept of flow of time) and so negative patterns are evaluated immediately.

When running in STREAM mode, negative patterns with temporal constraints may require the engine to wait for a time period before activating a rule. The time period is automatically calculated by the engine in a way that the user does not need to use any tricks to achieve the desired result.

For instance:
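
// FireDetected and SprinklerActivated are illustrative event types
rule "Sound the alarm"
when
    $f : FireDetected()
    not( SprinklerActivated() )
then
    // sound the alarm
end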


The above rule has no temporal constraints that would require delaying the rule, and so, the rule activates immediately. The following rule on the other hand, must wait for 10 seconds before activating, since it may take up to 10 seconds for the sprinklers to activate:
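
rule "Sound the alarm"
when
    $f : FireDetected()
    not( SprinklerActivated( this after[0s,10s] $f ) )
then
    // sound the alarm
end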


This behaviour allows the engine to keep consistency when dealing with negative patterns and temporal constraints at the same time. The above would be the same as writing the rule as below, but does not burden the user to calculate and explicitly write the appropriate duration parameter:
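
rule "Sound the alarm"
duration( 10s )
when
    $f : FireDetected()
    not( SprinklerActivated( this after[0s,10s] $f ) )
then
    // sound the alarm
end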


The following rule expects at least one "Heartbeat" event every 10 seconds; if none arrives, the rule fires. The special case in this rule is that we use the same type of object in the first pattern and in the negative pattern. The negative pattern has the temporal constraint to wait between 0 to 10 seconds before firing, and it excludes the Heartbeat bound to $h. Excluding the bound Heartbeat is important, since the temporal constraint [0s, ...] does not by itself exclude the bound event $h from being matched again, which would prevent the rule from firing.
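
rule "Sound the alarm"
when
    $h : Heartbeat() from entry-point "MonitoringStream"
    not( Heartbeat( this != $h, this after[0s,10s] $h ) from entry-point "MonitoringStream" )
then
    // Sound the alarm
end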


Reasoning over time requires a reference clock. To mention just one example: if a rule reasons over the average price of a given stock over the last 60 minutes, how does the engine know which stock price changes happened over the last 60 minutes in order to calculate the average? The obvious answer is: by comparing the timestamp of the events with the "current time". And how does the engine know what time it is now? Again, obviously, by querying the Session Clock.

The session clock implements a strategy pattern, allowing different types of clocks to be plugged in and used by the engine. This is very important because the engine may be running in a range of different scenarios that may require different clock implementations. Just to mention a few:
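
  • Rules testing: testing requires a controlled environment, and when the tests include rules with temporal constraints, it is necessary to control the flow of time as well;

  • Regular execution: usually, when running rules in production, the application requires a real time clock;

  • Rules replay or simulation: to replay or simulate scenarios, the application must also control the flow of time.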

Drools provides two clock implementations out of the box: the default real time clock, based on the system clock, and an optional pseudo clock, controlled by the application.

Sliding Windows are a way to scope the events of interest by defining a window that is constantly moving. The two most common types of sliding window implementations are time based windows and length based windows.

The next sections will detail each of them.
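
Sliding Time Windows allow rules to match only those events that occurred in the last X time units. For instance, to consider only the Stock Ticks that happened in the last 2 minutes, the pattern would look like this:

StockTick() over window:time( 2m )

Such a window can also scope the events over which an accumulation is performed, e.g., averaging the temperature readings from a sensor over the last 10 minutes.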

Sliding Length Windows work the same way as Time Windows, but consider events based on order of their insertion into the session instead of flow of time.

For instance, if the user wants to consider only the last 10 RHT Stock Ticks, independent of how old they are, the pattern would look like this:

StockTick( company == "RHT" ) over window:length( 10 )

As you can see, the pattern is similar to the one presented in the previous section, but instead of using window:time to define the sliding window, it uses window:length.

Using a similar example to the one in the previous section, if the user wants to sound an alarm in case the average temperature over the last 100 readings from a sensor is above the threshold value, the rule would look like:
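
// a sketch using illustrative TemperatureThreshold and SensorReading fact types
rule "Sound the alarm in case temperature rises above threshold"
when
    TemperatureThreshold( $max : max )
    Number( doubleValue > $max ) from accumulate(
        SensorReading( $temp : temperature ) over window:length( 100 ),
        average( $temp ) )
then
    // sound the alarm
end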


The engine will only consider the last 100 readings when calculating the average temperature.

Important

Please note that falling off a length based window is not a criterion for event expiration in the session. The engine disregards events that fall off a window when calculating that window, but does not remove the event from the session based on that condition alone, as there might be other rules that depend on that event.

Important

Please note that length based windows do not define temporal constraints for event expiration from the session, and the engine will not consider them for that purpose. If no other rule defines temporal constraints for the events and no explicit expiration policy is set, the engine will keep them in the session indefinitely.

When using a sliding window, alpha constraints are evaluated before the window is considered, but beta (join) constraints are evaluated afterwards. This usually doesn't make a difference when time windows are concerned, but it's important when using a length window. For example this pattern:

StockTick( company == "RHT" ) over window:length( 10 )

defines a window of (at most) 10 StockTicks all having company equal to "RHT", while the following one:

$s : String()
StockTick( company == $s ) over window:length( 10 )

first creates a window of (at most) 10 StockTicks regardless of the value of their company attribute and then filters among them only the ones having the company equal to the String selected from the working memory.

Most CEP use cases have to deal with streams of events. The streams can be provided to the application in various forms, from JMS queues to flat text files, from database tables to raw sockets or even through web service calls. In any case, the streams share a common set of characteristics:

Drools generalized the concept of a stream as an "entry point" into the engine. An entry point is, for Drools, a gateway through which facts come into the engine. The facts may be regular facts or special facts like events.

In Drools, facts from one entry point (stream) may join with facts from any other entry point or with facts from the working memory. However, they never mix, i.e., they never lose the reference to the entry point through which they entered the engine. This is important because one may have the same type of facts coming into the engine through several entry points, but a fact that is inserted into the engine through entry point A will never match a pattern tied to an entry point B, for example.

Entry points are declared implicitly in Drools by directly making use of them in rules, i.e., referencing an entry point in a rule causes the engine, at compile time, to identify and create the proper internal structures to support that entry point.

So, for instance, let's imagine a banking application where transactions are fed into the system from streams. One of the streams contains all the transactions executed at ATM machines. So, if one of the rules says that a withdraw is authorized if and only if the account balance is greater than the requested withdraw amount, the rule would look like:
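
A sketch of such a rule, assuming hypothetical WithdrawRequest and CheckingAccount fact types:

rule "authorize withdraw"
when
    WithdrawRequest( $ai : accountId, $am : amount ) from entry-point "ATM Stream"
    CheckingAccount( accountId == $ai, balance > $am )
then
    // authorize the withdraw
end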


In the previous example, the engine compiler will identify that the pattern is tied to the entry point "ATM Stream", will create all the necessary structures in the rulebase to support it, and will match WithdrawRequests coming only from the "ATM Stream". In the previous example, the rule is also joining the event from the stream with a fact from the main working memory (CheckingAccount).

Now, let's imagine a second rule that states that a fee of $2 must be applied to any account for which a withdraw request is placed at a bank branch:
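
A sketch of the second rule, reusing the assumed fact types:

rule "apply fee on withdraws on branches"
when
    WithdrawRequest( $ai : accountId, processed == true ) from entry-point "Branch Stream"
    CheckingAccount( accountId == $ai )
then
    // apply a $2 fee on the account
end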


The previous rule will match events of the exact same type as the first rule (WithdrawRequest), but from two different streams, so an event inserted into "ATM Stream" will never be evaluated against the pattern on the second rule, because the rule states that it is only interested in patterns coming from the "Branch Stream".

So, entry points, besides being a proper abstraction for streams, are also a way to scope facts in the working memory, and a valuable tool for reducing cross-product explosions. But that is a subject for another time.

Inserting events into an entry point is equally simple. Instead of inserting events directly into the working memory, insert them into the entry point as shown in the example below:
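
A minimal sketch in Java, assuming an already created KieSession and a WithdrawRequest instance:

import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.EntryPoint;

// get a reference to the entry point (it must be referenced by at least one rule)
EntryPoint atmStream = session.getEntryPoint( "ATM Stream" );

// insert the event into the entry point instead of the default working memory
atmStream.insert( aWithdrawRequest );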


The previous example shows how to manually insert facts into a given entry point. Usually, though, the application will use one of the many adapters to plug a stream endpoint, like a JMS queue, directly into the engine's entry point, without coding the inserts manually. The Drools pipeline API has several adapters and helpers to do that, as well as examples of how to do it.

One of the benefits of running the engine in STREAM mode is that the engine can detect when an event can no longer match any rule due to its temporal constraints. When that happens, the engine can safely delete the event from the session without side effects and release any resources used by that event.

There are basically two ways for the engine to calculate the matching window for a given event:

  • explicitly, using an event expiration policy declared on the event type

  • implicitly, analyzing the temporal constraints on the events

Temporal reasoning is another requirement of any CEP system. As discussed previously, one of the distinguishing characteristics of events is their strong temporal relationships.

Temporal reasoning is an extensive field of research, from its roots in Temporal Modal Logic to its more practical applications in business systems. Hundreds of papers and theses have been written, and approaches have been described for several applications. Drools once more takes a pragmatic and simple approach based on several sources, but the following papers are especially worth noting:

Drools implements the Interval-based Time Event Semantics described by Allen, and represents Point-in-Time Events as Interval-based events with duration 0 (zero).

Drools implements all 13 operators defined by Allen and also their logical complement (negation). This section details each of the operators and their parameters.

The after evaluator correlates two events and matches when the temporal distance from the current event to the event being correlated belongs to the distance range declared for the operator.

Lets look at an example:

$eventA : EventA( this after[ 3m30s, 4m ] $eventB ) 

The previous pattern will match if and only if the temporal distance between the time when $eventB finished and the time when $eventA started is between ( 3 minutes and 30 seconds ) and ( 4 minutes ). In other words:

 3m30s <= $eventA.startTimestamp - $eventB.endTimestamp <= 4m 

The temporal distance interval for the after operator is optional:

  • If two values are defined (as in the example above), the interval starts on the first value and finishes on the second.

  • If only one value is defined, the interval starts on the value and finishes on positive infinity.

  • If no value is defined, it is assumed that the interval starts on 1ms and finishes on positive infinity.

The before evaluator correlates two events and matches when the temporal distance from the event being correlated to the current event belongs to the distance range declared for the operator.

Lets look at an example:

$eventA : EventA( this before[ 3m30s, 4m ] $eventB ) 

The previous pattern will match if and only if the temporal distance between the time when $eventA finished and the time when $eventB started is between ( 3 minutes and 30 seconds ) and ( 4 minutes ). In other words:

 3m30s <= $eventB.startTimestamp - $eventA.endTimestamp <= 4m 

The temporal distance interval for the before operator is optional:

  • If two values are defined (as in the example above), the interval starts on the first value and finishes on the second.

  • If only one value is defined, the interval starts on the value and finishes on positive infinity.

  • If no value is defined, it is assumed that the interval starts on 1ms and finishes on positive infinity.

The during evaluator correlates two events and matches when the current event happens during the occurrence of the event being correlated.

Lets look at an example:

$eventA : EventA( this during $eventB ) 

The previous pattern will match if and only if $eventA starts after $eventB starts and finishes before $eventB finishes.

In other words:

$eventB.startTimestamp < $eventA.startTimestamp <= $eventA.endTimestamp < $eventB.endTimestamp 

The during operator accepts 1, 2 or 4 optional parameters, as follows:

The includes evaluator correlates two events and matches when the event being correlated happens during the current event. It is the symmetrical opposite of the during evaluator.

Lets look at an example:

$eventA : EventA( this includes $eventB ) 

The previous pattern will match if and only if $eventB starts after $eventA starts and finishes before $eventA finishes.

In other words:

$eventA.startTimestamp < $eventB.startTimestamp <= $eventB.endTimestamp < $eventA.endTimestamp 

The includes operator accepts 1, 2 or 4 optional parameters, as follows:

The declarative agenda allows the use of rules to control which other rules can fire and when. While this adds more overhead than the simple use of salience, the advantage is that it is declarative, and thus more readable and maintainable, and it should allow more use cases to be achieved in a simpler fashion.

This feature is off by default and must be explicitly enabled, because it is considered highly experimental for the moment and is subject to change. It can be activated on a given KieBase by adding the declarativeAgenda='enabled' attribute to the corresponding kbase tag of the kmodule.xml file, as in the following example.
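
A sketch of such a kmodule.xml; the kbase and ksession names are placeholders:

<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <kbase name="DeclarativeKBase" declarativeAgenda="enabled">
    <ksession name="ksession1"/>
  </kbase>
</kmodule>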


The basic idea is:

  • All rule Matches are inserted into the working memory as facts, so you can now do pattern matching against a Match. The rule's metadata and declarations are available as fields on the Match object.

  • You can use kcontext.blockMatch( Match match ) in the current rule to block the selected match. Only when that rule becomes false will the match be eligible for firing. If it is already eligible for firing and is later blocked, it will be removed from the agenda until it is unblocked.

  • A match may have multiple blockers, and a count is kept. All blockers must become false for the counter to reach zero, enabling the Match to be eligible for firing.

  • kcontext.unblockAllMatches( Match match ) is an override that removes all blockers from the given Match, regardless of which rules added them.

  • An activation may also be cancelled with cancelMatch, so that it never fires.

  • An unblocked Match is added to the Agenda and obeys normal salience, agenda groups, ruleflow groups etc.

  • The @Direct annotation allows a rule to fire as soon as it is matched. It is meant for rules that block/unblock matches; it is not desirable for these rules to have side effects that impact elsewhere.


Here is a basic example that blocks all matches from rules that have the metadata @department('sales'). They stay blocked until the blockerAllSalesRules rule becomes false, i.e. "go2" is retracted.
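
A sketch of the two rules; the list global and the String facts "go1"/"go2" are just scaffolding for the example:

rule "salesRule1" @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule "blockerAllSalesRules" @Direct @Eager
when
    $s : String( this == 'go2' )
    $i : Match( department == 'sales' )
then
    kcontext.blockMatch( $i );
end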


Warning

In addition to annotating the blocking rule with @Direct, it is also necessary to annotate all the rules that it could potentially block with @Eager. Since the Match has to be evaluated by the pattern matching of the blocking rule, the potentially blocked rules cannot be evaluated lazily; otherwise there would be no Match to evaluate.

This example shows how you can use the active property to count the number of active or inactive (already fired) matches.
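
A sketch of such a counting rule; the trigger fact and rule names are hypothetical:

rule "salesRule1" @Eager @department('sales')
when
    $s : String( this == 'go1' )
then
    list.add( kcontext.rule.name + ':' + $s );
end

rule "countActiveSalesMatches" @Direct @Eager
when
    String( this == 'count' )
    $active : Number() from accumulate(
        $m : Match( department == 'sales', active == true ), count( $m ) )
then
    list.add( 'active:' + $active );
end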


When the field of a fact is a collection, it is possible to bind and reason over all the items in that collection one by one using the from keyword. Nevertheless, when it is required to browse a graph of objects, the extensive use of the from conditional element may result in a verbose and cumbersome syntax, like in the following example:
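
A sketch of such a rule, using the Student/Plan/Exam/Grade model described below:

rule "grades of Big Data exams"
when
    $student : Student( $plan : plan )
    $exam : Exam( course == "Big Data" ) from $plan.exams
    $grade : Grade() from $exam.grades
then
    // do something with $grade
end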


This example assumes a domain model consisting of a Student who has a Plan of study: a Plan can have zero or more Exams and an Exam zero or more Grades. Note that only the root object of the graph (the Student in this case) needs to be in the working memory in order to make this work.

By borrowing ideas from XPath, this syntax can be made more succinct, as XPath has a compact notation for navigating through related elements while handling collections and filtering constraints. This XPath-inspired notation has been named OOPath since it is explicitly intended to browse graphs of objects. Using this notation, the former example can be rewritten as follows:
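
The same rule rewritten with an OOPath expression (a sketch):

rule "grades of Big Data exams"
when
    Student( $grade : /plan/exams{ course == "Big Data" }/grades )
then
    // do something with $grade
end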


Formally, the core grammar of an OOPath expression can be defined in EBNF notation in this way.

OOPExpr = ( "/" | "?/" ) OOPSegment { ( "/" | "?/" | "." ) OOPSegment } ;
    OOPSegment = [ID ( ":" | ":=" )] ID ["[" Number "]"] ["{" Constraints "}"];

In practice, an OOPath expression has the following features:

  • It has to start with / or with a ?/ in case of a completely non-reactive OOPath (see below).

  • It can dereference a single property of an object with the . operator

  • It can dereference a multiple-valued property (such as a collection) of an object using the / operator. If a collection is returned, it will iterate over the values in the collection

  • While traversing referenced objects it can filter away those not satisfying one or more constraints, written as predicate expressions between curly brackets like in:

    Student( $grade: /plan/exams{ course == "Big Data" }/grades )
  • A constraint can also have a backreference to an object of the graph traversed before the currently iterated one. For example the following OOPath:

    Student( $grade: /plan/exams/grades{ result > ../averageResult } )

    will match only the grades having a result above the average for the passed exam.

  • A constraint can also recursively be another OOPath, as follows:

    Student( $exam: /plan/exams{ /grades{ result > 20 } } )
  • Items can also be accessed by their index by putting it between square brackets like in:

    Student( $grade: /plan/exams[0]/grades )

    To adhere to Java conventions, OOPath indexes are 0-based, while XPath indexes are 1-based.

At the moment Drools is not able to react to updates involving a deeply nested object traversed during the evaluation of an OOPath expression. To make these objects reactive to changes it is then necessary to make them extend the class org.drools.core.phreak.ReactiveObject. It is planned to overcome this limitation by implementing a mechanism that automatically instruments the classes belonging to a specific domain model.

Having extended that class, a domain object can notify the engine when one of its fields has been updated by invoking the inherited method notifyModification, as in the following example:
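
A minimal sketch of such a domain class; AbstractReactiveObject is assumed here to be a base class providing the notifyModification() support described above:

public class Exam extends AbstractReactiveObject {
    private String course;

    public String getCourse() {
        return course;
    }

    public void setCourse(String course) {
        this.course = course;
        // notify the engine that the "course" field has been updated
        notifyModification();
    }
}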


In this way when using an OOPath like the following:

Student( $grade: /plan/exams{ course == "Big Data" }/grades )

if an exam is moved to a different course, the rule is re-triggered and the list of grades matching the rule recomputed.

It is also possible to have reactivity only in one subpart of the OOPath as in:

Student( $grade: /plan/exams{ course == "Big Data" }?/grades )

Here, using the ?/ separator instead of the / one, the engine will react to a change made to an exam, or to an exam being added to the plan, but not to a new grade being added to an existing exam. Of course, if an OOPath chunk is not reactive, the remaining part of the OOPath from there to the end of the expression will be non-reactive as well. For instance, the following OOPath:

Student( $grade: ?/plan/exams{ course == "Big Data" }/grades )

will be completely non-reactive. For this reason it is not allowed to use the ?/ separator more than once in the same OOPath, so an expression like:

Student( $grade: /plan?/exams{ course == "Big Data" }?/grades )

will cause a compile time error.

Integration Documentation

Table of Contents

11. Drools Commands
11.1. API
11.1.1. XStream
11.1.2. JSON
11.1.3. JAXB
11.2. Commands supported
11.2.1. BatchExecutionCommand
11.2.2. InsertObjectCommand
11.2.3. RetractCommand
11.2.4. ModifyCommand
11.2.5. GetObjectCommand
11.2.6. InsertElementsCommand
11.2.7. FireAllRulesCommand
11.2.8. StartProcessCommand
11.2.9. SignalEventCommand
11.2.10. CompleteWorkItemCommand
11.2.11. AbortWorkItemCommand
11.2.12. QueryCommand
11.2.13. SetGlobalCommand
11.2.14. GetGlobalCommand
11.2.15. GetObjectsCommand
12. CDI
12.1. Introduction
12.2. Annotations
12.2.1. @KReleaseId
12.2.2. @KContainer
12.2.3. @KBase
12.2.4. @KSession for KieSession
12.2.5. @KSession for StatelessKieSession
12.3. API Example Comparison
13. Integration with Spring
13.1. Important Changes for Drools 6.0
13.2. Integration with Drools Expert
13.2.1. KieModule
13.2.2. KieBase
13.2.3. IMPORTANT NOTE
13.2.4. KieSessions
13.2.5. Kie:ReleaseId
13.2.6. Kie:Import
13.2.7. Annotations
13.2.8. Event Listeners
13.2.9. Loggers
13.2.10. Defining Batch Commands
13.2.11. Persistence
13.2.12. Leveraging Other Spring Features
13.3. Integration with jBPM Human Task
13.3.1. How to configure Spring with jBPM Human task
14. Android Integration
14.1. Integration with Drools Expert
14.1.1. Pre-serialized Rules
14.1.2. KieContainer API with drools-compiler dependency
14.2. Integration with Roboguice
14.2.1. Pre-serialized Rules with Roboguice
14.2.2. KieContainer with drools-compiler dependency and Roboguice
15. Apache Camel Integration
15.1. Camel
16. Drools Camel Server
16.1. Introduction
16.2. Deployment
16.3. Configuration
16.3.1. REST/Camel Services configuration
17. JMX monitoring with RHQ/JON
17.1. Introduction
17.1.1. Enabling JMX monitoring in a Drools application
17.1.2. Installing and running the RHQ/JON plugin

XML marshalling/unmarshalling of the Drools Commands requires the use of special classes, which are going to be described in the following sections.

The following urls show sample script examples for jaxb, xstream and json marshalling using:

There are two options for using JAXB: you can define your model in an XSD file, or you can have a POJO model. In both cases you have to declare your model inside a JAXBContext, and in order to do that you need to use the Drools helper classes. Once you have the JAXBContext, you need to create the Unmarshaller/Marshaller as needed.

Currently, the following commands are supported:

CDI (Contexts and Dependency Injection) is a Java specification that provides declarative controls and structures for an application. KIE can use it to automatically instantiate and bind things, without the need to use the programmatic API.

@KContainer, @KBase and @KSession all support an optional 'name' attribute. CDI typically does "getOrCreate" when it injects: all injections receive the same instance for the same set of annotations. The 'name' attribute forces a unique instance for each name, although all instances for that name will be identity equals.
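
A sketch of such an injection point; the session name "ksession1" is assumed to be declared in a kmodule.xml:

import javax.inject.Inject;
import org.kie.api.cdi.KSession;
import org.kie.api.runtime.KieSession;

public class MyRulesBean {

    // injections with the same name receive the same underlying instance
    @Inject
    @KSession("ksession1")
    private KieSession kSession;
}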

In this section we will explain the kie namespace.

The <kie:ksession> element defines KieSessions. The same tag is used to define both stateful (org.kie.api.runtime.KieSession) and stateless (org.kie.api.runtime.StatelessKieSession) sessions.
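
A sketch of both session kinds declared inside a kbase; ids, names and packages are placeholders:

<kie:kmodule id="sample-kmodule">
  <kie:kbase name="kbase1" packages="org.example.rules">
    <kie:ksession name="ksession1"/>
    <kie:ksession name="ksession2" type="stateless"/>
  </kie:kbase>
</kie:kmodule>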

Starting with version 6.2, kie-spring allows importing KIE objects from KJARs found on the classpath. Two modes of importing the KIE objects are currently supported.


@KContainer, @KBase and @KSession all support an optional 'name' attribute. Spring typically does "get" when it injects: all injections receive the same instance for the same set of annotations. The 'name' attribute forces a unique instance for each name, although all instances for that name will be identity equals.

Drools supports adding 3 types of listeners to KieSessions - AgendaEventListener, RuleRuntimeEventListener and ProcessEventListener.

The kie-spring module allows you to configure these listeners to KieSessions using XML tags. These tags have identical names as the actual listener interfaces i.e., <kie:agendaEventListener....>, <kie:ruleRuntimeEventListener....> and <kie:processEventListener....>.

kie-spring provides features to define the listeners as standalone (individual) listeners and also to define them as a group.

Drools supports adding 2 types of loggers to KieSessions - ConsoleLogger, FileLogger.

The kie-spring module allows you to configure these loggers to KieSessions using XML tags. These tags have identical names as the actual logger interfaces i.e., <kie:consoleLogger....> and <kie:fileLogger....>.
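
A sketch of attaching both logger types to a session declaration; ids, names and the log file location are placeholders:

<kie:ksession id="loggedSession" name="ksession1" kbase-ref="kbase1">
  <kie:consoleLogger/>
  <kie:fileLogger id="fileLogger1" file="#{ systemProperties['java.io.tmpdir'] }/drools-log" threaded="true" interval="5"/>
</kie:ksession>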

This section provides details on leveraging other standard spring features when integrating with Drools Expert.

As shown above, the Spring XML contains the definition of the profiles. While loading the ApplicationContext you have to tell Spring which profile you’re loading.

There are several ways of selecting your profile; the most useful is the "spring.profiles.active" system property.

System.setProperty("spring.profiles.active", "development");
ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");

Obviously, it is not good practice to hard-code things as shown above; the recommended practice is to keep system property definitions independent of the application.

-Dspring.profiles.active="development"

The profiles can also be loaded and enabled programmatically:

...
GenericXmlApplicationContext ctx = new GenericXmlApplicationContext("beans.xml");
ConfigurableEnvironment env = ctx.getEnvironment();
env.setActiveProfiles("development");
ctx.refresh();
...
      

This chapter describes the infrastructure used when configuring a jBPM human task server with Spring.

The jBPM human task server can be configured to use Spring persistence. Example 13.18, “Configuring Human Task with Spring” is an example of this; it uses local transactions and Spring's thread-safe EntityManager proxy.

The following diagram shows the dependency graph used in Example 13.18, “Configuring Human Task with Spring”.


A TaskService instance is dependent on two other bean types: a Drools SystemEventListener bean and a TaskSessionSpringFactoryImpl bean. The TaskSessionSpringFactoryImpl bean is, however, not injected into the TaskService bean, because this would cause a circular dependency. To solve this problem, when the TaskService bean is injected into the TaskSessionSpringFactoryImpl bean, the setter method used also injects the TaskSessionSpringFactoryImpl instance back into the TaskService bean and initializes the TaskService bean as well.

The TaskSessionSpringFactoryImpl bean is responsible for creating all the internal instances in human task that deal with transactions and persistence context management. Besides a TaskService instance, this bean also requires a transaction manager and a persistence context to be injected. Specifically, it requires an instance of a HumanTaskSpringTransactionManager bean (as a transaction manager) and an instance of a SharedEntityManagerBean bean (as a persistence context instance).

We also use some of the standard Spring beans in order to configure persistence: there's a bean to hold the EntityManagerFactory instance as well as the SharedEntityManagerBean instance. The SharedEntityManagerBean provides a shared, thread-safe proxy for the actual EntityManager.

The HumanTaskSpringTransactionManager bean serves as a wrapper around the Spring transaction manager, in this case the JpaTransactionManager. An instance of a JpaTransactionManager bean is also instantiated because of this.

Example 13.18. Configuring Human Task with Spring

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jbpm="http://drools.org/schema/drools-spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
       http://drools.org/schema/drools-spring org/drools/container/spring/drools-spring-1.2.0.xsd">

  <!-- persistence & transactions-->
  <bean id="htEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="org.jbpm.task" />
  </bean>

  <bean id="htEm" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
    <property name="entityManagerFactory" ref="htEmf"/>
  </bean>

  <bean id="jpaTxMgr" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="htEmf" />
    <!-- this must be true if using the SharedEntityManagerBean, and false otherwise -->
    <property name="nestedTransactionAllowed" value="true"/>
  </bean>

  <bean id="htTxMgr" class="org.drools.container.spring.beans.persistence.HumanTaskSpringTransactionManager">
    <constructor-arg ref="jpaTxMgr" />
  </bean>

  <!-- human-task beans -->

  <bean id="systemEventListener" class="org.drools.SystemEventListenerFactory" factory-method="getSystemEventListener" />

  <bean id="taskService" class="org.jbpm.task.service.TaskService" >
    <property name="systemEventListener" ref="systemEventListener" />
  </bean>

  <bean id="springTaskSessionFactory" class="org.jbpm.task.service.persistence.TaskSessionSpringFactoryImpl"
        init-method="initialize" depends-on="taskService" >
    <!-- if using the SharedEntityManagerBean, make sure to enable nested transactions -->
    <property name="entityManager" ref="htEm" />
    <property name="transactionManager" ref="htTxMgr" />
    <property name="useJTA" value="false" />
    <property name="taskService" ref="taskService" />
  </bean>

</beans>

When using the SharedEntityManagerBean instance, it's important to configure the Spring transaction manager to use nested transactions. This is because the SharedEntityManagerBean is a transactional persistence context and will close the persistence context after every operation. However, the human task server needs to be able to access (persisted) entities after operations. Nested transactions allow us to still have access to entities that otherwise would have been detached and are no longer accessible, especially when using an ORM framework that uses lazy-initialization of entities.

Also, while the TaskSessionSpringFactoryImpl bean takes a useJTA parameter, at the moment JTA transactions with Spring have not yet been fully tested.

Drools Android integration comes in two flavors, with or without the drools-compiler dependency. Without the drools-compiler dependency, the knowledge bases are pre-serialized at build time using the kie-maven-plugin. They can then be deserialized using the API, or directly injected using Roboguice. When using the drools-compiler dependency, there are two options: (1) the standard KieContainer API or (2) CDI-like injection with Roboguice.

It is possible to use Drools without the drools-compiler dependency, which results in a smaller apk, by pre-serializing the compiled knowledge bases during build-time and then de-serializing them at runtime.

With the drools-compiler dependency, the standard KieContainer API can be used. This comes at the cost of a larger apk. To avoid the 65K method limit, multidex (or ProGuard) can be used.

The kie-maven-plugin must be configured to build the KieBase. Multidex must be used to allow for the increased number of dependencies. There are also some settings for merging various Drools XML files within the apk.

Example 14.2. pom.xml with drools-compiler and multidex

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.drools</groupId>
      <artifactId>drools-bom</artifactId>
      <version>6.4.0.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-android</artifactId>
    <exclusions>
      <exclusion>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-compiler</artifactId>
    <exclusions>
      <exclusion>
         <groupId>org.slf4j</groupId>
         <artifactId>slf4j-api</artifactId>
      </exclusion>
      <exclusion>
         <groupId>xmlpull</groupId>
         <artifactId>xmlpull</artifactId>
      </exclusion>
      <exclusion>
         <groupId>xpp3</groupId>
         <artifactId>xpp3_min</artifactId>
      </exclusion>
      <exclusion>
         <groupId>org.slf4j</groupId>
         <artifactId>slf4j-api</artifactId>
      </exclusion>
      <exclusion>
         <groupId>org.eclipse.jdt.core.compiler</groupId>
         <artifactId>ecj</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.kie</groupId>
      <artifactId>kie-maven-plugin</artifactId>
      <version>6.4.0.Final</version>
      <executions>
        <execution>
          <id>compile-kbase</id>
          <goals>
            <goal>build</goal>
          </goals>
          <phase>compile</phase>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>com.simpligility.maven.plugins</groupId>
      <artifactId>android-maven-plugin</artifactId>
      <version>4.2.1</version>
      <extensions>true</extensions>
      <configuration>
        <sdk>
          <platform>21</platform>
        </sdk>
        <dex>
          <coreLibrary>true</coreLibrary>
          <jvmArguments><jvmArgument>-Xmx2048m</jvmArgument></jvmArguments>
          <multiDex>true</multiDex>
          <mainDexList>maindex.txt</mainDexList>
        </dex>
        <extractDuplicates>true</extractDuplicates>
        <apk>
          <metaInf>
            <includes>
              <include>services/**</include>
              <include>kmodule.*</include>
              <include>HelloKB/**</include>
              <include>drools**</include>
              <include>maven/${project.groupId}/${project.artifactId}/**</include>
            </includes>
          </metaInf>
        </apk>
      </configuration>
    </plugin>
  </plugins>
</build>
                  

With Roboguice and drools-compiler almost the full CDI syntax can be used to inject KieContainers, KieBases, and KieSessions.

@KContainer, @KBase and @KSession all support an optional 'name' attribute. CDI typically does "getOrCreate" when it injects: all injections receive the same instance for the same set of annotations. The 'name' attribute forces a unique instance for each name, although all instances for that name will be identity equals.

Camel provides a lightweight bus framework for getting information into and out of Drools.

Drools introduces two elements to make integration easy.

Drools can be configured like any normal Camel component, but notice the policy that wraps the Drools-related segments. This will route all payloads to ksession1.
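
A sketch of such a route; it mirrors the camel-server.xml shown later in this chapter, and the endpoint and marshaller references are placeholders:

<bean id="kiePolicy" class="org.kie.camel.component.KiePolicy" />

<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="cxfrs://bean://rsServer"/>
    <policy ref="kiePolicy">
      <unmarshal ref="xstream" />
      <to uri="kie:ksession1" />
      <marshal ref="xstream" />
    </policy>
  </route>
</camelContext>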


It is possible to not specify the session in the Drools endpoint URI and instead "multiplex" based on an attribute or header. In this example, the policy will check the header field "DroolsLookup" for the named session to execute; if that isn't specified, it will check the "lookup" attribute on the incoming payload.



The following urls show sample script examples for jaxb, xstream and json marshalling using:

  • http://fisheye.jboss.org/browse/JBossRules/trunk/drools-camel/src/test/resources/org/drools/camel/component/jaxb.mvt?r=HEAD

  • http://fisheye.jboss.org/browse/JBossRules/trunk/drools-camel/src/test/resources/org/drools/camel/component/xstream.mvt?r=HEAD

Inside the war file you will find a few XML configuration files.

The next step is to configure the services that are going to be exposed through drools-server. You can modify this configuration in the camel-server.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:cxf="http://camel.apache.org/schema/cxf"
  xmlns:jaxrs="http://cxf.apache.org/jaxrs"
  xsi:schemaLocation="
  http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
  http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/camel-cxf.xsd
  http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd
  http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">

<import resource="classpath:META-INF/cxf/cxf.xml" />
<import resource="classpath:META-INF/cxf/cxf-extension-jaxrs-binding.xml"/> 
<import resource="classpath:META-INF/cxf/cxf-servlet.xml" />

  <!--
   !   If you are running on JBoss you will need to copy a camel-jboss.jar into the lib and set this ClassLoader configuration
   !  http://camel.apache.org/camel-jboss.html
   !   <bean id="jbossResolver" class="org.apache.camel.jboss.JBossPackageScanClassResolver"/>
   -->

  <!--
   !   Define the server end point.
   !   Copy and paste this element, changing id and the address, to expose services on different urls.
   !   Different Camel routes can handle different end point paths.
   -->
  <cxf:rsServer id="rsServer"  
                address="/rest"
                serviceClass="org.kie.jax.rs.CommandExecutorImpl">
       <cxf:providers>
           <bean class="org.kie.jax.rs.CommandMessageBodyReader"/>
       </cxf:providers>
  </cxf:rsServer>  
  
  <cxf:cxfEndpoint id="soapServer"
                   address="/soap"
                   serviceName="ns:CommandExecutor"
                   endpointName="ns:CommandExecutorPort"
                   wsdlURL="soap.wsdl"
                   xmlns:ns="http://soap.jax.drools.org/" >
    <cxf:properties>
      <entry key="dataFormat" value="MESSAGE"/>
      <entry key="defaultOperationName" value="execute"/>
    </cxf:properties>
  </cxf:cxfEndpoint>

  <!-- Leave this, as it's needed to make Camel "drools" aware -->
  <bean id="kiePolicy" class="org.kie.camel.component.KiePolicy" />

  <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">    
    <!-- 
     ! Routes incoming messages from end point id="rsServer".
     ! Example route unmarshals the messages with xstream and executes against ksession1.
     ! Copy and paste this element, changing marshallers and the 'to' uri, to target different sessions, as needed.
     !-->
     
    <route>
       <from uri="cxfrs://bean://rsServer"/>
       <policy ref="kiePolicy">
         <unmarshal ref="xstream" />
         <to uri="kie:ksession1" />
         <marshal ref="xstream" />
       </policy>
    </route>    

    <route>
      <from uri="cxf://bean://soapServer"/>
      <policy ref="kiePolicy">
        <unmarshal ref="xstream" />       
        <to uri="kie:ksession1" />
        <marshal ref="xstream" />
      </policy>
    </route>
        
  </camelContext>
  
</beans>