4.1. What is New and Noteworthy in Drools 5.4.0

The simulation project that was first started in 2009, http://blog.athico.com/2009/07/drools-simulation-and-test-framework.html, has undergone an overhaul and is now in a usable state. We have not yet promoted this to knowledge-api, so it's considered unstable and will change during the beta process. For now, though, adventurous people can take a look at the unit tests and start playing.

The Simulator runs the Simulation. The Simulation is your scenario definition. The Simulation consists of 1 to n Paths; you can think of a Path as a sort of Thread. The Path is a chronological line on which Steps are specified at given temporal distances from the start. You don't specify a time unit for the Step, say 12:00am; instead it is always a relative time distance from the start of the Simulation (note: in Beta2 this will be a relative time distance from the last Step in the same Path). Each Step contains one or more Commands, e.g. create a StatefulKnowledgeSession, insert an object, or start a process. These are the very same commands that you would use to script a knowledge session using the batch execution, so it's re-using existing concepts.

  • Simulation

    • 1..n Paths

      • 1..n Steps

        • 1..n Commands

All the Steps, from all Paths, are added to a priority queue ordered by their temporal distance, which allows us to incrementally execute the engine using a time-slicing approach. The Simulator pops the Steps off the queue in turn. For each Step it increments the engine clock and then executes all of the Step's Commands.
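Conceptually, the execution loop looks something like this (a simplified sketch with illustrative names, not the actual Simulator source):

// conceptual sketch only: a priority queue of Steps drives the engine clock
while ( !stepQueue.isEmpty() ) {
    Step step = stepQueue.remove();                   // Steps come out ordered by temporal distance
    long targetTime = startTime + step.getTemporalDistance();
    clock.advanceTime( targetTime - clock.getCurrentTime(),
                       TimeUnit.MILLISECONDS );       // move the engine clock forward to the Step's time
    for ( Command command : step.getCommands() ) {
        execute( command );                           // run each Command against the current registers
    }
}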

Here is an example Command (notice it uses the same Commands as used by the CommandExecutor):

new InsertObjectCommand( new Person( "darth", 97 ) )

Commands can be grouped together, especially assertion commands, via test groups. The test groups are mapped to JUnit "test methods", so as they pass or fail a specialised JUnit Runner updates the Eclipse JUnit GUI accordingly, for example showing two passed test groups named "test1" and "test2".

Using the JUnit integration is trivial. Just annotate the class with @RunWith(JUnitSimulationRunner.class). Any method that is annotated with @Test and returns a Simulation instance will then be invoked, and the returned Simulation will be executed in the Simulator. As test groups are executed, the JUnit GUI is updated.
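For example, a test class might look like this (a minimal sketch; the class name and the buildSimulation() helper are illustrative, not part of the API):

// illustrative sketch of a JUnit test class wired to the simulation runner
@RunWith(JUnitSimulationRunner.class)
public class MySimulationTest {

    @Test
    public Simulation personAgeScenario() {
        // build the Simulation here (canonical model or one of the fluents)
        // and return it; the runner executes it and reports each test group
        return buildSimulation();
    }
}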

When executing any commands on a KnowledgeBuilder, KnowledgeBase or StatefulKnowledgeSession the system assumes a "register" approach. To get a feel for this, look at org.drools.simulation.impl.SimulationTest on GitHub (the path may change over time).

cmds.add( new NewKnowledgeBuilderCommand( null ) );
cmds.add( new SetVariableCommandFromLastReturn( "path1",
                                                KnowledgeBuilder.class.getName() ) );

cmds.add( new KnowledgeBuilderAddCommand( ResourceFactory.newByteArrayResource( str.getBytes() ),
                                          ResourceType.DRL, null ) );

Notice the set command. "path1" is the context; each Path has its own variable context, and all Paths inherit from a "root" context. KnowledgeBuilder.class.getName() is the name under which we store the return value of the last command. As mentioned before, we treat those class names as registers: any further command that attempts to operate on a knowledge builder will use whatever is assigned to that register, as in the case of KnowledgeBuilderAddCommand. This allows multiple kbuilders, kbases and ksessions to exist in one context under different variable names, but only the one assigned to the register name is the one currently executed on.
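Following the same register pattern, a knowledge base and a session could each be bound to their own name in the path's context. The sketch below assumes the NewKnowledgeBaseCommand and NewStatefulKnowledgeSessionCommand classes from the existing command API and may not match SimulationTest exactly:

cmds.add( new NewKnowledgeBaseCommand( null ) );
cmds.add( new SetVariableCommandFromLastReturn( "path1",
                                                KnowledgeBase.class.getName() ) );

cmds.add( new NewStatefulKnowledgeSessionCommand( null ) );
cmds.add( new SetVariableCommandFromLastReturn( "path1",
                                                StatefulKnowledgeSession.class.getName() ) );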

The code below shows the rough outline used in SimulationTest:

Simulation simulation = new SimulationImpl();
PathImpl path = new PathImpl( simulation,
                              "path1" );                              
simulation.getPaths().put( "path1",
                           path );                                      
List<Step> steps = new ArrayList<Step>();
path.setSteps( steps );
List<Command> cmds = new ArrayList<Command>();
// ... add Commands to the step here ...
// create a step at temporal distance of 2000ms from start
steps.add( new StepImpl( path,
                         cmds,
                         2000 ) ); 

We know the above looks quite verbose. SimulationTest just shows our low-level canonical model; the idea is that higher-level representations are built on top of this. As this is a builder API, we are currently focusing on two sets of fluents: compact and standard. We will also work on a spreadsheet UI for building these, and eventually a dedicated textual DSL.

The compact fluent is designed to provide the absolute minimum necessary to run against a single ksession. A good place to start is org.drools.simulation.impl.CompactFluentTest, a snippet of which is shown below. Notice we set "yoda" to "y" and can then assert on that. Currently the expression inside the test string is executed using MVEL. The eventual goal is to build out a set of Hamcrest matchers that will allow assertions against the state of the engine, such as which rules have fired, and optionally with what data.

FluentCompactSimulation f = new FluentCompactSimulationImpl();
f.newStatefulKnowledgeSession()
    .getKnowledgeBase()
    .addKnowledgePackages( ResourceFactory.newByteArrayResource( str.getBytes() ),
                           ResourceType.DRL )
    .end()
    .newStep( 100 ) // increases the time 100ms
    .insert( new Person( "yoda",
                         150 ) ).set( "y" )
    .fireAllRules()
    // show testing inside of ksession execution
    .test( "y.name == 'yoda'" )
    .test( "y.age == 160" );

Note that the test is not executed at build time; it builds a script to be executed later. The script underneath matches what you saw in SimulationTest. The way to run a simulation manually is shown below, although, as you already saw in SimulationTest, JUnit will execute these automatically. We'll improve this over time.

SimulationImpl sim = (SimulationImpl) ((FluentCompactSimulationImpl) f).getSimulation();
Simulator simulator = new Simulator( sim,
                                     new Date().getTime() );
simulator.run();

The standard fluent is almost a 1 to 1 mapping to the canonical Path, Step and Command structure in SimulationTest, just more compact. Start by looking at org.drools.simulation.impl.StandardFluentTest. This fluent allows you to run any number of paths and steps, along with a lot more control over multiple kbuilders, kbases and ksessions.

FluentStandardSimulation f = new FluentStandardSimulationImpl();      
f.newPath("init")
     .newStep( 0 )
          // set to ROOT, as I want paths to share this
         .newKnowledgeBuilder()
             .add( ResourceFactory.newByteArrayResource( str.getBytes() ),
                   ResourceType.DRL )
         .end(ContextManager.ROOT, KnowledgeBuilder.class.getName() )
         .newKnowledgeBase()
             .addKnowledgePackages()
         .end(ContextManager.ROOT, KnowledgeBase.class.getName() )
    .end()
 .newPath( "path1" )
    .newStep( 1000 )
        .newStatefulKnowledgeSession()
            .insert( new Person( "yoda", 150 ) ).set( "y" )
            .fireAllRules()
            .test( "y.name == 'yoda'" )
            .test( "y.age == 160" )
        .end()
    .end()
 .newPath( "path2" )
    .newStep( 800 )
        .newStatefulKnowledgeSession()
            .insert( new Person( "darth", 70 ) ).set( "d" )
            .fireAllRules()
            .test( "d.name == 'darth'" )
            .test( "d.age == 80" )
        .end()
    .end()
 .end();

There is still an awful lot to do. Over time this is designed to provide a unified simulation and testing environment for rules, workflow and event processing, and eventually also over distributed architectures. Planned work includes:

  • Flesh out the API to support more commands, and also to encompass jBPM commands

  • Improve out-of-the-box usability, including moving interfaces to knowledge-api and hiding "new" constructors with factory methods

  • Commands are already marshallable to JSON and XML. They should be updated to allow full round tripping between Java API commands and JSON/XML documents.

  • Develop Hamcrest matchers for testing state

    • What rule(s) fired, including optionally what data was used with the executing rule (Drools)

    • What rules are active for a given fact

    • What rules activated and de-activated for a given fact change

    • Process variable state (jBPM)

    • Wait node states (jBPM)

  • Design and build tabular authoring tools via spreadsheet, targeting the web, with round tripping to Excel.

  • Design and develop a textual DSL for authoring - maybe part of DRL (long-term task).

Currently, when in a RHS you invoke update() or modify() on a given object, it triggers a re-evaluation of all patterns of the matching object type in the knowledge base. As some have experienced, this can be a problem, often leading to unwanted and useless evaluations and, in the worst cases, to infinite recursions. Until now the only workaround was to split up your objects into smaller ones having a 1 to 1 relationship with the original object.

This new feature allows the pattern matching to react only to modifications of properties actually constrained or bound inside a given pattern. That helps with performance and recursion, and avoids artificial object splitting. The implementation is bit-mask based, so it is very efficient. When the engine executes a modify statement it uses a bit mask of the fields being changed; a pattern will only respond if it has an overlapping bit mask. This does not work for update(), and is one of the reasons why we promote modify(), as it encapsulates the field changes within the statement.

By default this feature is off, in order to keep the behavior of the rule engine backward compatible with former releases. When you want to activate it on a specific bean you have to annotate it with @propertyReactive. This annotation works both on DRL type declarations:

declare Person
    @propertyReactive
    firstName : String
    lastName : String
end

and on Java classes:

@PropertyReactive
public static class Person {
    private String firstName;
    private String lastName;
}

In this way, for instance, if you have a rule like the following:

rule "Every person named Mario is a male" when
    $person : Person( firstName == "Mario" )
then
    modify ( $person )  { setMale( true ) }
end

you won't have to add the no-loop attribute to it in order to avoid an infinite recursion, because the engine recognizes that the pattern matches on the 'firstName' property while the RHS of the rule modifies the 'male' one. Moreover, on Java classes you can also annotate any method to declare that its invocation actually modifies other properties. For instance, in the Person class above you could have a method like:

@Modifies( { "firstName", "lastName" } )
public void setName(String name) {
    String[] names = name.split("\\s");
    this.firstName = names[0];
    this.lastName = names[1];
}

That means that if a rule has a RHS like the following:

modify($person) { setName("Mario Fusco") }

it will correctly recognize that the values of both the 'firstName' and 'lastName' properties could potentially have been modified and act accordingly, without missing the re-evaluation of the patterns constrained on them. At the moment the usage of @Modifies is allowed only on methods, not on fields. This is coherent with the most common scenario, where @Modifies is used on methods that are not directly related to a single class field, as with Person.setName() in the former example. Also note that @Modifies is not transitive: if another method internally invokes Person.setName(), it is not enough to annotate it with @Modifies( { "name" } ); it is necessary to use @Modifies( { "firstName", "lastName" } ) on that method as well. Very likely @Modifies transitivity will be implemented in the next release.
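For example, consider a hypothetical method that simply delegates to Person.setName() (shown only to illustrate the lack of transitivity):

// hypothetical method: even though it only delegates to setName(), it must
// declare the really affected properties itself, because @Modifies is not transitive
@Modifies( { "firstName", "lastName" } )
public void setFullName(String fullName) {
    setName(fullName);
}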

Regarding nested accessors, the engine will be notified only of changes to top level fields. In other words a pattern match like:

Person( address.city.name == "London" )

will be re-evaluated only for modifications of the 'address' property of a Person object. In the same way, the constraint analysis is currently strictly limited to what is inside a pattern. Another example may help to clarify this. An LHS like the following:

$p : Person( )
Car( owner == $p.name )

will not listen for modifications of the person's name, while this one will:

Person( $name : name )
Car( owner == $name )

To overcome this limitation it is possible to annotate the pattern with @watch as follows:

$p : Person( ) @watch( name )
Car( owner == $p.name )

Indeed, annotating a pattern with @watch allows you to modify the inferred set of properties for which that pattern will react. The properties named in the @watch annotation are added to the ones automatically inferred. It is also possible to explicitly exclude one or more of them by prepending their name with a !, and to make the pattern listen to all or none of the properties of the type used in the pattern with the wildcards * and !* respectively. So, for example, you can annotate a pattern in the LHS of a rule like:

// listens for changes on both firstName (inferred) and lastName
Person( firstName == $expectedFirstName ) @watch( lastName )

// listens for all the properties of the Person bean
Person( firstName == $expectedFirstName ) @watch( * )

// listens for changes on lastName and explicitly exclude firstName
Person( firstName == $expectedFirstName ) @watch( lastName, !firstName )

// listens for changes on all the properties except the age one
Person( firstName == $expectedFirstName ) @watch( *, !age )

Since it doesn't make sense to use this annotation on a pattern using a type not annotated with @PropertyReactive, the rule compiler will raise a compilation error if you try to do so. Duplicated usage of the same property in @watch (for example @watch( firstName, !firstName )) will also result in a compilation error. In a future release we will make the automatic detection of the properties to listen to smarter, by performing the analysis even outside of the pattern.

It is also possible to enable this feature by default on all the types of your model, or to completely disallow it, by using an option of the KnowledgeBuilderConfiguration. In particular this new PropertySpecificOption can have one of the following three values:

- DISABLED => the feature is turned off and all the related annotations are just ignored
- ALLOWED => this is the default behavior: types are not property reactive unless they are annotated with @PropertyReactive
- ALWAYS => all types are property reactive by default

So, for example, to have a KnowledgeBuilder generating property reactive types by default you could do:

KnowledgeBuilderConfiguration config = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration();
config.setOption(PropertySpecificOption.ALWAYS);
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(config);

In this last case it will be possible to disable the property reactivity feature on a specific type by annotating it with @ClassReactive.
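For example (a minimal sketch with a made-up bean, assuming the ALWAYS option above has been configured):

// opts this single type out of property reactivity even though ALWAYS is configured
@ClassReactive
public static class Account {
    private String owner;
    private long balance;
    // getters and setters omitted
}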

This API is experimental: future backwards incompatible changes are possible.

Using the new fluent simulation testing, you can test your rules in unit tests more easily:

    @Test
    public void rejectMinors() {
        SimulationFluent simulationFluent = new DefaultSimulationFluent();
        Driver john = new Driver("John", "Smith", new LocalDate().minusYears(10));
        Car mini = new Car("MINI-01", CarType.SMALL, false, new BigDecimal("10000.00"));
        PolicyRequest johnMiniPolicyRequest = new PolicyRequest(john, mini);
        johnMiniPolicyRequest.addCoverageRequest(new CoverageRequest(CoverageType.COLLISION));
        johnMiniPolicyRequest.addCoverageRequest(new CoverageRequest(CoverageType.COMPREHENSIVE));
        simulationFluent
        .newKnowledgeBuilder()
            .add(ResourceFactory.newClassPathResource("org/drools/examples/carinsurance/rule/policyRequestApprovalRules.drl"),
                    ResourceType.DRL)
            .end()
        .newKnowledgeBase()
            .addKnowledgePackages()
            .end()
        .newStatefulKnowledgeSession()
            .insert(john).set("john")
            .insert(mini).set("mini")
            .insert(johnMiniPolicyRequest).set("johnMiniPolicyRequest")
            .fireAllRules()
            .test("johnMiniPolicyRequest.automaticallyRejected == true")
            .test("johnMiniPolicyRequest.rejectedMessageList.size() == 1")
            .end()
        .runSimulation();
    }

You can even test your CEP rules in unit tests without suffering from slow tests:

    @Test
    public void lyingAboutAge() {
        SimulationFluent simulationFluent = new DefaultSimulationFluent();
        Driver realJohn = new Driver("John", "Smith", new LocalDate().minusYears(10));
        Car realMini = new Car("MINI-01", CarType.SMALL, false, new BigDecimal("10000.00"));
        PolicyRequest realJohnMiniPolicyRequest = new PolicyRequest(realJohn, realMini);
        realJohnMiniPolicyRequest.addCoverageRequest(new CoverageRequest(CoverageType.COLLISION));
        realJohnMiniPolicyRequest.addCoverageRequest(new CoverageRequest(CoverageType.COMPREHENSIVE));
        realJohnMiniPolicyRequest.setAutomaticallyRejected(true);
        realJohnMiniPolicyRequest.addRejectedMessage("Too young.");
        Driver fakeJohn = new Driver("John", "Smith", new LocalDate().minusYears(30));
        Car fakeMini = new Car("MINI-01", CarType.SMALL, false, new BigDecimal("10000.00"));
        PolicyRequest fakeJohnMiniPolicyRequest = new PolicyRequest(fakeJohn, fakeMini);
        fakeJohnMiniPolicyRequest.addCoverageRequest(new CoverageRequest(CoverageType.COLLISION));
        fakeJohnMiniPolicyRequest.addCoverageRequest(new CoverageRequest(CoverageType.COMPREHENSIVE));
        fakeJohnMiniPolicyRequest.setAutomaticallyRejected(false);
        simulationFluent
        .newStep(0)
        .newKnowledgeBuilder()
            .add(ResourceFactory.newClassPathResource("org/drools/examples/carinsurance/cep/policyRequestFraudDetectionRules.drl"),
                    ResourceType.DRL)
            .end()
        .newKnowledgeBase()
            .addKnowledgePackages()
            .end(World.ROOT, KnowledgeBase.class.getName())
        .newStatefulKnowledgeSession()
            .end()
        .newStep(1000)
        .getStatefulKnowledgeSession()
            .insert(realJohn).set("realJohn")
            .insert(realMini).set("realMini")
            .insert(realJohnMiniPolicyRequest).set("realJohnMiniPolicyRequest")
            .fireAllRules()
            .test("realJohnMiniPolicyRequest.requiresManualApproval == false")
            .end()
        .newStep(5000)
        .getStatefulKnowledgeSession()
            .insert(fakeJohn).set("fakeJohn")
            .insert(fakeMini).set("fakeMini")
            .insert(fakeJohnMiniPolicyRequest).set("fakeJohnMiniPolicyRequest")
            .fireAllRules()
            .test("fakeJohnMiniPolicyRequest.requiresManualApproval == true")
            .end()
        .runSimulation();
    }

Until now, implementing TSP or Vehicle Routing-like problems in Planner was hard. The new chaining support makes it easy.

You simply declare that a planning variable (previousAppearance) of this planning entity (VrpCustomer) is chained and therefore possibly references another planning entity (VrpCustomer) itself, creating a chain with that entity.

public class VrpCustomer implements VrpAppearance {

    ...
    @PlanningVariable(chained = true)
    @ValueRanges({
            @ValueRange(type = ValueRangeType.FROM_SOLUTION_PROPERTY, solutionProperty = "vehicleList"),
            @ValueRange(type = ValueRangeType.FROM_SOLUTION_PROPERTY, solutionProperty = "customerList",
                    excludeUninitializedPlanningEntity = true)})
    public VrpAppearance getPreviousAppearance() {
        return previousAppearance;
    }
    ...
}

This triggers automatic chain correction.

Without any extra boilerplate code, this is compatible with Planner's existing functionality. For more information, read the Planner reference manual.