3.1. What is New and Noteworthy in Drools 5.4.0.Beta1

The simulation project that was first started in 2009, http://blog.athico.com/2009/07/drools-simulation-and-test-framework.html, has undergone an overhaul and is now in a usable state. We have not yet promoted this to knowledge-api, so it is considered unstable and will change during the beta process. For now, though, the adventurous can take a look at the unit tests and start playing.

The Simulator runs the Simulation. The Simulation is your scenario definition. A Simulation consists of 1 to n Paths; you can think of a Path as a sort of thread. A Path is a chronological line on which Steps are specified at given temporal distances from the start. You don't specify an absolute time for a Step, say 12:00am; instead it is always a relative time distance from the start of the Simulation. Each Step contains one or more Commands, e.g. create a StatefulKnowledgeSession, insert an object or start a process. These are the very same commands that you would use to script a knowledge session via batch execution, so it re-uses existing concepts.

  • 1 Simulation

    • 1..n Paths

      • 1..n Steps

        • 1..n Commands

All the steps, from all paths, are added to a priority queue which is ordered by the temporal distance, and this allows us to incrementally execute the engine using a time-slicing approach. The simulator pops the steps off the queue in turn. For each Step it increments the engine clock and then executes all of that Step's Commands.
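
The sketch below mimics that loop with plain Java collections to make the time-slicing idea concrete; SimpleStep and everything else in it are illustrative stand-ins, not the actual Simulator internals.

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Illustration only: a minimal stand-in for a Step, holding its temporal
// distance from the simulation start and the commands to run at that point.
class SimpleStep {
    final long distanceMillis;
    final List<Runnable> commands;

    SimpleStep( long distanceMillis, List<Runnable> commands ) {
        this.distanceMillis = distanceMillis;
        this.commands = commands;
    }
}

public class TimeSlicingSketch {
    public static void main( String[] args ) {
        // steps from all paths share a single queue, ordered by temporal distance
        PriorityQueue<SimpleStep> queue = new PriorityQueue<SimpleStep>(
                11,
                new Comparator<SimpleStep>() {
                    public int compare( SimpleStep a, SimpleStep b ) {
                        return Long.signum( a.distanceMillis - b.distanceMillis );
                    }
                } );

        // insertion order does not matter; the queue orders by distance from the start
        queue.add( new SimpleStep( 2000, Arrays.<Runnable>asList( new Runnable() {
            public void run() { System.out.println( "path1: insert object, fire rules" ); }
        } ) ) );
        queue.add( new SimpleStep( 800, Arrays.<Runnable>asList( new Runnable() {
            public void run() { System.out.println( "path2: start process" ); }
        } ) ) );

        long clock = 0; // pseudo clock, in ms from the simulation start
        while ( !queue.isEmpty() ) {
            SimpleStep step = queue.remove();               // earliest remaining step, from any path
            clock = step.distanceMillis;                    // advance the clock to the step's time
            System.out.println( "clock at " + clock + "ms" );
            for ( Runnable cmd : step.commands ) {
                cmd.run();                                  // execute the step's commands
            }
        }
    }
}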

Here is an example Command (notice it uses the same Commands as used by the CommandExecutor):

new InsertObjectCommand( new Person( "darth", 97 ) )

Commands can be grouped together, especially assertion commands, into test groups. The test groups are mapped to JUnit "test methods"; as they pass or fail, a specialised JUnit Runner updates the Eclipse GUI, as illustrated in the image above, which shows two passed test groups named "test1" and "test2".

Using the JUnit integration is trivial. Just annotate the class with @RunWith(JUnitSimulationRunner.class). Any method that is annotated with @Test and returns a Simulation instance will then be invoked, and the returned Simulation instance will be executed in the Simulator. As test groups are executed, the JUnit GUI is updated.
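
A minimal test class along these lines might look like the sketch below; the package names for JUnitSimulationRunner, Simulation and SimulationImpl are assumptions based on the impl classes referenced elsewhere in this section.

import org.junit.Test;
import org.junit.runner.RunWith;

import org.drools.simulation.Simulation;
import org.drools.simulation.impl.JUnitSimulationRunner;
import org.drools.simulation.impl.SimulationImpl;

@RunWith(JUnitSimulationRunner.class)
public class MySimulationTest {

    // any @Test method returning a Simulation is picked up by the runner
    @Test
    public Simulation helloWorldSimulation() {
        Simulation simulation = new SimulationImpl();
        // ... add paths, steps and commands as shown in the rest of this section ...
        return simulation; // the runner executes this and reports each test group as pass/fail
    }
}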

When executing commands on a KnowledgeBuilder, KnowledgeBase or StatefulKnowledgeSession, the system assumes a "register" approach. To get a feel for this, look at org.drools.simulation.impl.SimulationTest on GitHub (the path may change over time).

cmds.add( new NewKnowledgeBuilderCommand( null ) );
cmds.add( new SetVariableCommandFromLastReturn( "path1",
                                                KnowledgeBuilder.class.getName() ) );

cmds.add( new KnowledgeBuilderAddCommand( ResourceFactory.newByteArrayResource( str.getBytes() ),
                                          ResourceType.DRL, null ) );

Notice the set command. "path1" is the context; each path has its own variable context, and all paths inherit from a "root" context. KnowledgeBuilder.class.getName() is the variable name under which the return value of the last command is stored. As mentioned before, we treat those class names as registers: any further command that attempts to operate on a knowledge builder will use whatever is assigned to that register, as in the case of KnowledgeBuilderAddCommand. This allows multiple kbuilders, kbases and ksessions to exist in one context under different variable names, but only the one assigned to the register name is the one currently executed on.
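
Continuing the same pattern, a kbase and a ksession are typically created and bound to their own registers in the same way; the command class names below mirror the kbuilder example above and are intended as a sketch of the register idea rather than a definitive listing.

cmds.add( new NewKnowledgeBaseCommand( null ) );
cmds.add( new SetVariableCommandFromLastReturn( "path1",
                                                KnowledgeBase.class.getName() ) );
// builds the packages from the kbuilder register into the kbase register
cmds.add( new KnowledgeBaseAddKnowledgePackagesCommand() );

cmds.add( new NewStatefulKnowledgeSessionCommand( null ) );
cmds.add( new SetVariableCommandFromLastReturn( "path1",
                                                StatefulKnowledgeSession.class.getName() ) );

// these now operate on whatever is bound to the StatefulKnowledgeSession register
cmds.add( new InsertObjectCommand( new Person( "darth", 97 ) ) );
cmds.add( new FireAllRulesCommand() );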

The code below shows the rough outline used in SimulationTest:

Simulation simulation = new SimulationImpl();
PathImpl path = new PathImpl( simulation,
                              "path1" );                              
simulation.getPaths().put( "path1",
                           path );                                      
List<Step> steps = new ArrayList<Step>();
path.setSteps( steps );
List<Command> cmds = new ArrayList<Command>();
// ... add commands to the step here ...
// create a step at a temporal distance of 2000ms from the start
steps.add( new StepImpl( path,
                         cmds,
                         2000 ) ); 

We know the above looks quite verbose. SimulationTest just shows our low-level canonical model; the idea is that higher-level representations are built on top of this. As this is a builder API we are currently focusing on two sets of fluents, compact and standard. We will also work on a spreadsheet UI for building these, and eventually a dedicated textual DSL.

The compact fluent is designed to provide the absolute minimum necessary to run against a single ksession. A good place to start is org.drools.simulation.impl.CompactFluentTest, a snippet of which is shown below. Notice we bind the inserted "yoda" Person to "y" and can then assert on it. Currently the expression inside the test string is executed using MVEL. The eventual goal is to build out a set of Hamcrest matchers that will allow assertions against the state of the engine, such as which rules have fired and, optionally, with what data.

FluentCompactSimulation f = new FluentCompactSimulationImpl();
f.newStatefulKnowledgeSession()
    .getKnowledgeBase()
    .addKnowledgePackages( ResourceFactory.newByteArrayResource( str.getBytes() ),
                           ResourceType.DRL )
    .end()
    .newStep( 100 ) // increases the time 100ms
    .insert( new Person( "yoda",
                         150 ) ).set( "y" )
    .fireAllRules()
    // show testing inside of ksession execution
    .test( "y.name == 'yoda'" )
    .test( "y.age == 160" );

Note that the test is not executed at build time; it builds a script to be executed later. The script underneath matches what you saw in SimulationTest. The way to run a simulation manually is shown below, although you already saw in SimulationTest that JUnit will execute these automatically. We'll improve this over time.

SimulationImpl sim = (SimulationImpl) ((FluentCompactSimulationImpl) f).getSimulation();
Simulator simulator = new Simulator( sim,
                                     new Date().getTime() );
simulator.run();

The standard fluent is almost a one-to-one mapping to the canonical path, step and command structure in SimulationTest, just more compact. Start by looking at org.drools.simulation.impl.StandardFluentTest. This fluent allows you to run any number of paths and steps, and gives a lot more control over multiple kbuilders, kbases and ksessions.

FluentStandardSimulation f = new FluentStandardSimulationImpl();      
f.newPath("init")
     .newStep( 0 )
          // set to ROOT, as I want paths to share this
         .newKnowledgeBuilder()
             .add( ResourceFactory.newByteArrayResource( str.getBytes() ),
                   ResourceType.DRL )
         .end(ContextManager.ROOT, KnowledgeBuilder.class.getName() )
         .newKnowledgeBase()
             .addKnowledgePackages()
         .end(ContextManager.ROOT, KnowledgeBase.class.getName() )
    .end()
 .newPath( "path1" )
    .newStep( 1000 )
        .newStatefulKnowledgeSession()
            .insert( new Person( "yoda", 150 ) ).set( "y" )
            .fireAllRules()
            .test( "y.name == 'yoda'" )
            .test( "y.age == 160" )
        .end()
    .end()
 .newPath( "path2" )
    .newStep( 800 )
        .newStatefulKnowledgeSession()
            .insert( new Person( "darth", 70 ) ).set( "d" )
            .fireAllRules()
            .test( "d.name == 'darth'" )
            .test( "d.age == 80" )
        .end()
    .end()
 .end();

There is still an awful lot to do. This is designed to eventually provide a unified simulation and testing environment for rules, workflow and event processing over time, and eventually also over distributed architectures. Remaining work includes:

  • Flesh out the API to support more commands, and also to encompass jBPM commands

  • Improve out-of-the-box usability, including moving interfaces to knowledge-api and hiding "new" constructors behind factory methods

  • Commands are already marshallable to JSON and XML. They should be updated to allow full round tripping between Java API commands and JSON/XML documents.

  • Develop Hamcrest matchers for testing state:

    • What rule(s) fired, including optionally what data was used with the executing rule (Drools)

    • What rules are active for a given fact

    • What rules activated and de-activated for a given fact change

    • Process variable state (jBPM)

    • Wait node states (jBPM)

  • Design and build tabular authoring tools via spreadsheets, targeting the web with round tripping to Excel.

  • Design and develop a textual DSL for authoring - maybe part of DRL (a long-term task).