JBoss.org Community Documentation

Drools Planner User Guide

Version 5.3.0.CR1


1. Planner introduction
1.1. What is Drools Planner?
1.2. What is a planning problem?
1.2.1. A planning problem is NP-complete
1.2.2. A planning problem has (hard and soft) constraints
1.2.3. A planning problem has a huge search space
1.3. Status of Drools Planner
1.4. Getting Drools Planner and running the examples
1.4.1. Getting the release package and running the examples
1.4.2. Get it with maven, gradle, ivy, buildr or ANT
1.4.3. Build it from source
1.5. Questions, issues and blog
2. Use cases and examples
2.1. Introduction
2.2. The n queens example
2.2.1. Problem statement
2.2.2. Solution(s)
2.2.3. Screenshot
2.2.4. Problem size
2.2.5. Domain class diagram
2.3. The Manners 2009 example
2.3.1. Problem statement
2.4. The Traveling Salesman Problem (TSP) example
2.4.1. Problem statement
2.5. The Traveling Tournament Problem (TTP) example
2.5.1. Problem statement
2.5.2. Simple and smart implementation
2.5.3. Problem size
2.6. Cloud balancing
2.6.1. Problem statement
2.7. The ITC 2007 curriculum course example
2.7.1. Problem statement
2.8. The ITC 2007 examination example
2.8.1. Problem statement
2.8.2. Problem size
2.8.3. Domain class diagram
2.9. The patient admission scheduling (PAS) example (hospital bed planning)
2.9.1. Problem statement
2.10. The INRC 2010 nurse rostering example
2.10.1. Problem statement
3. Planner configuration
3.1. Overview
3.2. Solver configuration
3.2.1. Solver configuration by XML file
3.2.2. Solver configuration by Java API
3.3. Model your planning problem
3.3.1. Is this class a problem fact or planning entity?
3.3.2. Problem fact
3.3.3. Planning entity and planning variables
3.3.4. Planning value and planning value ranges
3.3.5. Planning problem and planning solution
3.4. Solver
3.4.1. The Solver interface
3.4.2. Solving a problem
3.4.3. Environment mode
4. Score calculation with a rule engine
4.1. Rule based score calculation
4.2. Choosing a Score implementation
4.2.1. The ScoreDefinition interface
4.2.2. SimpleScore
4.2.3. HardAndSoftScore
4.2.4. Implementing a custom Score
4.3. Defining the score rules source
4.3.1. A scoreDrl resource on the classpath
4.3.2. A RuleBase (possibly defined by Guvnor)
4.4. Implementing a score rule
4.5. Aggregating the score rules into the Score
4.6. Delta based score calculation
4.7. Tips and tricks
5. Optimization algorithms
5.1. The size of real world problems
5.2. The secret sauce of Drools Planner
5.3. Optimization algorithms overview
5.4. SolverPhase
5.5. Which optimization algorithms should I use?
5.6. Logging level: What is the Solver doing?
5.7. Custom SolverPhase
6. Exact methods
6.1. Overview
6.2. Brute Force
6.2.1. Algorithm description
6.2.2. Configuration
6.3. Branch and bound
6.3.1. Algorithm description
6.3.2. Configuration
7. Construction heuristics
7.1. Overview
7.2. First Fit
7.2.1. Algorithm description
7.2.2. Configuration
7.3. First Fit Decreasing
7.3.1. Algorithm description
7.3.2. Configuration
7.4. Best Fit
7.4.1. Algorithm description
7.4.2. Configuration
7.5. Best Fit Decreasing
7.5.1. Algorithm description
7.5.2. Configuration
7.6. Cheapest insertion
7.6.1. Algorithm description
7.6.2. Configuration
8. Local search solver
8.1. Overview
8.2. Hill climbing (simple local search)
8.2.1. Algorithm description
8.3. Tabu search
8.3.1. Algorithm description
8.4. Simulated annealing
8.4.1. Algorithm description
8.5. About neighborhoods, moves and steps
8.5.1. A move
8.5.2. Move generation
8.5.3. A step
8.5.4. Getting stuck in local optima
8.6. Deciding the next step
8.6.1. Selector
8.6.2. Acceptor
8.6.3. Forager
8.7. Best solution
8.8. Termination
8.8.1. TimeMillisSpendTermination
8.8.2. StepCountTermination
8.8.3. ScoreAttainedTermination
8.8.4. UnimprovedStepCountTermination
8.8.5. Combining Terminations
8.8.6. Another thread can ask a Solver to terminate early
8.9. Using a custom Selector, Acceptor, Forager or Termination
9. Evolutionary algorithms
9.1. Overview
9.2. Evolutionary Strategies
9.3. Genetic algorithms
10. Benchmarking and tweaking
10.1. Finding the best configuration
10.2. Building a Benchmarker
10.2.1. Adding the extra dependency
10.2.2. Building a basic Benchmarker
10.2.3. Warming up the hotspot compiler
10.3. Summary statistics
10.3.1. Best score summary
10.4. Statistics per data set (graph and CSV)
10.4.1. Best score over time statistic (graph and CSV)
10.4.2. Calculate count per second statistic (graph and CSV)
10.4.3. Memory use statistic (graph and CSV)
11. Repeated planning
11.1. Introduction to repeated planning
11.2. Backup planning
11.3. Continuous planning (windowed planning)
11.4. Real-time planning (event based planning)
Index

Drools Planner optimizes planning problems. It solves use cases, such as:

  • Employee shift rostering: rostering nurses, repairmen, ...

  • Agenda scheduling: scheduling meetings, appointments, maintenance jobs, advertisements, ...

  • Educational timetabling: scheduling lessons, courses, exams, conference presentations, ...

  • Vehicle routing: planning vehicles (trucks, trains, boats, airplanes, ...) with freight and/or people

  • Bin packing: filling containers, trucks, ships and storage warehouses, but also cloud computer nodes, ...

  • Job shop scheduling: planning car assembly lines, machine queue planning, workforce task planning, ...

  • Cutting stock: cutting paper, steel, carpet, ... while minimizing waste

  • Sport scheduling: planning football leagues, baseball leagues, ...

  • Financial optimization: investment portfolio optimization, risk spreading, ...

Every organization faces planning problems: they have a number of things to do and a limited set of constrained resources to do them with.

Drools Planner enables normal Java™ programmers to solve planning problems efficiently. Under the hood, it combines optimization algorithms (including Metaheuristics such as Tabu Search and Simulated Annealing) with the power of score calculation by a rule engine.

Drools Planner, like the rest of Drools, is business-friendly open source software under the Apache Software License 2.0 (layman's explanation).

All the use cases above are probably NP-complete. In layman's terms, this means:

  • It's easy to verify a given solution to a problem in reasonable time.

  • There is no silver bullet to find the optimal solution of a problem in reasonable time (*).

Note

(*) At least, none of the smartest computer scientists in the world have found such a silver bullet yet. But if they find one for 1 NP-complete problem, it will work for every NP-complete problem.

In fact, there's a $1,000,000 reward for anyone who proves whether such a silver bullet actually exists or not.

The implication of this is pretty dire: solving your problem is probably harder than you anticipated, because the 2 common techniques won't suffice:

  • A brute force algorithm (even a smarter variant) will take too long.

  • A quick algorithm, for example in bin packing putting in the largest items first, will return a solution that is usually far from optimal.

Drools Planner does find a good solution in reasonable time for such planning problems.

A planning problem has a number of solutions, which fall into several categories (such as possible, feasible and optimal solutions).

Counterintuitively, the number of possible solutions is huge (if calculated correctly), even with a small dataset. As you'll see in the examples, most instances have a lot more possible solutions than the minimal number of atoms in the known universe (10^80). Because there is no silver bullet to find the optimal solution, any implementation is forced to evaluate at least a subset of all those possible solutions.
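For example, in the n queens example each of the n queens can be put on any of the n rows, so there are n^n possible solutions. A small illustration (plain Java, not part of Drools Planner) of how fast that number grows:

import java.math.BigInteger;

public class SearchSpaceSize {

    public static void main(String[] args) {
        // Each of the n queens can be put on any of the n rows: n^n possible solutions
        for (int n : new int[] {4, 8, 16, 64}) {
            BigInteger solutionCount = BigInteger.valueOf(n).pow(n);
            System.out.println(n + " queens: " + solutionCount.toString().length()
                    + " digits (" + solutionCount + ")");
        }
        // For 64 queens that's a number of more than 100 digits,
        // far beyond the number of atoms in the known universe (10^80).
    }
}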

Drools Planner supports several optimization algorithms to efficiently wade through that incredibly large number of possible solutions. Depending on the use case, some optimization algorithms perform better than others. In Drools Planner it is easy to switch the optimization algorithm, by changing the solver configuration in a few XML lines or by API.

Drools Planner is production ready. The API is almost stable but backward incompatible changes can occur. With the recipe called UpgradeFromPreviousVersionRecipe.txt you can easily upgrade and deal with any backwards incompatible changes between versions. That recipe file is included in every release.

You can download a release of Drools Planner from the Drools download site. To run an example, just open the examples directory and run the script (runExamples.sh on Linux and Mac, or runExamples.bat on Windows) and pick an example in the GUI:

$ cd examples
$ ./runExamples.sh
$ cd examples
$ runExamples.bat

The Drools Planner jars are available on the central maven repository (and the JBoss maven repository).

If you use maven, just add a dependency to drools-planner-core in your project's pom.xml:


    <dependency>
        <groupId>org.drools.planner</groupId>
        <artifactId>drools-planner-core</artifactId>
        <version>5.x</version>
    </dependency>

This is similar for gradle, ivy and buildr.

If you're still using ant (without ivy), copy all the jars from the download zip's binaries directory and manually verify that your classpath doesn't contain duplicate jars.

Your questions and comments are welcome on the user mailing list. Start the subject of your mail with [planner]. You can read/write to the user mailing list without littering your mailbox through this web forum or this newsgroup.

Feel free to report an issue (such as a bug, improvement or a new feature request) for the Drools Planner code or for this manual to the drools issue tracker. Select the component drools-planner.

Pull requests (and patches) are very welcome and get priority treatment! Include a link to the pull request in a JIRA issue and optionally send a mail to the dev mailing list to get the issue fixed fast. By open sourcing your improvements, you'll benefit from our peer review, from improvements made upon your improvements and maybe even from a thank you on our blog.

Check our blog and twitter (Geoffrey De Smet) for news and articles. If Drools Planner helps you solve your problem, don't forget to blog or tweet about it!

Use a good domain model: it will be easier to understand and solve your planning problem with Drools Planner. This is the domain model for the n queens example:

public class Column {
    
    private int index;

    // ... getters and setters
}
public class Row {
    
    private int index;

    // ... getters and setters
}
public class Queen {
    
    private Column column;
    private Row row;

    public int getAscendingDiagonalIndex() {...}
    public int getDescendingDiagonalIndex() {...}

    // ... getters and setters
}
public class NQueens implements Solution<SimpleScore> {
    
    private int n;
    private List<Column> columnList;
    private List<Row> rowList;

    private List<Queen> queenList;

    private SimpleScore score;

    // ... getters and setters
}

A Queen instance has a Column (for example: 0 is column A, 1 is column B, ...) and a Row (its row, for example: 0 is row 0, 1 is row 1, ...). Based on the column and the row, the ascending diagonal line as well as the descending diagonal line can be calculated. The column and row indexes start from the upper left corner of the chessboard.


When 2 queens share the same column, row or diagonal line, such as (*) and (**), they can attack each other.

A single NQueens instance contains a list of all Queen instances. It is the Solution implementation which will be supplied to, solved by and retrieved from the Solver. Notice that in the 4 queens example, NQueens's getN() method will always return 4.
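Once you have built a Solver (as shown below), solving boils down to handing it the planning problem and retrieving the best solution found. A minimal sketch (the exact Solver method names, such as setStartingSolution() and getBestSolution(), are assumptions here; see the Solver section):

        NQueens unsolvedNQueens = createNQueens(4); // building such an instance is shown further below
        solver.setStartingSolution(unsolvedNQueens);
        solver.solve();
        NQueens solvedNQueens = (NQueens) solver.getBestSolution();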

You can build a Solver instance with the XmlSolverConfigurer. Configure it with a solver configuration XML file:

        XmlSolverConfigurer configurer = new XmlSolverConfigurer();
        configurer.configure("/org/drools/planner/examples/nqueens/solver/nqueensSolverConfig.xml");
        Solver solver = configurer.buildSolver();

A solver configuration file looks something like this:


<?xml version="1.0" encoding="UTF-8"?>
<solver>
  <!-- Define the model -->
  <solutionClass>org.drools.planner.examples.nqueens.domain.NQueens</solutionClass>
  <planningEntityClass>org.drools.planner.examples.nqueens.domain.Queen</planningEntityClass>

  <!-- Define the score function -->
  <scoreDrl>/org/drools/planner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
  <scoreDefinition>
    <scoreDefinitionType>SIMPLE</scoreDefinitionType>
  </scoreDefinition>

  <!-- Configure the optimization algorithm(s) -->
  <termination>
    ...
  </termination>
  <constructionHeuristic>
    ...
  </constructionHeuristic>
  <localSearch>
    ...
  </localSearch>
</solver>

Notice the 3 parts in it: the definition of the model, the definition of the score function and the configuration of the optimization algorithm(s).

We'll explain these various parts of a configuration later in this manual.

Drools Planner makes it relatively easy to switch optimization algorithm(s) just by changing the configuration. There's even a Benchmark utility which allows you to play out different configurations against each other and report the most appropriate configuration for your problem. You could for example play out tabu search versus simulated annealing, on 4 queens and 64 queens.

As an alternative to the XML file, a solver configuration can also be configured with the SolverConfig API:

        SolverConfig solverConfig = new SolverConfig();
        solverConfig.setSolutionClass(NQueens.class);
        Set<Class<?>> planningEntityClassSet = new HashSet<Class<?>>();
        planningEntityClassSet.add(Queen.class);
        solverConfig.setPlanningEntityClassSet(planningEntityClassSet);

        solverConfig.setScoreDrlList(
                Arrays.asList("/org/drools/planner/examples/nqueens/solver/nQueensScoreRules.drl"));
        ScoreDefinitionConfig scoreDefinitionConfig = solverConfig.getScoreDefinitionConfig();
        scoreDefinitionConfig.setScoreDefinitionType(
                ScoreDefinitionConfig.ScoreDefinitionType.SIMPLE);

        TerminationConfig terminationConfig = solverConfig.getTerminationConfig();
        // ...
        List<SolverPhaseConfig> solverPhaseConfigList = new ArrayList<SolverPhaseConfig>();
        ConstructionHeuristicSolverPhaseConfig constructionHeuristicSolverPhaseConfig
                = new ConstructionHeuristicSolverPhaseConfig();
        // ...
        solverPhaseConfigList.add(constructionHeuristicSolverPhaseConfig);
        LocalSearchSolverPhaseConfig localSearchSolverPhaseConfig = new LocalSearchSolverPhaseConfig();
        // ...
        solverPhaseConfigList.add(localSearchSolverPhaseConfig);
        solverConfig.setSolverPhaseConfigList(solverPhaseConfigList);
        Solver solver = solverConfig.buildSolver();

It is highly recommended to configure by XML file instead of this API. To dynamically configure a value at runtime, use the XML file as a template and extract the SolverConfig class with getSolverConfig() to configure the dynamic value at runtime:

        XmlSolverConfigurer configurer = new XmlSolverConfigurer();
        configurer.configure("/org/drools/planner/examples/nqueens/solver/nqueensSolverConfig.xml");

        SolverConfig solverConfig = configurer.getSolverConfig();
        solverConfig.getTerminationConfig().setMaximumMinutesSpend(userInput);
        Solver solver = solverConfig.buildSolver();

A planning entity is a JavaBean (POJO) that changes during solving, for example a Queen that changes to another row. A planning problem has multiple planning entities, for example for a single n queens problem, each Queen is a planning entity. But there's usually only 1 planning entity class, for example the Queen class.

A planning entity class needs to be annotated with the @PlanningEntity annotation.

Each planning entity class has 1 or more planning variables. It usually also has 1 or more defining properties. For example in n queens, a Queen is defined by its Column and has a planning variable Row. This means that a Queen's column never changes during solving, while its row does change.

@PlanningEntity
public class Queen {

    private Column column;

    // Planning variables: changes during planning, between score calculations.
    private Row row;

    // ... getters and setters
}
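In this example, the row property is the planning variable of a Queen. As shown in the planning value examples further below, the @PlanningVariable annotation is put on the property's getter; a minimal sketch (the planning value range configuration, which is covered in the planning value section, is omitted here):

    @PlanningVariable
    public Row getRow() {
        return row;
    }

    public void setRow(Row row) {
        this.row = row;
    }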

A planning entity class can have multiple planning variables. For example, a Lecture is defined by its Course and its index in that course (because 1 course has multiple lectures). Each Lecture needs to be scheduled into a Period and a Room, so it has 2 planning variables (period and room). For example: the course Mathematics has 8 lectures per week, of which the first lecture is Monday morning at 08:00 in room 212.

@PlanningEntity
public class Lecture {

    private Course course;
    private int lectureIndexInCourse;

    // Planning variables: changes during planning, between score calculations.
    private Period period;
    private Room room;

    // ...
}

The solver configuration also needs to be made aware of each planning entity class:

<solver>
  ...
  <planningEntityClass>org.drools.planner.examples.nqueens.domain.Queen</planningEntityClass>
  ...
</solver>

Some use cases have multiple planning entity classes. For example: route freight and trains into railway network arcs, where each freight can use multiple trains over its journey and each train can carry multiple freights per arc. Having multiple planning entity classes directly raises the implementation complexity of your use case.

Some optimization algorithms work more efficiently if they have an estimation of which planning entities are more difficult to plan. For example: in bin packing bigger items are harder to fit, in course scheduling lectures with more students are more difficult to schedule and in n queens the middle queens are more difficult.

Therefore, you can set a difficultyComparatorClass to the @PlanningEntity annotation:

@PlanningEntity(difficultyComparatorClass = CloudAssignmentDifficultyComparator.class)
public class CloudAssignment {
    // ...
}
public class CloudAssignmentDifficultyComparator implements Comparator<Object> {

    public int compare(Object a, Object b) {
        return compare((CloudAssignment) a, (CloudAssignment) b);
    }

    public int compare(CloudAssignment a, CloudAssignment b) {
        return new CompareToBuilder()
                .append(a.getCloudProcess().getMinimalMultiplicand(), b.getCloudProcess().getMinimalMultiplicand())
                .append(a.getCloudProcess().getId(), b.getCloudProcess().getId())
                .toComparison();
    }

}

Alternatively, you can also set a difficultyWeightFactoryClass to the @PlanningEntity annotation, so you have access to the rest of the problem facts from the solution too:

@PlanningEntity(difficultyWeightFactoryClass = QueenDifficultyWeightFactory.class)
public class Queen {
    // ...
}
public interface PlanningEntityDifficultyWeightFactory {

    Comparable createDifficultyWeight(Solution solution, Object planningEntity);

}
public class QueenDifficultyWeightFactory implements PlanningEntityDifficultyWeightFactory {

    public Comparable createDifficultyWeight(Solution solution, Object planningEntity) {
        NQueens nQueens = (NQueens) solution;
        Queen queen = (Queen) planningEntity;
        int distanceFromMiddle = calculateDistanceFromMiddle(nQueens.getN(), queen.getColumnIndex());
        return new QueenDifficultyWeight(queen, distanceFromMiddle);
    }

    // ...

    public static class QueenDifficultyWeight implements Comparable<QueenDifficultyWeight> {

        private final Queen queen;
        private final int distanceFromMiddle;

        public QueenDifficultyWeight(Queen queen, int distanceFromMiddle) {
            this.queen = queen;
            this.distanceFromMiddle = distanceFromMiddle;
        }

        public int compareTo(QueenDifficultyWeight other) {
            return new CompareToBuilder()
                    // The more difficult queens have a lower distance to the middle
                    .append(other.distanceFromMiddle, distanceFromMiddle) // Decreasing
                    .append(queen.getColumnIndex(), other.queen.getColumnIndex())
                    .toComparison();
        }

    }

}

None of the current planning variable state may be used to compare planning entities. They are likely to be null anyway. For example, a Queen's row variable may not be used.

Some optimization algorithms work more efficiently if they have an estimation of which planning values are stronger, which means they are more likely to satisfy a planning entity. For example: in bin packing bigger containers are more likely to fit an item and in course scheduling bigger rooms are less likely to break the student capacity constraint.

Therefore, you can set a strengthComparatorClass to the @PlanningVariable annotation:

    @PlanningVariable(strengthComparatorClass = CloudComputerStrengthComparator.class)
    // ...
    public CloudComputer getCloudComputer() {
        // ...
    }
public class CloudComputerStrengthComparator implements Comparator<Object> {

    public int compare(Object a, Object b) {
        return compare((CloudComputer) a, (CloudComputer) b);
    }

    public int compare(CloudComputer a, CloudComputer b) {
        return new CompareToBuilder()
                .append(a.getMultiplicand(), b.getMultiplicand())
                .append(b.getCost(), a.getCost()) // Descending (but this is debatable)
                .append(a.getId(), b.getId())
                .toComparison();
    }

}

Alternatively, you can also set a strengthWeightFactoryClass to the @PlanningVariable annotation, so you have access to the rest of the problem facts from the solution too:

    @PlanningVariable(strengthWeightFactoryClass = RowStrengthWeightFactory.class)
    // ...
    public Row getRow() {
        // ...
    }
public interface PlanningValueStrengthWeightFactory {

    Comparable createStrengthWeight(Solution solution, Object planningValue);

}
public class RowStrengthWeightFactory implements PlanningValueStrengthWeightFactory {

    public Comparable createStrengthWeight(Solution solution, Object planningValue) {
        NQueens nQueens = (NQueens) solution;
        Row row = (Row) planningValue;
        int distanceFromMiddle = calculateDistanceFromMiddle(nQueens.getN(), row.getIndex());
        return new RowStrengthWeight(row, distanceFromMiddle);
    }

    // ...

    public static class RowStrengthWeight implements Comparable<RowStrengthWeight> {

        private final Row row;
        private final int distanceFromMiddle;

        public RowStrengthWeight(Row row, int distanceFromMiddle) {
            this.row = row;
            this.distanceFromMiddle = distanceFromMiddle;
        }

        public int compareTo(RowStrengthWeight other) {
            return new CompareToBuilder()
                    // The stronger rows have a lower distance to the middle
                    .append(other.distanceFromMiddle, distanceFromMiddle) // Decreasing (but this is debatable)
                    .append(row.getIndex(), other.row.getIndex())
                    .toComparison();
        }

    }

}

None of the current planning variable state in any of the planning entities may be used to compare planning values. They are likely to be null anyway. For example, none of the row variables of any Queen may be used to determine the strength of a Row.

Most optimization algorithms use the cloneSolution() method to clone the solution each time they encounter a new best solution (so they can recall it later) or to work with multiple solutions in parallel.

The NQueens implementation only deep clones all Queen instances. When the original solution is changed during planning, by changing a Queen, the clone stays the same.

    /**
     * Clone will only deep copy the {@link #queenList}.
     */
    public NQueens cloneSolution() {
        NQueens clone = new NQueens();
        clone.id = id;
        clone.n = n;
        clone.columnList = columnList;
        clone.rowList = rowList;
        List<Queen> clonedQueenList = new ArrayList<Queen>(queenList.size());
        for (Queen queen : queenList) {
            clonedQueenList.add(queen.clone());
        }
        clone.queenList = clonedQueenList;
        clone.score = score;
        return clone;
    }

The cloneSolution() method should only deep clone the planning entities. Notice that the problem facts, such as Column and Row are normally not cloned: even their List instances are not cloned.
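For completeness, here's a sketch of what the Queen.clone() method called above might look like (the id field is an assumption, matching the createNQueens() example below; problem fact references are simply copied, not cloned):

    public Queen clone() {
        Queen clone = new Queen();
        clone.id = id;
        clone.column = column; // shared problem fact reference, not cloned
        clone.row = row; // shared problem fact reference, not cloned
        return clone;
    }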

Build a Solution instance to represent your planning problem, so you can set it on the Solver as the planning problem to solve. For example in n queens, an NQueens instance is created with the required Column and Row instances, with every Queen set to a different column and every Queen's row left null.

    private NQueens createNQueens(int n) {
        NQueens nQueens = new NQueens();
        nQueens.setId(0L);
        nQueens.setN(n);
        List<Column> columnList = new ArrayList<Column>(n);
        for (int i = 0; i < n; i++) {
            Column column = new Column();
            column.setId((long) i);
            column.setIndex(i);
            columnList.add(column);
        }
        nQueens.setColumnList(columnList);
        List<Row> rowList = new ArrayList<Row>(n);
        for (int i = 0; i < n; i++) {
            Row row = new Row();
            row.setId((long) i);
            row.setIndex(i);
            rowList.add(row);
        }
        nQueens.setRowList(rowList);
        List<Queen> queenList = new ArrayList<Queen>(n);
        long id = 0;
        for (Column column : columnList) {
            Queen queen = new Queen();
            queen.setId(id);
            id++;
            queen.setColumn(column);
            // Notice that we leave the PlanningVariable properties (row) on null
            queenList.add(queen);
        }
        nQueens.setQueenList(queenList);
        return nQueens;
    }

Usually, most of this data comes from your data layer, and your Solution implementation just aggregates that data and creates the uninitialized planning entity instances to plan:

        private void createLectureList(CurriculumCourseSchedule schedule) {
            List<Course> courseList = schedule.getCourseList();
            List<Lecture> lectureList = new ArrayList<Lecture>(courseList.size());
            for (Course course : courseList) {
                for (int i = 0; i < course.getLectureSize(); i++) {
                    Lecture lecture = new Lecture();
                    lecture.setCourse(course);
                    lecture.setLectureIndexInCourse(i);
                    // Notice that we leave the PlanningVariable properties (period and room) on null
                    lectureList.add(lecture);
                }
            }
            schedule.setLectureList(lectureList);
        }

The environment mode allows you to detect common bugs in your implementation.

You can set the environment mode in the solver configuration XML file:


<solver>
  <environmentMode>DEBUG</environmentMode>
  ...
</solver>

A solver has a single Random instance. Some solver configurations use the Random instance a lot more than others. For example simulated annealing depends highly on random numbers, while tabu search only depends on it to deal with score ties. The environment mode influences the seed of that Random instance.

There are 4 environment modes:

A ScoreCalculator instance is asserted into the WorkingMemory as a global called scoreCalculator. Your score rules need to (directly or indirectly) update that instance. Usually you'll make a single rule as an aggregation of the other rules to update the score:

global SimpleScoreCalculator scoreCalculator;

rule "multipleQueensHorizontal"
    when
        $q1 : Queen($id : id, $y : y);
        $q2 : Queen(id > $id, y == $y);
    then
        insertLogical(new UnweightedConstraintOccurrence("multipleQueensHorizontal", $q1, $q2));
end

// multipleQueensVertical is obsolete because it is always 0

rule "multipleQueensAscendingDiagonal"
    when
        $q1 : Queen($id : id, $ascendingD : ascendingD);
        $q2 : Queen(id > $id, ascendingD == $ascendingD);
    then
        insertLogical(new UnweightedConstraintOccurrence("multipleQueensAscendingDiagonal", $q1, $q2));
end

rule "multipleQueensDescendingDiagonal"
    when
        $q1 : Queen($id : id, $descendingD : descendingD);
        $q2 : Queen(id > $id, descendingD == $descendingD);
    then
        insertLogical(new UnweightedConstraintOccurrence("multipleQueensDescendingDiagonal", $q1, $q2));
end

rule "hardConstraintsBroken"
    when
        $occurrenceCount : Number() from accumulate(
            $unweightedConstraintOccurrence : UnweightedConstraintOccurrence(),
            count($unweightedConstraintOccurrence)
        );
    then
        scoreCalculator.setScore(- $occurrenceCount.intValue());
end

Most use cases will also weigh their constraints differently, by multiplying the count of each score rule with its weight. For example in freight routing, you can make 5 broken "avoid crossroads" soft constraints count as much as 1 broken "avoid highways at rush hour" soft constraint. This allows your business analysts to easily tweak the score function as they see fit.

Here's an example from CurriculumCourse, where assigning a Lecture to a Room which is missing 2 seats is weighted equally bad as having 1 isolated Lecture in a Curriculum:

// RoomCapacity: For each lecture, the number of students that attend the course must be less or equal
// than the number of seats of all the rooms that host its lectures.
// Each student above the capacity counts as 1 point of penalty.
rule "roomCapacity"
    when
        ...
    then
        insertLogical(new IntConstraintOccurrence("roomCapacity", ConstraintType.NEGATIVE_SOFT,
                ($studentSize - $capacity),
                ...));
end

// CurriculumCompactness: Lectures belonging to a curriculum should be adjacent
// to each other (i.e., in consecutive periods).
// For a given curriculum we account for a violation every time there is one lecture not adjacent
// to any other lecture within the same day.
// Each isolated lecture in a curriculum counts as 2 points of penalty.
rule "curriculumCompactness"
    when
        ...
    then
        insertLogical(new IntConstraintOccurrence("curriculumCompactness", ConstraintType.NEGATIVE_SOFT,
                2,
                ...));
end


// Accumulate soft constraints
rule "softConstraintsBroken"
        salience -1 // Do the other rules first (optional, for performance)
    when
        $softTotal : Number() from accumulate(
            IntConstraintOccurrence(constraintType == ConstraintType.NEGATIVE_SOFT, $weight : weight),
            sum($weight)
        )
    then
        scoreCalculator.setSoftConstraintsBroken($softTotal.intValue());
end

In case you haven't figured it out yet: performance (and scalability) is very important for solving planning problems well. What good is a real-time freight routing solver that takes a day to find a feasible solution? Even small and innocent looking problems can hide an enormous problem size. For example, the optimal solution of the traveling tournament problem is probably still unknown for as few as 12 traveling teams.

A Solver can use multiple optimization algorithms in sequence. Each optimization algorithm is represented by a SolverPhase. There is never more than 1 SolverPhase solving at the same time.

Here's a configuration that runs 3 phases in sequence:

<solver>
  ...
  <customSolverPhase><!-- Phase 1 -->
    ... <!-- custom construction heuristic -->
  </customSolverPhase>
  <localSearch><!-- Phase 2 -->
    ... <!-- simulated annealing -->
  </localSearch>
  <localSearch><!-- Phase 3 -->
    ... <!-- Tabu search -->
  </localSearch>
</solver>

When the first phase terminates, the second phase starts, and so on. When the last phase terminates, the Solver terminates.

Some phases (especially construction heuristics) will terminate automatically. Other phases (especially metaheuristics) will only terminate if the phase is configured to terminate:

<solver>
  ...
  <termination><!-- Solver termination -->
    <maximumSecondsSpend>90</maximumSecondsSpend>
  </termination>
  <localSearch>
    <termination><!-- Phase termination -->
      <maximumSecondsSpend>60</maximumSecondsSpend><!-- Give the next phase a chance to run too, before the solver terminates -->
    </termination>
    ...
  </localSearch>
  <localSearch>
    ...
  </localSearch>
</solver>

If the Solver terminates (before the last phase terminates itself), the current phase is terminated and all subsequent phases won't run.

The best way to illuminate the black box that is a Solver is to play with the logging level.

Set the logging level on the category org.drools.planner, for example with log4j:

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <category name="org.drools.planner">
        <priority value="debug" />
    </category>

    ...

</log4j:configuration>

For example, set it to DEBUG logging, to see when the phases end and how fast steps are taken:

INFO  Solver started: time spend (0), score (null), new best score (null), random seed (0).
DEBUG     Step index (0), time spend (1), score (0), initialized planning entity (2 @ 0).
DEBUG     Step index (1), time spend (3), score (0), initialized planning entity (1 @ 2).
DEBUG     Step index (2), time spend (4), score (0), initialized planning entity (3 @ 3).
DEBUG     Step index (3), time spend (5), score (-1), initialized planning entity (0 @ 1).
INFO  Phase construction heuristic finished: step total (4), time spend (6), best score (-1).
DEBUG     Step index (0), time spend (10), score (-1),     best score (-1), accepted move size (12) for picked step (1 @ 2 => 3).
DEBUG     Step index (1), time spend (12), score (0), new best score (0), accepted move size (12) for picked step (3 @ 3 => 2).
INFO  Phase local search finished: step total (2), time spend (13), best score (0).
INFO  Solved: time spend (13), best score (0), average calculate count per second (4846).

All time spend values are in milliseconds.

Between phases or before the first phase, you might want to execute a custom action on the Solution to get a better score. Yet you'll still want to reuse the score calculation. For example, to implement a custom construction heuristic without implementing an entire SolverPhase.

Implement the CustomSolverPhaseCommand interface:

public interface CustomSolverPhaseCommand {

    void changeWorkingSolution(SolutionDirector solutionDirector);

}

For example:

public class ExaminationStartingSolutionInitializer implements CustomSolverPhaseCommand {

    public void changeWorkingSolution(SolutionDirector solutionDirector) {
        Examination examination = (Examination) solutionDirector.getWorkingSolution();
        for (Exam exam : examination.getExamList()) {
            Score unscheduledScore = solutionDirector.calculateScoreFromWorkingMemory();
            ...
            for (Period period : examination.getPeriodList()) {
                exam.setPeriod(period);
                workingMemory.update(examHandle, exam);
                Score score = solutionDirector.calculateScoreFromWorkingMemory();
                ...
            }
            ...
        }
    }

}

And configure it like this:

<solver>
  ...
  <customSolverPhase>
    <customSolverPhaseCommandClass>org.drools.planner.examples.examination.solver.solution.initializer.ExaminationStartingSolutionInitializer</customSolverPhaseCommandClass>
  </customSolverPhase>
  ... <!-- Other phases -->
</solver>

It's possible to configure multiple customSolverPhaseCommandClass instances, which will be run in sequence.
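For example, a sketch with 2 commands that run in sequence (the second command class name is purely hypothetical):

<solver>
  ...
  <customSolverPhase>
    <customSolverPhaseCommandClass>org.drools.planner.examples.examination.solver.solution.initializer.ExaminationStartingSolutionInitializer</customSolverPhaseCommandClass>
    <customSolverPhaseCommandClass>org.drools.planner.examples.examination.solver.solution.initializer.MyRepairingCommand</customSolverPhaseCommandClass>
  </customSolverPhase>
  ... <!-- Other phases -->
</solver>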

A move is the change from a solution A to a solution B. For example, below you can see a single move on the starting solution of 4 queens that moves a single queen to another row:


A move can have a small or large impact. In the above example, the move of queen C0 to C2 is a small move. Moves that make the same kind of change belong to the same move type. These are some possibilities for move types in n queens:

  • Move a single queen to another row. This is a small move. For example, move queen C0 to C2.

  • Move all queens a number of rows down or up. This is a big move.

  • Move a single queen to another column. This is a small move. For example, move queen C2 to A0 (placing it on top of queen A0).

  • Add a queen to the board at a certain row and column.

  • Remove a queen from the board.

Because we have decided that all queens will be on the board at all times and each queen has an appointed column (for performance reasons), only the first 2 move types are usable in our example. Furthermore, we're only using the first move type in the example because we think it gives the best performance, but you are welcome to prove us wrong.

Each of your move types will be an implementation of the Move interface:

public interface Move {

    boolean isMoveDoable(WorkingMemory workingMemory);

    Move createUndoMove(WorkingMemory workingMemory);

    void doMove(WorkingMemory workingMemory);

}

Let's take a look at the Move implementation for 4 queens which moves a queen to a different row:

public class RowChangeMove implements Move {

    private Queen queen;
    private Row toRow;

    public RowChangeMove(Queen queen, Row toRow) {
        this.queen = queen;
        this.toRow = toRow;
    }

    // ... see below
}

An instance of RowChangeMove moves a queen from its current row to a different row.

Drools Planner calls the doMove(WorkingMemory) method to do a move. The Move implementation must notify the working memory of any changes it does on the solution facts:

    public void doMove(WorkingMemory workingMemory) {
        FactHandle queenHandle = workingMemory.getFactHandle(queen);
        queen.setRow(toRow);
        workingMemory.update(queenHandle, queen); // after changes are made
    }

You need to call the workingMemory.update(FactHandle, Object) method after modifying the fact. Note that you can alter multiple facts in a single move and effectively create a big move (also known as a coarse-grained move).
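For example, here's a sketch of a hypothetical coarse-grained move for n queens (not part of the example code) which swaps the rows of 2 queens in a single move:

public class RowsSwapMove implements Move {

    private Queen leftQueen;
    private Queen rightQueen;

    public RowsSwapMove(Queen leftQueen, Queen rightQueen) {
        this.leftQueen = leftQueen;
        this.rightQueen = rightQueen;
    }

    public void doMove(WorkingMemory workingMemory) {
        FactHandle leftHandle = workingMemory.getFactHandle(leftQueen);
        FactHandle rightHandle = workingMemory.getFactHandle(rightQueen);
        Row oldLeftRow = leftQueen.getRow();
        leftQueen.setRow(rightQueen.getRow());
        rightQueen.setRow(oldLeftRow);
        // Notify the working memory of every fact that has been changed
        workingMemory.update(leftHandle, leftQueen);
        workingMemory.update(rightHandle, rightQueen);
    }

    // ... isMoveDoable(), createUndoMove(), equals(), hashCode() and toString()
}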

Drools Planner automatically filters out non-doable moves by calling the isMoveDoable(WorkingMemory) method on a move. A non-doable move is:

  • A move that changes nothing on the current solution. For example, moving queen B0 to row 0 is not doable, because it is already there.

  • A move that is impossible to do on the current solution. For example, moving queen B0 to row 10 is not doable because it would move it outside the board limits.

In the n queens example, a move which moves the queen from its current row to the same row isn't doable:

    public boolean isMoveDoable(WorkingMemory workingMemory) {
        return !ObjectUtils.equals(queen.getRow(), toRow);
    }

Because we won't generate a move which can move a queen outside the board limits, we don't need to check it. A move that is currently not doable can become doable on a later solution.

Each move has an undo move: a move (usually of the same type) which does the exact opposite. In the above example the undo move of C0 to C2 would be the move C2 to C0. An undo move can be created from a move, but only before the move has been done on the current solution.

    public Move createUndoMove(WorkingMemory workingMemory) {
        return new RowChangeMove(queen, queen.getRow());
    }

Notice that if C0 had already been moved to C2, the undo move created would be the move C2 to C2, instead of the move C2 to C0.

The local search solver can do and undo a move more than once, even on different (successive) solutions.
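Roughly, the evaluation of a single move follows this pattern (a simplified sketch, not the actual solver code):

        Move undoMove = move.createUndoMove(workingMemory); // create the undo move before doing the move
        move.doMove(workingMemory);
        Score score = solutionDirector.calculateScoreFromWorkingMemory();
        undoMove.doMove(workingMemory); // revert the working solution to its original state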

A move must implement the equals() and hashCode() methods. 2 moves which make the same change on a solution must be equal.

    public boolean equals(Object o) {
        if (this == o) {
            return true;
        } else if (o instanceof RowChangeMove) {
            RowChangeMove other = (RowChangeMove) o;
            return new EqualsBuilder()
                    .append(queen, other.queen)
                    .append(toRow, other.toRow)
                    .isEquals();
        } else {
            return false;
        }
    }
    public int hashCode() {
        return new HashCodeBuilder()
                .append(queen)
                .append(toRow)
                .toHashCode();
    }

In the above example, the Queen class uses the default Object equals() and hashCode() implementations. Notice that it checks if the other move is an instance of the same move type. This is important because a move will be compared to a move of another move type if you're using more than 1 move type.

It's also recommended to implement the toString() method as it allows you to read Drools Planner's logging more easily:

    public String toString() {
        return queen + " => " + toRow;
    }

Now that we can make a single move, let's take a look at generating moves.

At each solution, local search will try all possible moves and pick the best move to change to the next solution. It's up to you to generate those moves. Let's take a look at all the possible moves on the starting solution of 4 queens:


As you can see, not all the moves are doable. At the starting solution we have 12 doable moves (n * (n - 1)), one of which will be the move that changes the starting solution into the next solution. Notice that the number of possible solutions is 256 (n ^ n), much more than the number of doable moves. Don't create a move to every possible solution. Instead use moves which can be sequentially combined to reach every possible solution.

It's highly recommended that you verify all solutions are connected by your move set. This means that by combining a finite number of moves you can reach any solution from any solution. Otherwise you're already excluding solutions at the start. Especially if you're using only big moves, you should check it. Just because big moves outperform small moves in a short test run, it doesn't mean that they will outperform them in a long test run.

You can mix different move types. Usually you're better off preferring small (fine-grained) moves over big (coarse-grained) moves because the score delta calculation will pay off more. However, as the traveling tournament example proves, if you can remove a hard constraint by using a certain set of big moves, you can gain performance and scalability. Try it yourself: run both the simple (small moves) and the smart (big moves) version of the traveling tournament example. The smart version evaluates a lot less infeasible solutions, which enables it to outperform and outscale the simple version.

Move generation currently happens with a MoveFactory:

public class NQueensMoveFactory extends CachedMoveListMoveFactory {

    public List<Move> createMoveList(Solution solution) {
        NQueens nQueens = (NQueens) solution;
        List<Move> moveList = new ArrayList<Move>();
        for (Queen queen : nQueens.getQueenList()) {
            for (Row toRow : nQueens.getRowList()) {
                moveList.add(new RowChangeMove(queen, toRow));
            }
        }
        return moveList;
    }

}

But we might make move generation part of the DRL in the future.

A step is the winning move. The local search solver tries every move on the current solution and picks the best accepted move as the step:


Because the move B0 to B3 has the highest score (-3), it is picked as the next step. Notice that C0 to C3 (not shown) could also have been picked because it also has the score -3. If multiple moves have the same highest score, one is picked randomly, in this case B0 to B3.

The step is made and from that new solution, the local search solver tries all the possible moves again, to decide the next step after that. It continually does this in a loop, and we get something like this:


Notice that the local search solver doesn't use a search tree, but a search path. The search path is highlighted by the green arrows. At each step it tries all possible moves, but unless it's the step, it doesn't investigate that solution further. This is one of the reasons why local search is very scalable.

As you can see, the local search solver solves the 4 queens problem by starting with the starting solution and making the following steps sequentially:

  1. B0 to B3

  2. D0 to B2

  3. A0 to B1

If we turn on DEBUG logging for the category org.drools.planner, then those steps are shown in the log:

INFO  Solver started: time spend (0), score (-6), new best score (-6), random seed (0).
DEBUG     Step index (0), time spend (20), score (-3), new best score (-3), accepted move size (12) for picked step (1 @ 0 => 3).
DEBUG     Step index (1), time spend (31), score (-1), new best score (-1), accepted move size (12) for picked step (0 @ 0 => 1).
DEBUG     Step index (2), time spend (40), score (0), new best score (0), accepted move size (12) for picked step (3 @ 0 => 2).
INFO  Phase local search finished: step total (3), time spend (41), best score (0).
INFO  Solved: time spend (41), best score (0), average calculate count per second (1780).

Notice that the logging uses the toString() method of our Move implementation: 1 @ 0 => 3.

The local search solver solves the 4 queens problem in 3 steps, by evaluating only 37 possible solutions (3 steps with 12 moves each + 1 starting solution), which is only a fraction of all 256 possible solutions. It solves 16 queens in 31 steps, by evaluating only 7441 out of 18446744073709551616 possible solutions. Note: with construction heuristics it's even a lot more efficient.

The local search solver decides the next step with the aid of 3 configurable components: the Selector, the Acceptor and the Forager.


In the above example the selector generated the moves shown with the blue lines, the acceptor accepted all of them and the forager picked the move B0 to B3.

If we turn on TRACE logging for the category org.drools.planner, then the decision making is shown in the log:

INFO  Solver started: time spend (0), score (-6), new best score (-6), random seed (0).
TRACE         Ignoring not doable move ([Queen-0] 0 @ 0 => 0).
TRACE         Move score (-4), accept chance (1.0) for move (0 @ 0 => 1).
TRACE         Move score (-4), accept chance (1.0) for move (0 @ 0 => 2).
TRACE         Move score (-4), accept chance (1.0) for move (0 @ 0 => 3).
...
TRACE         Move score (-3), accept chance (1.0) for move (1 @ 0 => 3).
...
TRACE         Move score (-3), accept chance (1.0) for move (2 @ 0 => 3).
...
TRACE         Move score (-4), accept chance (1.0) for move (3 @ 0 => 3).
DEBUG     Step index (0), time spend (6), score (-3), new best score (-3), accepted move size (12) for picked step (1 @ 0 => 3).
...

An acceptor is used (together with a forager) to activate tabu search, simulated annealing, great deluge, ... For each move it generates an accept chance. If a move is rejected, it is given an accept chance of 0.0.

You can implement your own Acceptor, although the built-in acceptors should suffice for most needs. You can also combine multiple acceptors.

When tabu search takes steps it creates tabus. It does not accept a move as the next step if that move breaks tabu. Drools Planner implements several tabu types: solution tabu, move tabu, undo move tabu and property tabu.

You can even combine tabu types:


    <acceptor>
        <completeSolutionTabuSize>1000</completeSolutionTabuSize>
        <completeMoveTabuSize>7</completeMoveTabuSize>
    </acceptor>

If you pick a too small tabu size, your solver can still get stuck in a local optimum. On the other hand, with the exception of solution tabu, if you pick a too large tabu size, your solver can get stuck by bouncing off the walls. Use the benchmarker to fine-tune your configuration. Experiments teach us that it is generally best to use a prime number for the move tabu, undo move tabu or property tabu size.

A tabu search acceptor should be combined with a high or no subset selection.
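For example, a tabu search acceptor combined with a forager that evaluates all selected moves (no minimalAcceptedSelection), similar to the benchmark configurations later in this manual:

    <acceptor>
        <completeMoveTabuSize>7</completeMoveTabuSize>
    </acceptor>
    <forager>
        <pickEarlyType>NEVER</pickEarlyType>
    </forager>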

Simulated annealing does not always pick the move with the highest score, neither does it evaluate many moves per step, at least not at first. Instead, it also gives non-improving moves a chance to be picked, depending on their score and the time gradient of the Termination. In the end, it gradually turns into a hill climber, only accepting improving moves.

In many use cases, simulated annealing surpasses tabu search. By changing a few lines of configuration, you can easily switch from tabu search to simulated annealing and back.

Start with a simulatedAnnealingStartingTemperature set to the maximum score delta a single move can cause. Use the Benchmarker to tweak the value.


    <acceptor>
      <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
    </acceptor>
    <forager>
        <minimalAcceptedSelection>4</minimalAcceptedSelection>
    </forager>

A simulated annealing acceptor should be combined with a low subset selection. The classic algorithm uses a minimalAcceptedSelection of 1, but usually 4 performs better.

You can even combine it with a tabu acceptor at the same time. Use a lower tabu size than in a pure tabu search configuration.


    <acceptor>
      <simulatedAnnealingStartingTemperature>10.0</simulatedAnnealingStartingTemperature>
      <completePropertyTabuSize>5</completePropertyTabuSize>
    </acceptor>
    <forager>
        <minimalAcceptedSelection>4</minimalAcceptedSelection>
    </forager>

This differs from phasing, another powerful technique, where first simulated annealing is used, followed by tabu search.
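A sketch of such a phased configuration, reusing the elements shown above (each local search phase would also need its own termination, as explained in the SolverPhase section; those are omitted here):

<solver>
  ...
  <localSearch><!-- Phase 1: simulated annealing -->
    <acceptor>
      <simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
    </acceptor>
    <forager>
      <minimalAcceptedSelection>4</minimalAcceptedSelection>
    </forager>
  </localSearch>
  <localSearch><!-- Phase 2: tabu search -->
    <acceptor>
      <completePropertyTabuSize>5</completePropertyTabuSize>
    </acceptor>
    <forager>
      <pickEarlyType>NEVER</pickEarlyType>
    </forager>
  </localSearch>
</solver>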

A forager gathers all accepted moves and picks the move which is the next step. Normally it picks the accepted move with the highest score. If several accepted moves have the highest score, one is picked randomly, weighted on their accept chance.

You can implement your own Forager, although the built-in forager should suffice for most needs.

Sooner or later the local search solver will have to stop solving. This can be because of a number of reasons: the time is up, the perfect score has been reached, ... The only thing you can't depend on is finding the optimal solution (unless you know the optimal score), because a local search algorithm doesn't recognize the optimal solution when it finds it. For real-life problems this doesn't turn out to be much of a problem, because finding the optimal solution would take billions of years, so you'll want to terminate sooner anyway.

You can configure when a local search solver needs to stop by configuring a Termination. A Termination can calculate a time gradient, which is a ratio between the time already spent solving and the expected total solving time.

You can implement your own Termination, although the built-in Terminations should suffice for most needs.
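For example, to stop as soon as the perfect score is reached but never spend more than 2 minutes, you can combine 2 Terminations in one termination element (the scoreAttained element name is an assumption, derived from the ScoreAttainedTermination class name):

  <termination>
    <maximumSecondsSpend>120</maximumSecondsSpend>
    <scoreAttained>0</scoreAttained>
  </termination>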

The Benchmarker is currently in the drools-planner-core module, but it requires an extra dependency on the JFreeChart library.

If you use maven, add a dependency in your pom.xml file:

    <dependency>
      <groupId>jfree</groupId>
      <artifactId>jfreechart</artifactId>
      <version>1.0.13</version>
    </dependency>

This is similar for gradle, ivy and buildr.

If you use ANT, you've probably already copied the required jars from the download zip's binaries directory.

You can build a Benchmarker instance with the XmlSolverBenchmarker. Configure it with a benchmarker configuration XML file:

    XmlSolverBenchmarker benchmarker = new XmlSolverBenchmarker();
    benchmarker.configure("/org/drools/planner/examples/nqueens/benchmark/nqueensSolverBenchmarkConfig.xml");
    benchmarker.benchmark();
    benchmarker.writeResults(resultFile);

A basic benchmarker configuration file looks something like this:


<?xml version="1.0" encoding="UTF-8"?>
<solverBenchmarkSuite>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
  <solverStatisticType>BEST_SOLUTION_CHANGED</solverStatisticType>
  <warmUpSecondsSpend>30</warmUpSecondsSpend>

  <inheritedSolverBenchmark>
    <unsolvedSolutionFile>data/nqueens/unsolved/unsolvedNQueens32.xml</unsolvedSolutionFile>
    <unsolvedSolutionFile>data/nqueens/unsolved/unsolvedNQueens64.xml</unsolvedSolutionFile>
    <solver>
      <solutionClass>org.drools.planner.examples.nqueens.domain.NQueens</solutionClass>
      <planningEntityClass>org.drools.planner.examples.nqueens.domain.Queen</planningEntityClass>
      <scoreDrl>/org/drools/planner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
      <scoreDefinition>
        <scoreDefinitionType>SIMPLE</scoreDefinitionType>
      </scoreDefinition>
      <termination>
        <maximumSecondsSpend>20</maximumSecondsSpend>
      </termination>
      <constructionHeuristic>
        <constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
        <constructionHeuristicPickEarlyType>FIRST_LAST_STEP_SCORE_EQUAL_OR_IMPROVING</constructionHeuristicPickEarlyType>
      </constructionHeuristic>
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmark>
    <name>Solution tabu</name>
    <solver>
      <localSearch>
        <selector>
          <moveFactoryClass>org.drools.planner.examples.nqueens.solver.move.factory.RowChangeMoveFactory</moveFactoryClass>
        </selector>
        <acceptor>
          <completeSolutionTabuSize>1000</completeSolutionTabuSize>
        </acceptor>
        <forager>
          <pickEarlyType>NEVER</pickEarlyType>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Move tabu</name>
    <solver>
      <localSearch>
        <selector>
          <moveFactoryClass>org.drools.planner.examples.nqueens.solver.move.factory.RowChangeMoveFactory</moveFactoryClass>
        </selector>
        <acceptor>
          <completeMoveTabuSize>5</completeMoveTabuSize>
        </acceptor>
        <forager>
          <pickEarlyType>NEVER</pickEarlyType>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Property tabu</name>
    <solver>
      <localSearch>
        <selector>
          <moveFactoryClass>org.drools.planner.examples.nqueens.solver.move.factory.RowChangeMoveFactory</moveFactoryClass>
        </selector>
        <acceptor>
          <completePropertyTabuSize>5</completePropertyTabuSize>
        </acceptor>
        <forager>
          <pickEarlyType>NEVER</pickEarlyType>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
</solverBenchmarkSuite>

This benchmarker will try 3 configurations (1 solution tabu, 1 move tabu and 1 property tabu) on 2 data sets (32 and 64 queens), so it will run 6 solvers.

Every solverBenchmark entity contains a solver configuration (for example a local search solver) and one or more unsolvedSolutionFile entities. It will run the solver configuration on each of those unsolved solution files. A name is optional and generated if absent.

The common part of multiple solverBenchmark entities can be extracted to the inheritedSolverBenchmark entity, but that can still be overwritten per solverBenchmark entity. Note that inherited solver phases such as <constructionHeuristic> or <localSearch> are not overwritten but instead are added to the head of the solver phases list.

You need to specify a benchmarkDirectory (relative to the working directory). The best solution of each solver run and a handy overview HTML webpage will be written in that directory.

The benchmarker supports outputting statistics as graphs and CSV (comma separated values) files to the benchmarkDirectory.

To configure graph and CSV output of a statistic, just add a solverStatisticType line:


<solverBenchmarkSuite>
  <benchmarkDirectory>local/data/nqueens/solved</benchmarkDirectory>
  <solverStatisticType>BEST_SOLUTION_CHANGED</solverStatisticType>
  ...
</solverBenchmarkSuite>

Multiple solverStatisticType entries are allowed. Some statistic types might influence performance noticeably. The following types are supported:

Continuous planning is the technique of planning one or more upcoming planning windows at the same time and repeating that process every week (or every day). Because time is infinite, there is an infinite number of future windows, so planning all future windows is impossible. Instead we plan only a number of upcoming planning windows.

Past planning windows are immutable. The first upcoming planning window is considered stable (unlikely to change), while later upcoming planning windows are considered draft (likely to change during the next planning effort). Distant future planning windows are not planned at all.

Past planning windows have locked planning entities: the planning entities can no longer be changed (they are locked in place), but some of them are still needed in the working memory, as they might affect some of the score constraints that apply on the upcoming planning entities. For example: when an employee should not work more than 5 days in a row, he shouldn't work today and tomorrow if he worked the past 4 days already.

Sometimes some planning entities are semi-locked: they can be changed, but incur a certain score penalty if they differ from their original place. For example: avoid rescheduling hospital beds less than 2 days before the patient arrives (unless it's really worth it), avoid changing the airplane gate (or worse, the terminal) during the 2 hours before boarding, ...


Notice the difference between the original planning of November 1st and the new planning of November 5th: some planning facts (F, H, I, J, K) changed, which results in unrelated planning entities (G) changing too.

To do real-time planning, first combine backup planning and continuous planning with short planning windows to lower the burden of real-time planning. Don't configure any Termination, just terminate early when you need the results or subscribe to the BestSolutionChangedEvent (the latter doesn't guarantee yet that every ProblemFactChange has been processed already).

While the Solver is solving, an outside event might want to change one of the problem facts, for example an airplane is delayed and needs the runway at a later time. Do not change the problem fact instances used by the Solver while it is solving, as that will corrupt it. Instead, add a ProblemFactChange to the Solver which it will execute as soon as the timing is right.

public interface Solver {

   ...

   boolean addProblemFactChange(ProblemFactChange problemFactChange);

   ...

}
public interface ProblemFactChange {

    void doChange(SolutionDirector solutionDirector);

}

Here's an example:

    public void deleteComputer(final CloudComputer cloudComputer) {
        solver.addProblemFactChange(new ProblemFactChange() {
            public void doChange(SolutionDirector solutionDirector) {
                CloudBalance cloudBalance = (CloudBalance) solutionDirector.getWorkingSolution();
                WorkingMemory workingMemory = solutionDirector.getWorkingMemory();
                // First remove the planning fact from all planning entities that use it
                for (CloudAssignment cloudAssignment : cloudBalance.getCloudAssignmentList()) {
                    if (ObjectUtils.equals(cloudAssignment.getCloudComputer(), cloudComputer)) {
                        FactHandle cloudAssignmentHandle = workingMemory.getFactHandle(cloudAssignment);
                        cloudAssignment.setCloudComputer(null);
                        workingMemory.retract(cloudAssignmentHandle);
                    }
                }
                // Next remove the planning fact itself
                for (Iterator<CloudComputer> it = cloudBalance.getCloudComputerList().iterator(); it.hasNext(); ) {
                    CloudComputer workingCloudComputer = it.next();
                    if (ObjectUtils.equals(workingCloudComputer, cloudComputer)) {
                        FactHandle cloudComputerHandle = workingMemory.getFactHandle(workingCloudComputer);
                        workingMemory.retract(cloudComputerHandle);
                        it.remove(); // remove from list
                        break;
                    }
                }
            }
        });
    }

In essence, the Solver will stop, run the ProblemFactChange and restart. Each SolverPhase will be run again. Each configured Termination (except terminateEarly) will reset.