Drools Planner supports several optimization algorithms, so you're probably wondering which one is best. Although some optimization algorithms generally perform better than others, it really depends on your problem domain. Most solver phases have parameters that can be tweaked. Those parameters can influence the results significantly, even though most solver phases work pretty well out-of-the-box.
Luckily, Drools Planner includes a benchmarker, which lets you pit different solver phases with different settings against each other, so you can pick the best configuration for your planning problem.
The benchmarker is in a separate artifact called drools-planner-benchmark.
If you use Maven, add a dependency in your pom.xml file:
<dependency>
  <groupId>org.drools.planner</groupId>
  <artifactId>drools-planner-benchmark</artifactId>
  <version>...</version>
</dependency>
This is similar for Gradle, Ivy and Buildr. The version must be exactly the same as the drools-planner-core version used.
If you use Ant, you've probably already copied the required jars from the download zip's binaries directory.
You can build a PlannerBenchmark instance with the XmlPlannerBenchmarkFactory. Configure it with a benchmark configuration XML file:
XmlPlannerBenchmarkFactory plannerBenchmarkFactory = new XmlPlannerBenchmarkFactory();
plannerBenchmarkFactory.configure("/org/drools/planner/examples/nqueens/benchmark/nqueensBenchmarkConfig.xml");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
plannerBenchmark.benchmark();
A basic benchmark configuration file looks something like this:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens</benchmarkDirectory>
  <!--<parallelBenchmarkCount>AUTO</parallelBenchmarkCount>-->
  <warmUpSecondsSpend>30</warmUpSecondsSpend>

  <inheritedSolverBenchmark>
    <problemBenchmarks>
      <xstreamAnnotatedClass>org.drools.planner.examples.nqueens.domain.NQueens</xstreamAnnotatedClass>
      <inputSolutionFile>data/nqueens/unsolved/unsolvedNQueens32.xml</inputSolutionFile>
      <inputSolutionFile>data/nqueens/unsolved/unsolvedNQueens64.xml</inputSolutionFile>
      <problemStatisticType>BEST_SOLUTION_CHANGED</problemStatisticType>
    </problemBenchmarks>
    <solver>
      <solutionClass>org.drools.planner.examples.nqueens.domain.NQueens</solutionClass>
      <planningEntityClass>org.drools.planner.examples.nqueens.domain.Queen</planningEntityClass>
      <scoreDirectorFactory>
        <scoreDefinitionType>SIMPLE</scoreDefinitionType>
        <scoreDrl>/org/drools/planner/examples/nqueens/solver/nQueensScoreRules.drl</scoreDrl>
      </scoreDirectorFactory>
      <termination>
        <maximumSecondsSpend>20</maximumSecondsSpend>
      </termination>
      <constructionHeuristic>
        <constructionHeuristicType>FIRST_FIT_DECREASING</constructionHeuristicType>
        <constructionHeuristicPickEarlyType>FIRST_LAST_STEP_SCORE_EQUAL_OR_IMPROVING</constructionHeuristicPickEarlyType>
      </constructionHeuristic>
    </solver>
  </inheritedSolverBenchmark>

  <solverBenchmark>
    <name>Entity tabu</name>
    <solver>
      <localSearch>
        <changeMoveSelector>
          <selectionOrder>ORIGINAL</selectionOrder>
        </changeMoveSelector>
        <acceptor>
          <planningEntityTabuSize>5</planningEntityTabuSize>
        </acceptor>
        <forager>
          <pickEarlyType>NEVER</pickEarlyType>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Value tabu</name>
    <solver>
      <localSearch>
        <changeMoveSelector>
          <selectionOrder>ORIGINAL</selectionOrder>
        </changeMoveSelector>
        <acceptor>
          <planningValueTabuSize>5</planningValueTabuSize>
        </acceptor>
        <forager>
          <pickEarlyType>NEVER</pickEarlyType>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
  <solverBenchmark>
    <name>Move tabu</name>
    <solver>
      <localSearch>
        <changeMoveSelector>
          <selectionOrder>ORIGINAL</selectionOrder>
        </changeMoveSelector>
        <acceptor>
          <moveTabuSize>5</moveTabuSize>
        </acceptor>
        <forager>
          <pickEarlyType>NEVER</pickEarlyType>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
</plannerBenchmark>
This PlannerBenchmark will try 3 configurations (1 entity tabu, 1 value tabu and 1 move tabu) on 2 data sets (32 and 64 queens), so it will run 6 solvers.
Every solverBenchmark element contains a solver configuration (for example with a local search solver phase) and one or more inputSolutionFile elements. It will run the solver configuration on each of those unsolved solution files. The name element is optional: if absent, a name is generated. Each inputSolutionFile is read by a ProblemIO.
To lower verbosity, the common parts of multiple solverBenchmark elements can be extracted into the inheritedSolverBenchmark element. Every inherited element can still be overwritten per solverBenchmark element, as shown in the sketch below. Note that inherited solver phases such as <constructionHeuristic> or <localSearch> are not overwritten, but are instead appended to the tail of the solver phases list.
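For example, a solverBenchmark could overwrite the inherited termination with a longer one. This is a hedged sketch: the name and the 60 second value are illustrative, not part of the configuration above:
<solverBenchmark>
  <name>Entity tabu (longer run)</name>
  <solver>
    <!-- Overwrites the inherited 20 second termination -->
    <termination>
      <maximumSecondsSpend>60</maximumSecondsSpend>
    </termination>
    ...
  </solver>
</solverBenchmark>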
You need to specify a benchmarkDirectory (relative to the working directory). A benchmark
      report will be written in that directory.
It's recommended that the benchmarkDirectory is ignored by source control and not cleaned by your build system. That way the generated files don't bloat your source control and aren't lost when doing a build. Usually that directory is called local.
The benchmarker needs to be able to read each input file into a Solution and to write the best Solution of each benchmark to an output file. For that it uses a class that implements the ProblemIO interface:
public interface ProblemIO {

    String getFileExtension();

    Solution read(File inputSolutionFile);

    void write(Solution solution, File outputSolutionFile);

}
Your input files need to have been written with the same ProblemIO class that the benchmarker uses to read them.
By default, a benchmarker uses an XStreamProblemIO instance to read and write solutions.
You need to tell the benchmarker about your Solution class, which is annotated with XStream annotations:
<problemBenchmarks>
  <xstreamAnnotatedClass>org.drools.planner.examples.nqueens.domain.NQueens</xstreamAnnotatedClass>
  <inputSolutionFile>data/nqueens/unsolved/unsolvedNQueens32.xml</inputSolutionFile>
  ...
</problemBenchmarks>
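For illustration, here is a minimal, self-contained sketch of how those XStream annotations drive the serialization. The NQueensExample class and its field are hypothetical, only meant to show the mechanism:
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.annotations.XStreamAlias;

// Hypothetical class, only to demonstrate the annotation mechanism.
@XStreamAlias("nQueens")
public class NQueensExample {

    private int n = 32;

    public static void main(String[] args) {
        XStream xstream = new XStream();
        // Without this call, the @XStreamAlias annotation is ignored.
        xstream.processAnnotations(NQueensExample.class);
        // Serializes the instance as XML with the aliased root element <nQueens>
        System.out.println(xstream.toXML(new NQueensExample()));
    }
}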
Your input files need to have been written with an XStreamProblemIO instance, not just any XStream instance, because XStreamProblemIO uses a customized XStream instance.
XStream (and XML in general) is a very verbose format. Reading or writing large datasets in this format can cause an OutOfMemoryError and performance degradation.
Alternatively, you can write your own ProblemIO implementation and configure it with the problemIOClass element:
<problemBenchmarks>
  <problemIOClass>org.drools.planner.examples.machinereassignment.persistence.MachineReassignmentProblemIO</problemIOClass>
  <inputSolutionFile>data/machinereassignment/input/model_a1_1.txt</inputSolutionFile>
  ...
</problemBenchmarks>
A ProblemIO implementation must be thread-safe.
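For illustration, here is a hedged sketch of such a custom ProblemIO. The MachineReassignmentImporter and MachineReassignmentExporter helper classes are hypothetical, and the exact package of the ProblemIO interface may differ per version, so its import is omitted:
import java.io.File;

import org.drools.planner.core.solution.Solution;

// Hedged sketch of a text-based ProblemIO. Keep it stateless:
// a ProblemIO implementation must be thread-safe, because parallel
// benchmarks may share one instance.
public class MachineReassignmentProblemIO implements ProblemIO {

    public String getFileExtension() {
        return "txt"; // matches input files such as model_a1_1.txt
    }

    public Solution read(File inputSolutionFile) {
        // Parse the text format into a Solution instance (hypothetical helper)
        return new MachineReassignmentImporter().readSolution(inputSolutionFile);
    }

    public void write(Solution solution, File outputSolutionFile) {
        // Serialize the best Solution back into the text format (hypothetical helper)
        new MachineReassignmentExporter().writeSolution(solution, outputSolutionFile);
    }
}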
The best solution of each benchmark run can be written to a file in the benchmarkDirectory. By default, this is disabled, because those files are rarely used and considered bloat. Also, on large datasets, writing the best solution of every single benchmark can take quite some time and memory (potentially causing an OutOfMemoryError), especially in a verbose format like XStream.
You can enable writing the output solution to the benchmarkDirectory with writeOutputSolutionEnabled:
<problemBenchmarks>
  ...
  <writeOutputSolutionEnabled>true</writeOutputSolutionEnabled>
  ...
</problemBenchmarks>
Without a warm up, the results of the first (or first few) benchmarks are not reliable, because they will have lost CPU time on HotSpot JIT compilation (and possibly DRL compilation too).
To avoid that distortion, the benchmarker can run some of the benchmarks for a specified amount of time before running the real benchmarks. Generally, a warm up of 30 seconds suffices:
<plannerBenchmark>
  ...
  <warmUpSecondsSpend>30</warmUpSecondsSpend>
  ...
</plannerBenchmark>
After running a benchmark, an HTML report is written in the benchmarkDirectory with the filename index.html. Open it in your browser. It gives a nice overview of your benchmark, including:
Summary statistics: graphs and tables
Problem statistics per inputSolutionFile
Each solver configuration (ranked): easy to copy and paste.
Benchmark information
The HTML report uses your default locale to format numbers. If you need to share the benchmark report with people from another country, you might want to override the benchmarkReportLocale:
<plannerBenchmark>
  ...
  <benchmarkReportLocale>en_US</benchmarkReportLocale>
  ...
</plannerBenchmark>
The benchmarker supports outputting problem statistics as graphs and CSV (comma separated values) files to the benchmarkDirectory. To configure graph and CSV output of a statistic, just add a problemStatisticType element:
<plannerBenchmark>
  <benchmarkDirectory>local/data/nqueens/solved</benchmarkDirectory>
  <inheritedSolverBenchmark>
    <problemBenchmarks>
      ...
      <problemStatisticType>BEST_SOLUTION_CHANGED</problemStatisticType>
      <problemStatisticType>CALCULATE_COUNT_PER_SECOND</problemStatisticType>
    </problemBenchmarks>
    ...
  </inheritedSolverBenchmark>
  ...
</plannerBenchmark>
Multiple problemStatisticType elements are allowed. Some statistic types might
        influence performance and benchmark results noticeably. The following types are supported:
To see how the best score evolves over time, add BEST_SOLUTION_CHANGED as a
        problemStatisticType.
The best score over time statistic is very useful to detect abnormalities, such as score traps.
Don't be fooled by the simulated annealing line in this graph. If you give simulated annealing only 5 minutes, it might still be better than 5 minutes of tabu search. That's because this simulated annealing implementation automatically determines its velocity based on the amount of time that can be spent. On the other hand, for the tabu search, what you see is what you'd get.
To see how fast the scores are calculated, add CALCULATE_COUNT_PER_SECOND as a
        problemStatisticType.
The initial high calculate count is typical during solution initialization. In this example, it's far easier to calculate the score of a solution if only a handful of exams have been added, in contrast to all of them. After those few seconds of initialization, the calculate count is relatively stable, apart from an occasional stop-the-world garbage collector disruption.
The benchmark report automatically ranks the solvers. The Solver with rank
      0 is called the favorite Solver: it performs best overall, but it might not
      be the best on every problem. It's recommended to use that favorite Solver in
      production.
However, there are different ways of ranking the solvers. You can configure which one is used:
<plannerBenchmark>
  ...
  <solverBenchmarkRankingType>TOTAL_SCORE</solverBenchmarkRankingType>
  ...
</plannerBenchmark>
The following solverBenchmarkRankingTypes are supported:
TOTAL_SCORE (default): Maximize the overall score, so minimize the overall cost if all solutions were executed.
WORST_SCORE: Minimize the worst case scenario.
TOTAL_RANKING: Maximize the overall ranking. Use this if your datasets differ greatly
          in size or difficulty, producing a difference in Score magnitude.
You can also use a custom ranking, by implementing a Comparator:
<solverBenchmarkRankingComparatorClass>...TotalScoreSolverBenchmarkRankingComparator</solverBenchmarkRankingComparatorClass>
Or a weight factory:
<solverBenchmarkRankingWeightFactoryClass>...TotalRankSolverBenchmarkRankingWeightFactory</solverBenchmarkRankingWeightFactoryClass>
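For illustration, a hedged sketch of such a custom ranking comparator. The SolverBenchmark class and its getTotalScore() accessor are assumptions here; check the drools-planner-benchmark API of your version:
import java.util.Comparator;

// Hedged sketch: rank solver benchmarks by their total score over all
// data sets. SolverBenchmark and getTotalScore() are assumptions.
public class MyTotalScoreComparator implements Comparator<SolverBenchmark> {

    public int compare(SolverBenchmark a, SolverBenchmark b) {
        // Score implements Comparable, so delegate the comparison to it
        return a.getTotalScore().compareTo(b.getTotalScore());
    }
}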
If you have multiple processors available on your computer, you can run multiple benchmarks in parallel on multiple threads to get your benchmark results faster:
<plannerBenchmark>
  ...
  <parallelBenchmarkCount>AUTO</parallelBenchmarkCount>
  ...
</plannerBenchmark>
Running too many benchmarks in parallel will affect the results of benchmarks negatively. Leave some processors unused for garbage collection and other processes.
We tweak the parallelBenchmarkCount AUTO formula to maximize the reliability and efficiency of the benchmark results.
The following parallelBenchmarkCounts are supported:
1 (default): Run all benchmarks sequentially.
AUTO: Let Planner decide how many benchmarks to run in parallel. This formula is
            based on experience. It's recommended to prefer this over the other parallel enabling options.
Static number: The number of benchmarks to run in parallel.
<parallelBenchmarkCount>2</parallelBenchmarkCount>
JavaScript formula: Formula for the number of benchmarks to run in parallel. It can use the variable
            availableProcessorCount. For example:
<parallelBenchmarkCount>(availableProcessorCount / 2) + 1</parallelBenchmarkCount>
The parallelBenchmarkCount is always limited to the number of available processors.
          If it's higher, it will be automatically decreased.
In the future, we will also support multi-JVM benchmarking. This feature is independent of multi-threaded solving or multi-JVM solving.
Matrix benchmarking is benchmarking a combination of value sets. For example: benchmark 4
      planningEntityTabuSize values (5, 7,
      11 and 13) combined with 3 minimalAcceptedSelection
      values (500, 1000 and 2000), resulting in 12 solver
      configurations.
To reduce the verbosity of such a benchmark configuration, you can use a FreeMarker template for the benchmark configuration instead:
<plannerBenchmark>
  ...
  <inheritedSolverBenchmark>
    ...
  </inheritedSolverBenchmark>

<#list [5, 7, 11, 13] as planningEntityTabuSize>
<#list [500, 1000, 2000] as minimalAcceptedSelection>
  <solverBenchmark>
    <name>entityTabu ${planningEntityTabuSize} acceptedSelection ${minimalAcceptedSelection}</name>
    <solver>
      <localSearch>
        <unionMoveSelector>
          <changeMoveSelector/>
          <swapMoveSelector/>
        </unionMoveSelector>
        <acceptor>
          <planningEntityTabuSize>${planningEntityTabuSize}</planningEntityTabuSize>
        </acceptor>
        <forager>
          <minimalAcceptedSelection>${minimalAcceptedSelection}</minimalAcceptedSelection>
        </forager>
      </localSearch>
    </solver>
  </solverBenchmark>
</#list>
</#list>
</plannerBenchmark>
And configure it with the configureFromTemplate method:
XmlPlannerBenchmarkFactory plannerBenchmarkFactory = new XmlPlannerBenchmarkFactory();
plannerBenchmarkFactory.configureFromTemplate("/org/drools/planner/examples/cloudbalancing/benchmark/cloudBalancingBenchmarkConfigTemplate.xml.ftl");
PlannerBenchmark plannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark();
plannerBenchmark.benchmark();