Local search tends to find a feasible solution relatively fast. Because it acts very much like a human, it is also pretty natural to program.
Local search solves a problem by making a move on the current solution which changes it into a better solution. It does that for a high number of iterations, until its time runs out and it is satisfied with the solution. It needs to start from an initialized solution, therefore it's recommended to configure a construction heuristic solver phase before it.
A move is the change from a solution A to a solution B. For example, below you can see a single move on the starting solution of 4 queens that moves a single queen to another row:
A move can have a small or large impact. In the above example, the move of queen C0 to C2 is a small move. Moves can be grouped into move types. These are some possible move types in n queens:
Move a single queen to another row. This is a small move. For example, move queen C0 to C2.
Move all queens a number of rows down or up. This is a big move.
Move a single queen to another column. This is a small move. For example, move queen C2 to A0 (placing it on top of queen A0).
Add a queen to the board at a certain row and column.
Remove a queen from the board.
Because we have decided that all queens will be on the board at all times and each queen has an appointed column (for performance reasons), only the first 2 move types are usable in our example. Furthermore, we're only using the first move type in the example because we think it gives the best performance, but you are welcome to prove us wrong.
Each of your move types will be an implementation of the Move
interface:
public interface Move {
    boolean isMoveDoable(WorkingMemory workingMemory);
    Move createUndoMove(WorkingMemory workingMemory);
    void doMove(WorkingMemory workingMemory);
}
Let's take a look at the Move
implementation for 4 queens which moves a queen to a
different row:
public class YChangeMove implements Move {
private Queen queen;
private int toY;
public YChangeMove(Queen queen, int toY) {
this.queen = queen;
this.toY = toY;
}
// ... see below
}
An instance of YChangeMove
moves a queen from its current y to a different y.
Drools Planner calls the doMove(WorkingMemory)
method to do a move. The
Move
implementation must notify the working memory of any changes it does on the solution
facts:
public void doMove(WorkingMemory workingMemory) {
FactHandle queenHandle = workingMemory.getFactHandle(queen);
queen.setY(toY);
workingMemory.update(queenHandle, queen); // after changes are made
}
You need to call the workingMemory.update(FactHandle, Object)
method after modifying the
fact. Note that you can alter multiple facts in a single move and effectively create a big move (also known as a
coarse-grained move).
Drools Planner automatically filters out non-doable moves by calling the isMoveDoable(WorkingMemory) method on a move. A non-doable move is:
A move that changes nothing on the current solution. For example, moving queen B0 to row 0 is not doable.
A move that is impossible to do on the current solution. For example, moving queen B0 to row 10 is not doable because it would move it outside the board limits.
In the n queens example, a move which moves the queen from its current row to the same row isn't doable:
public boolean isMoveDoable(WorkingMemory workingMemory) {
int fromY = queen.getY();
return fromY != toY;
}
Because we won't generate a move which can move a queen outside the board limits, we don't need to check it. A move that is currently not doable can become doable on a later solution.
Each move has an undo move: a move (usually of the same type) which does the exact opposite. In the above example the undo move of C0 to C2 would be the move C2 to C0. An undo move can be created from a move, but only before the move has been done on the current solution.
public Move createUndoMove(WorkingMemory workingMemory) {
return new YChangeMove(queen, queen.getY());
}
Notice that if C0 had already been moved to C2, the undo move created at that point would be the move C2 to C2, instead of the move C2 to C0.
The local search solver can do and undo a move more than once, even on different (successive) solutions.
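The do/undo contract can be made concrete without the Drools working memory. The classes below are a hypothetical, simplified sketch (SimpleQueen and SimpleYChangeMove are illustrative stand-ins, not Planner classes): applying an undo move that was created before doMove() restores the original state.

```java
// Hypothetical, simplified sketch of the do/undo contract (no Drools API).
class SimpleQueen {
    int y;
    SimpleQueen(int y) { this.y = y; }
}

class SimpleYChangeMove {
    private final SimpleQueen queen;
    private final int toY;

    SimpleYChangeMove(SimpleQueen queen, int toY) {
        this.queen = queen;
        this.toY = toY;
    }

    // The undo move must be created BEFORE doMove() changes the queen,
    // because it records the queen's current row.
    SimpleYChangeMove createUndoMove() {
        return new SimpleYChangeMove(queen, queen.y);
    }

    void doMove() {
        queen.y = toY;
    }
}
```

Moving a queen from row 0 to row 2 and then applying the undo move returns it to row 0: exactly the symmetry the solver relies on when it tries and rejects moves.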
A move must implement the equals() and hashCode() methods. 2 moves which make the same change on a solution must be equal.
public boolean equals(Object o) {
if (this == o) {
return true;
} else if (o instanceof YChangeMove) {
YChangeMove other = (YChangeMove) o;
return new EqualsBuilder()
.append(queen, other.queen)
.append(toY, other.toY)
.isEquals();
} else {
return false;
}
}
public int hashCode() {
return new HashCodeBuilder()
.append(queen)
.append(toY)
.toHashCode();
}
In the above example, the Queen class uses the default Object equals() and hashCode() implementations. Notice that the implementation checks whether the other move is an instance of the same move type. This is important because a move will be compared to moves of another move type if you're using more than 1 move type.
It's also recommended to implement the toString()
method as it allows you to read Drools
Planner's logging more easily:
public String toString() {
return queen + " => " + toY;
}
Now that we can make a single move, let's take a look at generating moves.
At each solution, local search will try all possible moves and pick the best move to change to the next solution. It's up to you to generate those moves. Let's take a look at all the possible moves on the starting solution of 4 queens:
As you can see, not all the moves are doable. At the starting solution we have 12 doable moves (n * (n - 1)), one of which will be the move which changes the starting solution into the next solution. Notice that the number of possible solutions is 256 (n ^ n), much more than the number of doable moves.
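The arithmetic is easy to verify. With one queen per column, the row-change move type yields n * (n - 1) doable moves per solution, while the solution space has n ^ n members. A small helper sketch (not part of the example code):

```java
import java.math.BigInteger;

public class NQueensCounts {

    // Doable row-change moves: each of n queens can move to n - 1 other rows.
    static long doableMoveCount(int n) {
        return (long) n * (n - 1);
    }

    // Possible solutions: each of n queens independently occupies one of n rows.
    static BigInteger solutionCount(int n) {
        return BigInteger.valueOf(n).pow(n);
    }
}
```

For n = 4 this gives 12 doable moves against 256 solutions; for n = 16 it gives 240 moves against 18446744073709551616 solutions, which is why enumerating solutions directly is hopeless.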
Don't create a move to every possible solution. Instead use moves which can be sequentially combined to reach every
possible solution.
It's highly recommended that you verify all solutions are connected by your move set. This means that by combining a finite number of moves you can reach any solution from any solution. Otherwise you're already excluding solutions at the start. Especially if you're using only big moves, you should check it. Just because big moves outperform small moves in a short test run, it doesn't mean that they will outperform them in a long test run.
You can mix different move types. Usually you're better off preferring small (fine-grained) moves over big (coarse-grained) moves, because the score delta calculation will pay off more. However, as the traveling tournament example proves, if you can remove a hard constraint by using a certain set of big moves, you can win performance and scalability. Try it yourself: run both the simple (small moves) and the smart (big moves) version of the traveling tournament example. The smart version evaluates far fewer infeasible solutions, which enables it to outperform and outscale the simple version.
Move generation currently happens with a MoveFactory
:
public class NQueensMoveFactory extends CachedMoveListMoveFactory {
public List<Move> createMoveList(Solution solution) {
NQueens nQueens = (NQueens) solution;
List<Move> moveList = new ArrayList<Move>();
for (Queen queen : nQueens.getQueenList()) {
for (int n : nQueens.createNList()) {
moveList.add(new YChangeMove(queen, n));
}
}
return moveList;
}
}
But we might make move generation part of the DRL in the future.
A step is the winning move. The local search solver tries every move on the current solution and picks the best accepted move as the step:
Because the move B0 to B3 has the highest score (-3), it is picked as the next step. Notice that C0 to C3 (not shown) could also have been picked, because it also has the score -3. If multiple moves have the same highest score, one is picked randomly; in this case, B0 to B3.
The step is made and from that new solution, the local search solver tries all the possible moves again, to decide the next step after that. It continually does this in a loop, and we get something like this:
Notice that the local search solver doesn't use a search tree, but a search path. The search path is highlighted by the green arrows. At each step it tries all possible moves, but unless it's the step, it doesn't investigate that solution further. This is one of the reasons why local search is very scalable.
As you can see, the local search solver solves the 4 queens problem by starting from the starting solution and making the following steps sequentially:
B0 to B3
D0 to B2
A0 to B1
If we turn on INFO
logging for the category org.drools.planner
:
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <category name="org.drools.planner">
        <priority value="info" />
    </category>
    ...
</log4j:configuration>
Then those steps are reflected into the logging:
INFO Solving with random seed (0).
INFO Starting with time spend (0), score (-6), new best score (-6).
INFO Step index (0), time spend (4), score (-3), new best score (-3), accepted move size (12) for picked step ([Queen-1] 1 @ 0 => 3).
INFO Step index (1), time spend (7), score (-1), new best score (-1), accepted move size (12) for picked step ([Queen-0] 0 @ 0 => 1).
INFO Step index (2), time spend (10), score (0), new best score (0), accepted move size (12) for picked step ([Queen-3] 3 @ 0 => 2).
INFO Solved with time spend (10) for best score (0) with average calculate count per second (7300).
Notice that the logging uses the toString()
method of our Move
implementation: [Queen-1] 1 @ 0 => 3
.
The local search solver solves the 4 queens problem in 3 steps, by evaluating only 37 possible solutions (3 steps with 12 moves each + 1 starting solution), which is only a fraction of all 256 possible solutions. It solves 16 queens in 31 steps, by evaluating only 7441 out of 18446744073709551616 possible solutions.
A simple local search always takes improving moves. This may seem like a good thing, but it's not. It suffers from a number of problems:
It can get stuck in a local optimum. For example, if it reaches a solution X with a score -1 and there is no improving move, it is forced to take a next step that leads to a solution Y with score -2. After that, however, it's very likely that it will pick the step back to solution X with score -1. It will then start looping between solution X and solution Y.
It can start walking in its own footsteps, picking the same next step at every step.
Of course, Drools Planner implements better local searches, such as tabu search and simulated annealing, which can avoid these problems. We recommend never using a simple local search, unless you're absolutely sure there are no local optima in your planning problem.
The local search solver decides the next step with the aid of 3 configurable components:
A selector which selects (or generates) the possible moves of the current solution.
An acceptor which filters out unacceptable moves. It can also weigh a move it accepts.
A forager which gathers all accepted moves and picks the next step from them.
In the above example the selector generated the moves shown with the blue lines, the acceptor accepted all of them and the forager picked the move B0 to B3.
If we turn on DEBUG
logging for the category org.drools.planner
, we can
see the decision making in the log:
INFO Solving with random seed (0).
INFO Starting with time spend (0), score (-6), new best score (-6).
DEBUG Ignoring not doable move ([Queen-0] 0 @ 0 => 0).
DEBUG Move score (-4), accept chance (1.0) for move ([Queen-0] 0 @ 0 => 1).
DEBUG Move score (-4), accept chance (1.0) for move ([Queen-0] 0 @ 0 => 2).
DEBUG Move score (-4), accept chance (1.0) for move ([Queen-0] 0 @ 0 => 3).
...
DEBUG Move score (-3), accept chance (1.0) for move ([Queen-1] 1 @ 0 => 3).
...
DEBUG Move score (-3), accept chance (1.0) for move ([Queen-2] 2 @ 0 => 3).
...
DEBUG Move score (-4), accept chance (1.0) for move ([Queen-3] 3 @ 0 => 3).
INFO Step index (0), time spend (6), score (-3), new best score (-3), accepted move size (12) for picked step ([Queen-1] 1 @ 0 => 3).
...
A selector is currently based on a MoveFactory
.
<selector>
<moveFactoryClass>org.drools.planner.examples.nqueens.solver.NQueensMoveFactory</moveFactoryClass>
</selector>
You're not obligated to generate the same set of moves at each step. It's generally a good idea to use several selectors, mixing fine-grained moves and coarse-grained moves:
<selector>
<selector>
<moveFactoryClass>org.drools.planner.examples.nurserostering.solver.move.factory.EmployeeChangeMoveFactory</moveFactoryClass>
</selector>
<selector>
<moveFactoryClass>org.drools.planner.examples.nurserostering.solver.move.factory.AssignmentSwitchMoveFactory</moveFactoryClass>
</selector>
<selector>
<moveFactoryClass>org.drools.planner.examples.nurserostering.solver.move.factory.AssignmentPillarPartSwitchMoveFactory</moveFactoryClass>
</selector>
</selector>
An acceptor is used (together with a forager) to activate tabu search, simulated annealing, great deluge, ... For each move it generates an accept chance. If a move is rejected, it is given an accept chance of 0.0.
You can implement your own Acceptor, although the built-in acceptors should suffice for most needs. You can also combine multiple acceptors.
When tabu search takes steps, it creates tabus. It does not accept a move as the next step if that move breaks a tabu. Drools Planner implements several tabu types:
Solution tabu makes recently visited solutions tabu. It does not accept a move that leads to one of those solutions. If you can spare the memory, don't be cheap on the tabu size. We recommend this type of tabu because it tends to give the best results and requires little or no tweaking.
<acceptor>
<completeSolutionTabuSize>1000</completeSolutionTabuSize>
</acceptor>
Move tabu makes recent steps tabu. It does not accept a move equal to one of those steps.
<acceptor>
<completeMoveTabuSize>7</completeMoveTabuSize>
</acceptor>
Undo move tabu makes the undo move of recent steps tabu.
<acceptor>
<completeUndoMoveTabuSize>7</completeUndoMoveTabuSize>
</acceptor>
Property tabu makes a property of recent steps tabu. For example, it can make the queen tabu, so that a recently moved queen can't be moved.
<acceptor>
<completePropertyTabuSize>5</completePropertyTabuSize>
</acceptor>
To use property tabu, your moves must implement the TabuPropertyEnabled
interface,
for example:
public class YChangeMove implements Move, TabuPropertyEnabled {
private Queen queen;
private int toY;
// ...
public List<? extends Object> getTabuPropertyList() {
return Collections.singletonList(queen);
}
}
You can also make multiple properties tabu (with OR or AND semantics):
public List<? extends Object> getTabuPropertyList() {
// tabu with other moves that contain the same leftExam OR the same rightExam
return Arrays.asList(leftExam, rightExam);
}
public List<? extends Object> getTabuPropertyList() {
// tabu with other moves that contain the same exam AND the same toPeriod (but not necessary the same toRoom)
return Collections.singletonList(Arrays.asList(exam, toPeriod));
}
You can even combine tabu types:
<acceptor>
<completeSolutionTabuSize>1000</completeSolutionTabuSize>
<completeMoveTabuSize>7</completeMoveTabuSize>
</acceptor>
If you pick a tabu size that is too small, your solver can still get stuck in a local optimum. On the other hand, with the exception of solution tabu, if you pick a tabu size that is too large, your solver can get stuck by bouncing off the walls. Use the benchmarker to fine-tune your configuration. Experiments teach us that it is generally best to use a prime number for the move tabu, undo move tabu or property tabu size.
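To see the mechanics, here is a hypothetical, simplified sketch of a move tabu list (not Planner's implementation): recent steps sit in a fixed-size queue and any candidate move equal to one of them is rejected. This is one reason the Move contract requires equals() and hashCode().

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a move tabu list; the real acceptors are
// configured declaratively in XML, as shown above.
class MoveTabuList {

    private final int tabuSize;
    private final Deque<Object> tabuQueue = new ArrayDeque<Object>();

    MoveTabuList(int tabuSize) {
        this.tabuSize = tabuSize;
    }

    // Relies on the move's equals(), which is part of the Move contract.
    boolean isAccepted(Object move) {
        return !tabuQueue.contains(move);
    }

    void registerStep(Object move) {
        tabuQueue.addLast(move);
        if (tabuQueue.size() > tabuSize) {
            tabuQueue.removeFirst(); // the oldest tabu expires
        }
    }
}
```

Once the queue is full, each new step pushes out the oldest tabu, so a rejected move becomes acceptable again after tabuSize further steps.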
A tabu search acceptor should be combined with a high or no subset selection.
Simulated annealing does not always pick the move with the highest score, nor does it evaluate many moves per step, at least not initially. Instead, it also gives unimproving moves a chance to be picked, depending on their score and the time gradient of the Termination. In the end, it gradually turns into a simple local search, only accepting improving moves.
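The general idea can be sketched with the textbook Metropolis acceptance criterion. This is an assumption about the technique in general, not a copy of Drools Planner's internal formula: improving moves are always accepted, and a worsening move is accepted with probability e^(scoreDelta / temperature), where the temperature shrinks as the time gradient approaches 1.

```java
// Textbook Metropolis acceptance for simulated annealing. A hypothetical
// sketch of the general technique, not Drools Planner's exact implementation.
class AnnealingAcceptance {

    static double acceptChance(double scoreDelta, double startingTemperature,
            double timeGradient) {
        if (scoreDelta >= 0.0) {
            return 1.0; // improving (or equal-score) moves are always accepted
        }
        // Assume the temperature cools linearly towards 0 as time runs out.
        double temperature = startingTemperature * (1.0 - timeGradient);
        return Math.exp(scoreDelta / temperature);
    }
}
```

As the temperature drops, the accept chance for worsening moves approaches 0, which is why the search gradually degenerates into a simple local search that only takes improving moves.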
In many use cases, simulated annealing surpasses tabu search. By changing a few lines of configuration, you can easily switch from tabu search to simulated annealing and back.
Start with a simulatedAnnealingStartingTemperature
set to the maximum score delta a
single move can cause. Use the Benchmarker
to tweak the value.
<acceptor>
<simulatedAnnealingStartingTemperature>2hard/100soft</simulatedAnnealingStartingTemperature>
</acceptor>
<forager>
<minimalAcceptedSelection>4</minimalAcceptedSelection>
</forager>
A simulated annealing acceptor should be combined with a low subset selection. The classic algorithm uses a minimalAcceptedSelection of 1, but usually 4 performs better.
You can even combine it with a tabu acceptor at the same time. Use a lower tabu size than in a pure tabu search configuration.
<acceptor>
<simulatedAnnealingStartingTemperature>10.0</simulatedAnnealingStartingTemperature>
<completePropertyTabuSize>5</completePropertyTabuSize>
</acceptor>
<forager>
<minimalAcceptedSelection>4</minimalAcceptedSelection>
</forager>
This differs from phasing, another powerful technique, where first simulated annealing is used, followed by tabu search.
A forager gathers all accepted moves and picks the move which is the next step. Normally it picks the accepted move with the highest score. If several accepted moves have the highest score, one is picked randomly, weighted on their accept chance.
You can implement your own Forager, although the built-in foragers should suffice for most needs.
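The default behaviour can be sketched as follows (a hypothetical simplification, not the built-in forager): keep every accepted move that ties for the highest score, then pick one of the survivors at random.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of "pick the best accepted move, break ties randomly".
class BestMoveForager {

    private final Random random;
    private final List<String> bestMoves = new ArrayList<String>();
    private double bestScore = Double.NEGATIVE_INFINITY;

    BestMoveForager(long randomSeed) {
        this.random = new Random(randomSeed);
    }

    void addMove(String move, double score) {
        if (score > bestScore) {
            bestScore = score;
            bestMoves.clear();
            bestMoves.add(move);
        } else if (score == bestScore) {
            bestMoves.add(move); // a tie: keep all tied candidates
        }
    }

    String pickStep() {
        return bestMoves.get(random.nextInt(bestMoves.size()));
    }
}
```

Fed the 4 queens moves from the earlier step example, both -3 moves survive and one of them becomes the step.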
When there are many possible moves, it becomes inefficient to evaluate all of them at every step. To evaluate only a random subset of all the moves, use:
A minimalAcceptedSelection integer, which specifies how many accepted moves should be evaluated during each step. By default it is positive infinity, so all accepted moves are evaluated at every step.
<forager>
<minimalAcceptedSelection>1000</minimalAcceptedSelection>
</forager>
Unlike the n queens problem, real-world problems require the use of subset selection. Start from a minimalAcceptedSelection that takes a step in less than 2 seconds. Turn on INFO logging to see the step times. Use the Benchmarker to tweak the value.
A forager can pick a move early during a step, ignoring subsequent selected moves. There are 3 pick early types:
NEVER: A move is never picked early: all accepted moves that the selection allows are evaluated. This is the default.
<forager>
<pickEarlyType>NEVER</pickEarlyType>
</forager>
FIRST_BEST_SCORE_IMPROVING
: Pick the first accepted move that improves the best
score. If none improve the best score, it behaves exactly like the pickEarlyType NEVER.
<forager>
<pickEarlyType>FIRST_BEST_SCORE_IMPROVING</pickEarlyType>
</forager>
FIRST_LAST_STEP_SCORE_IMPROVING
: Pick the first accepted move that improves the
last step score. If none improve the last step score, it behaves exactly like the pickEarlyType
NEVER.
<forager>
<pickEarlyType>FIRST_LAST_STEP_SCORE_IMPROVING</pickEarlyType>
</forager>
Because the current solution can degrade (especially in tabu search and simulated annealing), the local search solver remembers the best solution it has encountered through the entire search path. Each time the current solution is better than the last best solution, the current solution is cloned and referenced as the new best solution.
You can listen to solver events, including when the best solution changes during solving, by adding a
SolverEventListener
to the Solver
:
public interface Solver {
// ...
void addEventListener(SolverEventListener eventListener);
void removeEventListener(SolverEventListener eventListener);
}
Sooner or later the local search solver will have to stop solving. This can be for a number of reasons: the time is up, the perfect score has been reached, ... The only thing you can't depend on is finding the optimal solution (unless you know the optimal score), because a local search algorithm doesn't know when it finds the optimal solution. For real-life problems this doesn't turn out to be much of a problem, because finding the optimal solution would take billions of years, so you'll want to terminate sooner anyway.
You can configure when a local search solver needs to stop by configuring a Termination. A Termination can calculate a time gradient, which is the ratio between the time already spent solving and the expected total solving time.
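For example, 1 minute into a 5 minute solving budget, the time gradient is 0.2. A minimal sketch of that ratio (a hypothetical helper, not a Planner class):

```java
// Time gradient: the ratio of time already spent to the total solving budget,
// capped at 1.0 once the budget is used up.
class TimeGradient {

    static double calculate(long timeSpentMillis, long budgetMillis) {
        return Math.min(1.0, (double) timeSpentMillis / (double) budgetMillis);
    }
}
```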
You can implement your own Termination, although the built-in Terminations should suffice for most needs.
Terminates when an amount of time has been reached:
<termination>
<maximumMinutesSpend>2</maximumMinutesSpend>
</termination>
or
<termination>
<maximumHoursSpend>1</maximumHoursSpend>
</termination>
Note that the time taken by a StartingSolutionInitializer is also taken into account by this Termination. So if you give the solver 2 minutes to solve something, but the initializer takes 1 minute, the local search solver will only have 1 minute left.
Note that if you use this Termination, you will most likely sacrifice reproducibility. The best solution will depend on available CPU time, not only because it influences the amount of steps taken, but also because time gradient based algorithms (such as simulated annealing) will probably act differently on each run.
Terminates when an amount of steps has been reached:
<termination>
<maximumStepCount>100</maximumStepCount>
</termination>
Terminates when a certain score has been reached. You can use this Termination if you know the perfect score, for example for 4 queens:
<termination>
<scoreAttained>0</scoreAttained>
</termination>
You can also use this Termination to terminate once it reaches a feasible solution. For a planning problem with hard and soft constraints, it could look like this:
<termination>
<scoreAttained>0hard/-5000soft</scoreAttained>
</termination>
Terminates when the best score hasn't improved in a number of steps:
<termination>
<maximumUnimprovedStepCount>100</maximumUnimprovedStepCount>
</termination>
If it hasn't improved recently, it's probably not going to improve soon anyway, and it's not worth the effort to continue. We have observed that once a new best solution is found (even after a long time without improvement on the best solution), the next few steps tend to improve the best solution too.
Terminations can be combined, for example: terminate after 100 steps or if a score of 0 has been reached:
<termination>
<terminationCompositionStyle>OR</terminationCompositionStyle>
<maximumStepCount>100</maximumStepCount>
<scoreAttained>0</scoreAttained>
</termination>
Alternatively you can use AND, for example: terminate after reaching a feasible score of at least -100 and no improvements in 5 steps:
<termination>
<terminationCompositionStyle>AND</terminationCompositionStyle>
<maximumUnimprovedStepCount>5</maximumUnimprovedStepCount>
<scoreAttained>-100</scoreAttained>
</termination>
This ensures it doesn't just terminate after finding a feasible solution, but also makes any obvious improvements on that solution before terminating.
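The composition semantics can be sketched generically (a hypothetical helper, not a Planner class): OR terminates as soon as any child Termination fires, AND only once all of them have fired.

```java
// Hypothetical sketch of terminationCompositionStyle semantics.
class CompositeTermination {

    static boolean isTerminated(String compositionStyle,
            boolean... childrenTerminated) {
        boolean any = false;
        boolean all = true;
        for (boolean terminated : childrenTerminated) {
            any |= terminated;
            all &= terminated;
        }
        return compositionStyle.equals("OR") ? any : all; // otherwise AND
    }
}
```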
Sometimes you'll want to terminate a Solver early from another thread, for example because of a user action or a server restart. That cannot be configured by a Termination, as it's impossible to predict when and if it will occur. Therefore the Solver interface has these 2 thread-safe methods:
public interface Solver {
// ...
boolean terminateEarly();
boolean isTerminateEarly();
}
If you call the terminateEarly()
method from another thread, the
Solver
will terminate at its earliest convenience and the solve()
method
will return in the original solver thread.
It is easy to plug in a custom Selector
, Acceptor
,
Forager
or Termination
by extending the abstract class and also the config
class.
For example, to use a custom Selector
, extend the AbstractSelector
class
(see AllMovesOfOneExamSelector
), extend the SelectorConfig
class (see
AllMovesOfOneExamSelectorConfig
) and configure it in the configuration XML:
<selector class="org.drools.planner.examples.examination.solver.selector.AllMovesOfOneExamSelectorConfig"/>
If you build a better implementation that's not domain specific, consider adding it as a patch in our issue tracker and we'll take it along in future refactors and optimize it.