JBoss.org Community Documentation
The Kie Server is a modular, standalone server component that can be used to instantiate and execute rules and processes. It exposes this functionality via REST, JMS and Java interfaces to client applications. It also provides seamless integration with the Kie Workbench.
At its core, the Kie Server is a configurable web application packaged as a WAR file. Distributions are available for pure web containers (like Tomcat) and for JEE 6 and JEE 7 containers.
Most capabilities of the Kie Server are configurable, based on the concept of extensions. Each extension can be enabled/disabled independently, allowing the user to configure the server to their needs.
The current version of the Kie Server ships with two default extensions:
BRM: provides support for the execution of Business Rules using the Drools rules engine.
BPM: provides support for the execution of Business Processes using the jBPM process engine. It supports:
process execution
task execution
asynchronous job execution
Both extensions are enabled by default, but can be disabled by setting the corresponding property (see configuration chapter for details).
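As a sketch of what disabling an extension looks like in practice, the properties below can be passed as JVM system properties at boot time. The property names shown are the ones used for this release; verify them against the bootstrap switches table in the configuration section:

```
# Assumed property names - check the bootstrap switches table for your version.
# Disable the BRM (Drools) extension:
-Dorg.drools.server.ext.disabled=true
# Disable the BPM (jBPM) extension:
-Dorg.jbpm.server.ext.disabled=true
```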
This server was designed to have a low footprint, with minimal memory consumption, and therefore, to be easily deployable on a cloud environment. Each instance of this server can open and instantiate multiple Kie Containers which allows you to execute multiple services in parallel.
Kie Server: execution server purely focusing on providing runtime environment for both rules and processes. These capabilities are provided by Kie Server Extensions. More capabilities can be added by further extensions (e.g. customer could add his own extensions in case of missing functionality that will then use infrastructure of the KIE Server). A Kie Server instance is a standalone Kie Server executing on a given application server/web container. A Kie Server instantiates and provides support for multiple Kie Containers.
Kie Server Extension: a "plugin" for the Kie Server that adds capabilities to the server. The Kie Server ships with two default kie server extensions: BRM and BPM.
Kie Container: an in-memory instantiation of a kjar, allowing for the instantiation and usage of its assets (domain models, processes, rules, etc). A Kie Server exposes Kie Containers through a standard API over transport protocols like REST and JMS.
Controller: a server-backed REST endpoint that is responsible for managing KIE Server instances. Such an endpoint must provide the following capabilities:
respond to connect requests
sync all registered containers on the corresponding Kie Server ID
respond to disconnect requests
Kie Server state: the currently known state of a given Kie Server instance. This is a local storage (by default in a file) that maintains the following information:
list of registered controllers
list of known containers
kie server configuration
The server state is persisted upon receipt of events such as: Kie Container created, Kie Container disposed, controller accepts registration of Kie Server instance, etc.
Kie Server ID: an arbitrarily assigned identifier to which configurations are assigned. At boot, each Kie Server Instance is assigned an ID, and that ID is matched to a configuration on the controller. The Kie Server Instance fetches and uses that configuration to set itself up.
The KIE Server is distributed as a web application archive (WAR) file. The WAR file comes in three different packagings:
To install the KIE Execution Server and verify it is running, complete the following steps:
Deploy the WAR file into your web container.
Create a user with the role of kie-server on the container.
Test that you can access the execution engine by navigating to the endpoint in a browser window: http://SERVER:PORT/CONTEXT/services/rest/server/.
When prompted for username/password, type in the username and password that you created in step 2.
Once authenticated, you will see an XML response in the form of engine status, similar to this:
Example 22.1. Sample handshaking server response
<response type="SUCCESS" msg="KIE Server info">
<kie-server-info>
<version>6.5.0.CR1</version>
</kie-server-info>
</response>
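The same check can be scripted instead of using a browser. The host, context and credentials below are placeholders; substitute the values you configured in steps 1-2:

```shell
# Hypothetical host, context and credentials - substitute your own values.
curl -u serveruser:my.s3cr3t.pass \
     http://localhost:8080/kie-server/services/rest/server/
```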
The Kie Server accepts a number of bootstrap switches (system properties) to configure the behaviour of the server. The following is a table of all the supported switches.
Table 22.1. Kie Server bootstrap switches
If you are running both KIE Server and KIE Workbench you must configure KIE Server to use a different Data Source to KIE Workbench using the org.kie.server.persistence.ds property. KIE Workbench uses a jBPM Executor Service that can conflict with KIE Server if they share the same Data Source.
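For instance, a separate data source for the Kie Server could be selected at boot time along the following lines. The JNDI name and dialect are illustrative values; the property names are the persistence switches listed in the table above:

```
# Example values - adjust the JNDI name and dialect to your environment.
-Dorg.kie.server.persistence.ds=java:jboss/datasources/KieServerDS
-Dorg.kie.server.persistence.dialect=org.hibernate.dialect.H2Dialect
```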
Download and unzip the Tomcat distribution. Let's call the root of the distribution TOMCAT_HOME. This directory is named after the Tomcat version, so for example apache-tomcat-7.0.55.
Download kie-server-6.5.0.CR1-webc.war and place it into TOMCAT_HOME/webapps.
Configure user(s) and role(s). Make sure that the file TOMCAT_HOME/conf/tomcat-users.xml contains the following username and role definition. You can of course choose a different username and password; just make sure that the user has the role kie-server:
Example 22.2. Username and role definition for Tomcat
<role rolename="kie-server"/>
<user username="serveruser" password="my.s3cr3t.pass" roles="kie-server"/>
Start the server by running TOMCAT_HOME/bin/startup.[sh|bat]. You can check the Tomcat logs in TOMCAT_HOME/logs to see if the application deployed successfully. Please read the table above for the bootstrap switches that can be used to properly configure the instance. For instance:
./startup.sh -Dorg.kie.server.id=first-kie-server
-Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server
Verify the server is running. Go to http://SERVER:PORT/CONTEXT/services/rest/server/ and type the specified username and password. You should see a simple XML message with basic information about the server.
Download and unzip the WildFly distribution. Let's call the root of the distribution WILDFLY_HOME. This directory is named after the WildFly version, so for example wildfly-8.2.0.Final.
Download kie-server-6.5.0.CR1-ee7.war and place it into WILDFLY_HOME/standalone/deployments.
Configure user(s) and role(s). Execute the following command: WILDFLY_HOME/bin/add-user.[sh|bat] -a -u 'kieserver' -p 'kieserver1!' -ro 'kie-server'. You can of course choose a different username and password; just make sure that the user has the role kie-server.
Start the server by running WILDFLY_HOME/bin/standalone.[sh|bat] -c standalone-full.xml <bootstrap_switches>. You can check the standard output or the WildFly logs in WILDFLY_HOME/standalone/logs to see if the application deployed successfully. Please read the table above for the bootstrap switches that can be used to properly configure the instance. For instance:
./standalone.sh --server-config=standalone-full.xml
-Djboss.socket.binding.port-offset=150
-Dorg.kie.server.id=first-kie-server
-Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server
Verify the server is running. Go to http://SERVER:PORT/CONTEXT/services/rest/server/ and type the specified username and password. You should see a simple XML message with basic information about the server.
Server setup and registration changed significantly from versions 6.2 and before. The following applies only to version 6.3 and forward.
A managed instance is one that requires a controller to be available in order to properly start up the Kie Server instance.
A Controller is a component responsible for keeping and managing Kie Server Configurations in a centralized way. Each controller can manage multiple configurations at once, and there can be multiple controllers in the environment. Managed KIE Servers can be configured with a list of controllers but will connect to only one at a time.
At startup, if a Kie Server is configured with a list of controllers, it will successively try to connect to each of them until a connection is successfully established with one of them. If for any reason a connection can't be established, the server will not start, even if there is local storage available with configuration. This happens by design in order to ensure consistency. For instance, if the Kie Server was down and the configuration has changed, this restriction guarantees that it will run with an up-to-date configuration or not at all.
The configuration sets, among other things:
kie containers to be deployed and started
configuration items - currently this is a placeholder for further enhancements that will allow remote configuration of KIE Execution Server components - timers, persistence, etc
The Controller, besides providing configuration management, is also responsible for overall management of Kie Servers. It provides a REST api that is divided into two parts:
the controller itself that is exposed to interact with KIE Execution Server instances
an administration API that allows you to remotely manage Kie Server instances:
add/remove servers
add/remove containers to/from the servers
start/stop containers on servers
The controller deals only with the Kie Server configuration - or definition, to put it differently. It does not handle any runtime components of KIE Execution Server instances; they are always considered remote to the controller. The controller is responsible for persisting the configuration so that it survives restarts of the controller itself. It should also manage synchronization in case multiple controllers are configured, to keep all definitions up to date on all instances of the controller.
By default, the controller is shipped with Kie Workbench and provides a fully featured management interface (both REST api and UI). It uses the underlying Git repository as a persistent store, and thus when Git repositories are clustered (using Apache Zookeeper and Apache Helix) it will cover the synchronization of the controllers as well.
The diagram above illustrates the single controller (workbench) setup with multiple Kie Server instances managed by it.
The diagram below illustrates the clustered setup where there are multiple instances of controller synchronized over Zookeeper.
In the above diagram we can see that the Kie Server instances are capable of connecting to any of the controllers, but they will connect to only one. Each instance will attempt to connect to a controller as long as it can reach one. Once a connection is established with one of the controllers, it will skip the others.
There are two approaches that users can take when working with managed KIE Server instances:
Configuration first: with this approach, a user will start working with the controller (either UI or REST api) and create and configure Kie Server definitions. That consists basically of an identification for the server definition (id and name + optionally version for improved readability) and the configuration for the Kie Containers to run on the server.
Registration first: with this approach, the Kie Server instances are started first and auto register themselves on the controller. The user can then configure the Kie Containers. This option simply skips the registration step done in the first approach and populates it with the server id, name and version directly upon auto registration. There are no other differences between the two approaches.
An unmanaged Kie Server is in turn just a standalone instance, and thus must be configured individually using the REST/JMS api of the Kie Server itself. There is no controller involved. The configuration is automatically persisted by the server into a file, which is used as the internal server state in case of restarts.
The configuration is updated during the following operations:
deploy Kie Container
undeploy Kie Container
start Kie Container
stop Kie Container
In most use cases, the Kie Server should be executed in managed mode as that provides some benefits, like a web user interface (if using the workbench as a controller) and some facilities for clustering.
Once your Execution Server is registered, you can start adding Kie Containers to it.
Kie Containers are self-contained environments that have been provisioned to hold instances of your packaged and deployed rules.
Start by clicking the + icon next to the Execution Server where you want to deploy your Container. This will bring up the New Container screen.
If you know the Group Id, Artifact Id and Version (GAV) of your deployed package, then you can enter those details and click the Ok button to select that instance (and provide a name for the Container).
If you don't know these values, you can search KIE Workbench for all packages that can be deployed. Click the Search button without entering any value in the search field (you can narrow your search by entering any term that you know exists in the package that you want to deploy).
INSERT SCREENSHOT HERE
The figure above shows that there are three deployable packages available to be used as containers on the Execution Server. Select the one that you want by clicking the Select button. This will auto-populate the GAV and you can then click the Ok button to use this deployable as the new Container.
Enter a name for this Container at the top and then press the Ok button.
The Container name must be unique inside each execution server and must not contain any spaces.
Just below the GAV row, you will see an uneditable row that shows you the URL for your Container against which you will be able to execute REST commands.
Containers within the Execution Server can be started, stopped and updated from within KIE Workbench.
Once registered, a Container is in the 'Stopped' mode. It can be started by first selecting it and then clicking the Start button. You can also select multiple Containers and start them all at the same time.
Once the Container is in the 'Running' mode, a green arrow appears next to it. If there are any errors starting the Container(s), red icons appear next to Containers and the Execution Server that they are deployed on.
You should check the logs of both the Execution Server and the current Business Central to see what the errors are before redeploying the Containers (and possibly the Execution Server).
Similar to starting a Container, select the Container(s) that you want to stop (or delete) and click the Stop button (which replaces the Start button for that Container once it has entered the 'Running' mode) or the Delete button.
You can update deployed KieContainers without restarting the Execution Server. This is useful in cases where the Business Rules change, creating new versions of packages to be provisioned. You can have multiple versions of the same package provisioned and deployed, each to a different KieContainer.
To update deployments in a KieContainer dynamically, click on the icon next to the Container. This will open up the Container Info screen. An example of this screen is shown here:
INSERT SCREENSHOT HERE
The Container Info screen is a useful tool because it not only allows you to see the endpoint for this KieContainer, but it also allows you to refresh the provision if an update is available. The update can be manual or automatic:
Manual Update: To manually update a KieContainer, enter the new Version number in the Version box and click on the Update button. You can, of course, update the Group Id or the Artifact Id if these have changed as well. Once updated, the Execution Server updates the container and shows you the resolved GAV attributes at the bottom of the screen in the Resolved Release Id section.
Automatic Update: If you want a deployed Container to always have the latest version of your deployment without manually editing it, you will need to set the Version property to the value of LATEST and start a Scanner. This will ensure that the deployed provision always contains the latest version. The Scanner can be started just once on demand by clicking the Scan Now button, or you can start it in the background with scans happening at a specified interval (in seconds). You can also set this value to LATEST when you are first creating this deployment. The Resolved Release Id in this case will show you the actual, latest version number.
The Execution Server supports the following commands via the REST API.
Please note the following before using these commands:
The base URL for these will remain as the endpoint defined earlier (for example: http://SERVER:PORT/CONTEXT/services/rest/server/).
All requests require basic HTTP Authentication for the role kie-server as indicated earlier.
Returns the Execution Server information
Example 22.3. Example Server Response
<response type="SUCCESS" msg="KIE Server info">
<kie-server-info>
<version>6.2.0.redhat-1</version>
</kie-server-info>
</response>
Using the POST HTTP method, you can execute various commands on the Execution Server, e.g. create-container, list-containers, dispose-container and call-container.
Following is the full list of commands:
CreateContainerCommand
GetServerInfoCommand
ListContainersCommand
CallContainerCommand
DisposeContainerCommand
GetContainerInfoCommand
GetScannerInfoCommand
UpdateScannerCommand
UpdateReleaseIdCommand
The commands themselves can be found in the org.kie.server.api.commands package.
Returns a list of containers that have been created on this Execution Server.
Example 22.4. Example Server Response
<response type="SUCCESS" msg="List of created containers">
<kie-containers>
<kie-container container-id="MyProjectContainer" status="STARTED">
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</release-id>
<resolved-release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</resolved-release-id>
</kie-container>
</kie-containers>
</response>
The endpoint also supports filtering based on ReleaseId and container status. Examples:
/containers?groupId=org.example - returns only containers with the specified groupId
/containers?groupId=org.example&artifactId=project1&version=1.0.0.Final - returns only containers with the specified ReleaseId
/containers?status=started,failed - returns containers which are either started or failed
Returns the status and information about a particular container. For example, executing http://SERVER:PORT/CONTEXT/services/rest/server/containers/MyProjectContainer could return the following example container info.
Example 22.5. Example Server Response
<response type="SUCCESS" msg="Info for container MyProjectContainer">
<kie-container container-id="MyProjectContainer" status="STARTED">
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</release-id>
<resolved-release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</resolved-release-id>
</kie-container>
</response>
Allows you to create a new Container in the Execution Server. For example, to create a Container with the id of MyRESTContainer, the complete endpoint will be: http://SERVER:PORT/CONTEXT/services/rest/server/containers/MyRESTContainer. An example request is:
Example 22.6. Example Request to create a container
<kie-container container-id="MyRESTContainer">
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</release-id>
</kie-container>
And the response from the server, if successful, would be:
Example 22.7. Example Server Response when creating a container
<response type="SUCCESS" msg="Container MyRESTContainer successfully deployed with module com.redhat:Project1:1.0">
<kie-container container-id="MyRESTContainer" status="STARTED">
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</release-id>
<resolved-release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</resolved-release-id>
</kie-container>
</response>
Disposes the Container specified by the id. For example, executing http://SERVER:PORT/CONTEXT/services/rest/server/containers/MyProjectContainer using the DELETE HTTP method will return the following server response:
Example 22.8. Example Server Response disposing a container
<response type="SUCCESS" msg="Container MyProjectContainer successfully disposed."/>
Executes operations and commands against the specified Container. You can send commands to this Container in the body of the POST request. For example, to fire all rules for the Container with id MyRESTContainer (http://SERVER:PORT/CONTEXT/services/rest/server/containers/instances/MyRESTContainer), you would send the fire-all-rules command in the body of the POST request.
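A minimal XStream body for such a fire-all-rules call could look like the sketch below. The lookup value and the org.example.Bean1 fact class are assumptions for illustration; they must match a KieSession and a domain class defined in your kjar:

```xml
<!-- "defaultKieSession" and org.example.Bean1 are illustrative names -->
<batch-execution lookup="defaultKieSession">
  <insert out-identifier="f1">
    <org.example.Bean1>
      <name>Robert</name>
    </org.example.Bean1>
  </insert>
  <fire-all-rules/>
</batch-execution>
```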
Following is the list of supported commands:
AgendaGroupSetFocusCommand
ClearActivationGroupCommand
ClearAgendaCommand
ClearAgendaGroupCommand
ClearRuleFlowGroupCommand
DeleteCommand
InsertObjectCommand
ModifyCommand
GetObjectCommand
InsertElementsCommand
FireAllRulesCommand
QueryCommand
SetGlobalCommand
GetGlobalCommand
GetObjectsCommand
BatchExecutionCommand
These commands can be found in the org.drools.core.command.runtime package.
Returns the full release id for the Container specified by the id.
Example 22.10. Example Server Response
<response type="SUCCESS" msg="ReleaseId for container MyProjectContainer">
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.0</version>
</release-id>
</response>
Allows you to update the release id of the container deployment. Send the new complete release id to the Server.
Example 22.11. Example Server Request
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.1</version>
</release-id>
The Server will respond with a success or error message, similar to the one below:
Example 22.12. Example Server Response
<response type="SUCCESS" msg="Release id successfully updated.">
<release-id>
<artifact-id>Project1</artifact-id>
<group-id>com.redhat</group-id>
<version>1.1</version>
</release-id>
</response>
Returns information about the scanner for this Container's automatic updates.
Example 22.13. Example Server Response
<response type="SUCCESS" msg="Scanner info successfully retrieved">
<kie-scanner status="DISPOSED"/>
</response>
Allows you to start or stop a scanner that controls polling for updated Container deployments. To start the scanner, send a request similar to: http://SERVER:PORT/CONTEXT/services/rest/server/containers/{container-id}/scanner with the following POST data.
Example 22.14. Example Server Request to start the scanner
<kie-scanner status="STARTED" poll-interval="20"/>
The poll-interval attribute is in seconds. The response from the server will be similar to:
Example 22.15. Example Server Response
<response type="SUCCESS" msg="Kie scanner successfully created.">
<kie-scanner status="STARTED"/>
</response>
To stop the Scanner, replace the status with DISPOSED and remove the poll-interval attribute.
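Following that description, the request body to stop the scanner would be:

```xml
<kie-scanner status="DISPOSED"/>
```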
Commands outlined in this section can be sent with any REST client, whether it is curl, RESTEasy or a .NET based application. However, when sending requests from a Java based application, users can utilize the out of the box native client for remote communication with the Execution Server. This client is part of the org.kie:kie-server-client project. It doesn't allow creating XML requests, therefore it is necessary to generate them beforehand, for example using the Drools API.
Example 22.16. Generate XML request
import java.util.ArrayList;
import java.util.List;
import org.drools.core.command.impl.GenericCommand;
import org.drools.core.command.runtime.BatchExecutionCommandImpl;
import org.drools.core.command.runtime.rule.FireAllRulesCommand;
import org.drools.core.command.runtime.rule.InsertObjectCommand;
import org.kie.api.command.BatchExecutionCommand;
import org.kie.internal.runtime.helper.BatchExecutionHelper;
public class DecisionClient {

    public static void main(String args[]) {
        // Bean1 is a domain class contained in the kjar deployed to the container
        Bean1 bean1 = new Bean1();
        bean1.setName("Robert");

        // insert the fact with out-identifier "f1", then fire all rules
        InsertObjectCommand insertObjectCommand = new InsertObjectCommand(bean1, "f1");
        FireAllRulesCommand fireAllRulesCommand = new FireAllRulesCommand("myFireCommand");

        List<GenericCommand<?>> commands = new ArrayList<GenericCommand<?>>();
        commands.add(insertObjectCommand);
        commands.add(fireAllRulesCommand);

        BatchExecutionCommand command = new BatchExecutionCommandImpl(commands);
        String xStreamXml = BatchExecutionHelper.newXStreamMarshaller().toXML(command); // actual XML request
    }
}
Once the request is generated it can be sent using kie-server-client as follows:
Example 22.17. Sending XML request with kie-server-client
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
//user "anton" must have role "kie-server" assigned
KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
        "http://localhost:8080/kie-server/services/rest/server",
        "anton",
        "password1!");
KieServicesClient client = KieServicesFactory.newKieServicesClient(config);
// the request "xStreamXml" we generated in previous step
// "ListenerReproducer" is the name of the Container
ServiceResponse<String> response = client.executeCommands("ListenerReproducer", xStreamXml);
System.out.println(response.getResult());
When the Planner capability is enabled, the Kie Server supports the following additional REST APIs. As usual, all these APIs are also available through JMS and the Java client API. Please also note:
The base URL for these will remain as the endpoint defined earlier (for example http://SERVER:PORT/CONTEXT/services/rest/server/).
All requests require basic HTTP Authentication for the role kie-server as indicated earlier.
To get a specific marshalling format, add the HTTP headers Content-Type and optionally X-KIE-ContentType in the HTTP request. For example:
Content-Type: application/xml
X-KIE-ContentType: xstream
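Combined in a hypothetical curl invocation against the solvers endpoint (host, credentials and endpoint path are illustrative placeholders):

```shell
# Illustrative host, credentials and endpoint - substitute your own values.
curl -u kieserver:kieserver1! \
     -H "Content-Type: application/xml" \
     -H "X-KIE-ContentType: xstream" \
     http://localhost:8080/kie-server/services/rest/server/containers/optacloud-kiecontainer-1/solvers
```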
The example requests and responses used below presume that a kie container is built using the optacloud example of OptaPlanner Workbench, by calling a PUT on /services/rest/server/containers/optacloud-kiecontainer-1 with this content:
<kie-container container-id="optacloud-kiecontainer-1">
<release-id>
<group-id>opta</group-id>
<artifact-id>optacloud</artifact-id>
<version>1.0.0</version>
</release-id>
</kie-container>
Returns the list of solvers created in the container.
Example 22.18. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Solvers list successfully retrieved from container 'optacloud-kiecontainer-1'</msg>
<result class="org.kie.server.api.model.instance.SolverInstanceList">
<solvers>
<solver-instance>
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver1</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>NOT_SOLVING</status>
</solver-instance>
<solver-instance>
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver2</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>NOT_SOLVING</status>
</solver-instance>
</solvers>
</result>
</org.kie.server.api.model.ServiceResponse>
Example 22.19. Example Server Response (JSON)
{
"type" : "SUCCESS",
"msg" : "Solvers list successfully retrieved from container 'optacloud-kiecontainer-1'",
"result" : {
"solver-instance-list" : {
"solver" : [ {
"status" : "NOT_SOLVING",
"container-id" : "optacloud-kiecontainer-1",
"solver-id" : "solver1",
"solver-config-file" : "opta/optacloud/cloudSolverConfig.solver.xml"
}, {
"status" : "NOT_SOLVING",
"container-id" : "optacloud-kiecontainer-1",
"solver-id" : "solver2",
"solver-config-file" : "opta/optacloud/cloudSolverConfig.solver.xml"
} ]
}
}
}
Creates a new solver with the given {solverId} in the container {containerId}. The request's body is a marshalled SolverInstance entity that must specify the solver configuration file. The following is an example of the request and the corresponding response.
Example 22.20. Example Server Request (XStream)
<solver-instance>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
</solver-instance>
Example 22.21. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Solver 'solver1' successfully created in container 'optacloud-kiecontainer-1'</msg>
<result class="solver-instance">
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver1</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>NOT_SOLVING</status>
</result>
</org.kie.server.api.model.ServiceResponse>
Example 22.22. Example Server Request (JSON)
{
"solver-config-file" : "opta/optacloud/cloudSolverConfig.solver.xml"
}
Example 22.23. Example Server Response (JSON)
{
"type" : "SUCCESS",
"msg" : "Solver 'solver1' successfully created in container 'optacloud-kiecontainer-1'",
"result" : {
"solver-instance" : {
"container-id" : "optacloud-kiecontainer-1",
"solver-id" : "solver1",
"solver-config-file" : "opta/optacloud/cloudSolverConfig.solver.xml",
"status" : "NOT_SOLVING"
}
}
}
Returns the current state of the solver {solverId} in container {containerId}.
Example 22.24. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Solver 'solver1' state successfully retrieved from container 'optacloud-kiecontainer-1'</msg>
<result class="solver-instance">
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver1</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>NOT_SOLVING</status>
</result>
</org.kie.server.api.model.ServiceResponse>
Example 22.25. Example Server Response (JSON)
{
"type" : "SUCCESS",
"msg" : "Solver 'solver1' state successfully retrieved from container 'optacloud-kiecontainer-1'",
"result" : {
"solver-instance" : {
"container-id" : "optacloud-kiecontainer-1",
"solver-id" : "solver1",
"solver-config-file" : "opta/optacloud/cloudSolverConfig.solver.xml",
"status" : "NOT_SOLVING"
}
}
}
Updates the state of the solver {solverId} in container {containerId}, most notably to start solving. The request's body is a marshalled SolverInstance and can either request the solver to solve a planning problem or to stop solving one. The SolverInstance state determines which operation should be executed and can be set to one of two possible values:
SOLVING: starts the solver if it is not executing yet. The request's body must also contain the problem's data to be solved.
NOT_SOLVING: requests the solver to terminate early, if it is running. All other attributes are ignored.
For example, to solve an optacloud problem with 2 computers and 1 process:
Example 22.26. Example Server Request (XStream)
<solver-instance>
<status>SOLVING</status>
<planning-problem class="opta.optacloud.CloudSolution">
<computerList>
<opta.optacloud.Computer>
<cpuPower>10</cpuPower>
<memory>4</memory>
<networkBandwidth>100</networkBandwidth>
<cost>1000</cost>
</opta.optacloud.Computer>
<opta.optacloud.Computer>
<cpuPower>20</cpuPower>
<memory>8</memory>
<networkBandwidth>100</networkBandwidth>
<cost>3000</cost>
</opta.optacloud.Computer>
</computerList>
<processList>
<opta.optacloud.Process>
<requiredCpuPower>1</requiredCpuPower>
<requiredMemory>7</requiredMemory>
<requiredNetworkBandwidth>1</requiredNetworkBandwidth>
</opta.optacloud.Process>
</processList>
</planning-problem>
</solver-instance>
Notice that the response does not contain the best solution yet, because solving can take seconds, minutes, hours or days and this would time out the HTTP request:
Example 22.27. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Solver 'solver1' from container 'optacloud-kiecontainer-1' successfully updated.</msg>
<result class="solver-instance">
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver1</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>SOLVING</status>
</result>
</org.kie.server.api.model.ServiceResponse>
Instead, the solver runs asynchronously and you need to call the bestsolution URL to retrieve the best solution found so far.
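For instance, a client could poll that endpoint over plain HTTP while the solver runs. The sketch below assumes the default kie-server REST base URL and the credentials used elsewhere in this chapter; adjust host, credentials and ids to your deployment:

```shell
# Poll the current best solution for solver1 in optacloud-kiecontainer-1.
# The solver keeps running, so repeated calls may return progressively
# better solutions. Host, credentials and ids are assumptions.
curl -u kieserver:kieserver1! \
     -H "Accept: application/xml" \
     "http://localhost:8080/kie-server/services/rest/server/containers/optacloud-kiecontainer-1/solvers/solver1/bestsolution"
```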
For example, to terminate solving:
Example 22.28. Example Server Request (XStream)
<solver-instance>
<status>NOT_SOLVING</status>
</solver-instance>
Example 22.29. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Solver 'solver1' from container 'optacloud-kiecontainer-1' successfully updated.</msg>
<result class="solver-instance">
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver1</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>TERMINATING_EARLY</status>
<score class="org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore">
<hardScore>0</hardScore>
<softScore>-3000</softScore>
</score>
</result>
</org.kie.server.api.model.ServiceResponse>
This does not delete the solver; the best solution can still be retrieved.
Returns the best solution found at the time the request is made. If the solver hasn't terminated yet (so the status field is still SOLVING), it will return the best solution found up to that point, but later calls can return a better solution.
For example, the problem submitted above would return this solution, with the process assigned to the second computer (because the first one doesn't have enough memory).
Example 22.30. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Best computed solution for 'solver1' successfully retrieved from container 'optacloud-kiecontainer-1'</msg>
<result class="solver-instance">
<container-id>optacloud-kiecontainer-1</container-id>
<solver-id>solver1</solver-id>
<solver-config-file>opta/optacloud/cloudSolverConfig.solver.xml</solver-config-file>
<status>SOLVING</status>
<score class="org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore">
<hardScore>0</hardScore>
<softScore>-3000</softScore>
</score>
<best-solution class="opta.optacloud.CloudSolution">
<score class="org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore" reference="../../score" />
<computerList>
<opta.optacloud.Computer>
<cpuPower>10</cpuPower>
<memory>4</memory>
<networkBandwidth>100</networkBandwidth>
<cost>1000</cost>
</opta.optacloud.Computer>
<opta.optacloud.Computer>
<cpuPower>20</cpuPower>
<memory>8</memory>
<networkBandwidth>100</networkBandwidth>
<cost>3000</cost>
</opta.optacloud.Computer>
</computerList>
<processList>
<opta.optacloud.Process>
<requiredCpuPower>1</requiredCpuPower>
<requiredMemory>7</requiredMemory>
<requiredNetworkBandwidth>1</requiredNetworkBandwidth>
<computer reference="../../../computerList/opta.optacloud.Computer[2]" />
</opta.optacloud.Process>
</processList>
</best-solution>
</result>
</org.kie.server.api.model.ServiceResponse>
Disposes the solver {solverId} in container {containerId}. If it hasn't terminated yet, it terminates it first.
Example 22.31. Example Server Response (XStream)
<org.kie.server.api.model.ServiceResponse>
<type>SUCCESS</type>
<msg>Solver 'solver1' successfully disposed from container 'optacloud-kiecontainer-1'</msg>
</org.kie.server.api.model.ServiceResponse>
Example 22.32. Example Server Response (JSON)
{
"type" : "SUCCESS",
"msg" : "Solver 'solver1' successfully disposed from container 'optacloud-kiecontainer-1'"
}
When you have a managed Kie Server setup, you need to manage Kie Servers and Containers via a Controller. Generally this is done through the Workbench UI, but you may also use the Controller REST API.
The Controller base URL is provided by the kie-wb WAR deployment and is the same as the org.kie.server.controller property (for example: http://localhost:8080/kie-wb/rest/controller).
All requests require basic HTTP authentication for the role kie-server, as indicated earlier.
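For example, assuming the Controller is deployed under the kie-wb context shown above, the server template list could be fetched with curl. The management path segment and the credentials are assumptions based on a typical setup; verify them against your installation:

```shell
# List all Kie Server templates known to the Controller.
# Host, context path and credentials are assumptions; adjust to your deployment.
curl -u admin:admin \
     -H "Accept: application/xml" \
     "http://localhost:8080/kie-wb/rest/controller/management/servertemplates"
```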
Returns a list of Kie Server templates
Example 22.33. Example Server Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<server-template-list>
<server-template>
<server-id>demo</server-id>
<server-name>demo</server-name>
<container-specs>
<container-id>hr</container-id>
<container-name>hr</container-name>
<server-template-key>
<server-id>demo</server-id>
</server-template-key>
<release-id>
<artifact-id>HR</artifact-id>
<group-id>org.jbpm</group-id>
<version>1.0</version>
</release-id>
<configs>
<entry>
<key>RULE</key>
<value xsi:type="ruleConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<scanner-status>STOPPED</scanner-status>
</value>
</entry>
<entry>
<key>PROCESS</key>
<value xsi:type="processConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<strategy>Singleton</strategy>
<kie-base-name></kie-base-name>
<kie-session-name></kie-session-name>
<merge-mode>Merge Collections</merge-mode>
</value>
</entry>
</configs>
<status>STARTED</status>
</container-specs>
<configs/>
<server-instances>
<server-instance-id>demo@localhost:8230</server-instance-id>
<server-name>demo@localhost:8230</server-name>
<server-template-id>demo</server-template-id>
<server-url>http://localhost:8230/kie-server/services/rest/server</server-url>
</server-instances>
<capabilities>RULE</capabilities>
<capabilities>PROCESS</capabilities>
<capabilities>PLANNING</capabilities>
</server-template>
</server-template-list>
Returns a Kie Server template
Example 22.34. Example Server Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<server-template-details>
<server-id>product-demo</server-id>
<server-name>product-demo</server-name>
<container-specs>
<container-id>hr</container-id>
<container-name>hr</container-name>
<server-template-key>
<server-id>demo</server-id>
</server-template-key>
<release-id>
<artifact-id>HR</artifact-id>
<group-id>org.jbpm</group-id>
<version>1.0</version>
</release-id>
<configs>
<entry>
<key>RULE</key>
<value xsi:type="ruleConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<scanner-status>STOPPED</scanner-status>
</value>
</entry>
<entry>
<key>PROCESS</key>
<value xsi:type="processConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<strategy>Singleton</strategy>
<kie-base-name></kie-base-name>
<kie-session-name></kie-session-name>
<merge-mode>Merge Collections</merge-mode>
</value>
</entry>
</configs>
<status>STARTED</status>
</container-specs>
<configs/>
<server-instances>
<server-instance-id>demo@localhost:8230</server-instance-id>
<server-name>demo@localhost:8230</server-name>
<server-template-id>demo</server-template-id>
<server-url>http://localhost:8230/kie-server/services/rest/server</server-url>
</server-instances>
<capabilities>RULE</capabilities>
<capabilities>PROCESS</capabilities>
<capabilities>PLANNING</capabilities>
</server-template-details>
Creates a new Kie Server template with the specified id
Example 22.35. Example Request to create a new Kie Server template
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<server-template-details>
<server-id>test-demo</server-id>
<server-name>test-demo</server-name>
<configs/>
<capabilities>RULE</capabilities>
<capabilities>PROCESS</capabilities>
<capabilities>PLANNING</capabilities>
</server-template-details>
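Assuming the request body above is saved to a file, the template could be created with a PUT request along these lines. The endpoint path and credentials are assumptions; check them against your Controller deployment:

```shell
# Create a new Kie Server template with id "test-demo" by PUT-ing the XML
# body shown above (saved here as server-template.xml).
# Path and credentials are assumptions; adjust to your deployment.
curl -u admin:admin -X PUT \
     -H "Content-Type: application/xml" \
     -d @server-template.xml \
     "http://localhost:8080/kie-wb/rest/controller/management/servertemplates/test-demo"
```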
Returns all containers on the given server
Example 22.36. Example Server Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<container-spec-list>
<container-spec>
<container-id>hr</container-id>
<container-name>hr</container-name>
<server-template-key>
<server-id>demo</server-id>
</server-template-key>
<release-id>
<artifact-id>HR</artifact-id>
<group-id>org.jbpm</group-id>
<version>1.0</version>
</release-id>
<configs>
<entry>
<key>RULE</key>
<value xsi:type="ruleConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<scanner-status>STOPPED</scanner-status>
</value>
</entry>
<entry>
<key>PROCESS</key>
<value xsi:type="processConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<strategy>Singleton</strategy>
<kie-base-name></kie-base-name>
<kie-session-name></kie-session-name>
<merge-mode>Merge Collections</merge-mode>
</value>
</entry>
</configs>
<status>STARTED</status>
</container-spec>
</container-spec-list>
Returns the Container information, including its release id and configuration
Example 22.37. Example Server Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<container-spec-details>
<container-id>hr</container-id>
<container-name>hr</container-name>
<server-template-key>
<server-id>demo</server-id>
</server-template-key>
<release-id>
<artifact-id>HR</artifact-id>
<group-id>org.jbpm</group-id>
<version>1.0</version>
</release-id>
<configs>
<entry>
<key>PROCESS</key>
<value xsi:type="processConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<strategy>Singleton</strategy>
<kie-base-name></kie-base-name>
<kie-session-name></kie-session-name>
<merge-mode>Merge Collections</merge-mode>
</value>
</entry>
<entry>
<key>RULE</key>
<value xsi:type="ruleConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<scanner-status>STOPPED</scanner-status>
</value>
</entry>
</configs>
<status>STARTED</status>
</container-spec-details>
Creates a new Container with the specified containerId, the given release id and, optionally, a configuration
Example 22.38. Example Server Request
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<container-spec-details>
<container-id>hr</container-id>
<container-name>hr</container-name>
<server-template-key>
<server-id>demo</server-id>
</server-template-key>
<release-id>
<artifact-id>HR</artifact-id>
<group-id>org.jbpm</group-id>
<version>1.0</version>
</release-id>
<configs>
<entry>
<key>PROCESS</key>
<value xsi:type="processConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<strategy>Singleton</strategy>
<kie-base-name></kie-base-name>
<kie-session-name></kie-session-name>
<merge-mode>Merge Collections</merge-mode>
</value>
</entry>
<entry>
<key>RULE</key>
<value xsi:type="ruleConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<scanner-status>STOPPED</scanner-status>
</value>
</entry>
</configs>
<status>STARTED</status>
</container-spec-details>
Disposes a Container with the specified containerId
Starts the Container. No request body required
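As a sketch, starting (and, symmetrically, stopping) a container could then look like the following. The status path segments are assumptions inferred from the container lifecycle described above; verify them against your Controller version:

```shell
# Start and stop the "hr" container on the "demo" server template.
# Paths and credentials are assumptions; adjust to your deployment.
curl -u admin:admin -X POST \
     "http://localhost:8080/kie-wb/rest/controller/management/servertemplates/demo/containers/hr/status/started"
curl -u admin:admin -X POST \
     "http://localhost:8080/kie-wb/rest/controller/management/servertemplates/demo/containers/hr/status/stopped"
```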
The Kie Server provides a Java client API that wraps the REST or JMS requests sent to the server. In this section we will explore some of the possibilities of this API.
If you are a Maven user, make sure you have at least the following dependencies in the project's pom.xml:
Example 22.39. Maven Dependencies
<dependency>
<groupId>org.kie.server</groupId>
<artifactId>kie-server-client</artifactId>
<version>${kie.api.version}</version>
</dependency>
<!-- Logging -->
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.1.2</version>
</dependency>
<!-- Drools Commands -->
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-compiler</artifactId>
<scope>runtime</scope>
<version>${kie.api.version}</version>
</dependency>
The kie.api.version property depends on the Kie Server version you are using. For jBPM 6.3, for example, you can use 6.3.1-SNAPSHOT.
The client requires a configuration object where you set most of the server communication aspects, such as the protocol (REST or JMS), the credentials and the payload format (XStream, JAXB and JSON are the supported formats at the moment). The first thing to do is create your configuration and then create the KieServicesClient object, the entry point for starting the server communication. See the source below, where we use a REST client configuration:
Example 22.40. Client Configuration Example
import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
public class DecisionServerTest {
private static final String URL = "http://localhost:8080/kie-server/services/rest/server";
private static final String USER = "kieserver";
private static final String PASSWORD = "kieserver1!";
private static final MarshallingFormat FORMAT = MarshallingFormat.JSON;
private KieServicesConfiguration conf;
private KieServicesClient kieServicesClient;
public void initialize() {
conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD);
conf.setMarshallingFormat(FORMAT);
kieServicesClient = KieServicesFactory.newKieServicesClient(conf);
}
}
In version 6.5 the KIE Server Client JMS integration has been enhanced with the possibility to use various interaction patterns. Currently available are:
request reply (the default) - makes the JMS integration synchronous - it blocks the client until it gets the response back - not suited for transactional JMS use cases
fire and forget - makes the integration one way only, suitable for notification-like integration with the KIE Server - a perfect fit for transactional JMS delivery - the message is delivered to the KIE Server only if the transaction that the KIE Server client was invoked in is committed successfully
async with callback - does not block the client after sending a message to the KIE Server and receives the response asynchronously - can be integrated with transactional JMS delivery
Response handlers can either be set globally, when the KieServicesConfiguration is created, or changed at runtime on individual client instances (like RuleServicesClient, ProcessServicesClient, etc.).
While the 'fire and forget' and 'request reply' patterns do not require any additional configuration, 'async with callback' does, and the main thing is the callback itself. The KIE Server Client comes with one out of the box, BlockingResponseCallback, which provides basic support backed internally by a blocking queue. The size of the queue is configurable and thus allows receiving multiple messages, though the intention of this callback is to receive one message at a time, so it's one message (request) and then one response per client interaction.
The KIE Server Client is not thread safe when switching response handlers, meaning a change of the handler will affect all threads using the same client instance. So in case of dynamic changes of the handler, it's recommended to use separate client instances. A good approach is to maintain a set of clients, each with a dedicated response handler, and then use these clients depending on which handler is required.
For example, client 1 will use fire and forget while client 2 will use request reply; client 1 can then be used to start processes and client 2 to query for user tasks.
Users can provide their own callbacks by implementing org.kie.server.client.jms.ResponseCallback interface.
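A minimal custom callback might look like the sketch below. It assumes the ResponseCallback interface exposes an onResponse notification plus blocking get accessors, mirroring what BlockingResponseCallback provides; check the interface shipped with your kie-server-client version before relying on these exact signatures:

```java
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.jms.ResponseCallback;

// Sketch of a callback that simply logs each response as it arrives.
// Method signatures are assumptions mirroring BlockingResponseCallback;
// verify against the ResponseCallback interface in your client version.
public class LoggingResponseCallback implements ResponseCallback {

    @Override
    public void onResponse(String selector, ServiceResponse response) {
        // Invoked by the JMS layer when a response message is received.
        System.out.println("Received response for " + selector + ": " + response.getMsg());
    }

    @Override
    public ServiceResponse get() {
        // This callback does not store responses, so there is nothing to return.
        return null;
    }

    @Override
    public <T> T get(Class<T> type) {
        return null;
    }
}
```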
InitialContext context = ...;
Queue requestQueue = (Queue) context.lookup("jms/queue/KIE.SERVER.REQUEST");
Queue responseQueue = (Queue) context.lookup("jms/queue/KIE.SERVER.RESPONSE");
ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
KieServicesConfiguration jmsConfiguration = KieServicesFactory.newJMSConfiguration( connectionFactory, requestQueue, responseQueue, "user", "password");
// here you set response handler globally
jmsConfiguration.setResponseHandler(new FireAndForgetResponseHandler());
Alternatively, and perhaps more commonly, the handler can be set on individual clients before they are used:
ProcessServiceClient processClient = client.getServicesClient(ProcessServicesClient.class);
// change response handler for processClient others are not affected
processClient.setResponseHandler(new FireAndForgetResponseHandler());
All the service responses are represented by the object org.kie.server.api.model.ServiceResponse<T>, where T is the type of the payload. It has the following attributes:
String msg: the response message;
org.kie.server.api.model.ServiceResponse.ResponseType type: the response type enum, which can be SUCCESS or FAILURE;
T result: the actual payload of the response, the requested object.
Notice that the same object is returned whether you are using REST or JMS; in other words, it is protocol agnostic.
The Decision Server initially supported only rules execution; starting with version 6.3 it also supports business process execution. To know exactly what your server supports, you can list the server capabilities by accessing the object org.kie.server.api.model.KieServerInfo using the client:
Example 22.41. Listing Server capabilities
public void listCapabilities() {
KieServerInfo serverInfo = kieServicesClient.getServerInfo().getResult();
System.out.print("Server capabilities:");
for(String capability: serverInfo.getCapabilities()) {
System.out.print(" " + capability);
}
System.out.println();
}
If the server supports rules and processes, the following should be printed when you run the code above:
Server capabilities: BRM KieServer BPM
If you want to publish a kjar to receive requests, you must publish it in a container. The container is represented in the client by the object org.kie.server.api.model.KieContainerResource, and a list of resources is org.kie.server.api.model.KieContainerResourceList. Here's an example of how to print a list of containers:
Example 22.42. Listing Kie Containers
public void listContainers() {
KieContainerResourceList containersList = kieServicesClient.listContainers().getResult();
List<KieContainerResource> kieContainers = containersList.getContainers();
System.out.println("Available containers: ");
for (KieContainerResource container : kieContainers) {
System.out.println("\t" + container.getContainerId() + " (" + container.getReleaseId() + ")");
}
}
It is also possible to list the containers based on a specific ReleaseId (and its individual parts) or container status:
Example 22.43. Listing Kie Containers with custom filter
public void listContainersWithFilter() {
// the following filter will match only containers with ReleaseId "org.example:container:1.0.0.Final" and status FAILED
KieContainerResourceFilter filter = new KieContainerResourceFilter.Builder()
.releaseId("org.example", "container", "1.0.0.Final")
.status(KieContainerStatus.FAILED)
.build();
KieContainerResourceList containersList = kieServicesClient.listContainers(filter).getResult();
List<KieContainerResource> kieContainers = containersList.getContainers();
System.out.println("Available containers: ");
for (KieContainerResource container : kieContainers) {
System.out.println("\t" + container.getContainerId() + " (" + container.getReleaseId() + ")");
}
}
You can use the client to dispose and create containers. If you dispose a container, a ServiceResponse will be returned with a Void payload (no payload), and if you create one, the KieContainerResource object itself will be returned in the response. Sample code:
Example 22.44. Disposing and creating containers
public void disposeAndCreateContainer() {
System.out.println("== Disposing and creating containers ==");
List<KieContainerResource> kieContainers = kieServicesClient.listContainers().getResult().getContainers();
if (kieContainers.size() == 0) {
System.out.println("No containers available...");
return;
}
KieContainerResource container = kieContainers.get(0);
String containerId = container.getContainerId();
ServiceResponse<Void> responseDispose = kieServicesClient.disposeContainer(containerId);
if (responseDispose.getType() == ResponseType.FAILURE) {
System.out.println("Error disposing " + containerId + ". Message: ");
System.out.println(responseDispose.getMsg());
return;
}
System.out.println("Success Disposing container " + containerId);
System.out.println("Trying to recreate the container...");
ServiceResponse<KieContainerResource> createResponse = kieServicesClient.createContainer(containerId, container);
if(createResponse.getType() == ResponseType.FAILURE) {
System.out.println("Error creating " + containerId + ". Message: ");
System.out.println(createResponse.getMsg());
return;
}
System.out.println("Container recreated with success!");
}
The KieServicesClient is also the entry point for other clients that perform specific operations, such as sending BRMS commands and managing processes. Currently, from the KieServicesClient you have access to the following services, available in the org.kie.server.client package:
JobServicesClient: this client allows you to schedule, cancel, re-queue and get job requests;
ProcessServicesClient: allows you to start, signal and abort processes, and to complete and abort work items, among other capabilities;
QueryServicesClient: the powerful query client allows you to query processes, process nodes and process variables;
RuleServicesClient: the simple but powerful rules client can be used to send commands to the server to perform rules-related operations (insert objects in the working memory, fire rules, get globals, ...);
UserTaskServicesClient: finally, the user task client allows you to perform all operations with user tasks (start, claim, cancel, etc.) and query tasks by certain fields (process instance id, user, etc.).
For further information about these interfaces check github: https://github.com/droolsjbpm/droolsjbpm-integration/tree/master/kie-server-parent/kie-server-remote/kie-server-client/src/main/java/org/kie/server/client
You can access any of these clients using the method getServicesClient in the KieServicesClient class. For example: RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
To build commands for the server you must use the class org.kie.api.command.KieCommands, which can be obtained using org.kie.api.KieServices.get().getCommands(). The command to be sent must be a BatchExecutionCommand or a single command (if a single command is sent, the server wraps it into a BatchExecutionCommand):
Example 22.45. Sending commands to a container
public void executeCommands() {
System.out.println("== Sending commands to the server ==");
RuleServicesClient rulesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
KieCommands commandsFactory = KieServices.Factory.get().getCommands();
Command<?> insert = commandsFactory.newInsert("Some String OBJ");
Command<?> fireAllRules = commandsFactory.newFireAllRules();
Command<?> batchCommand = commandsFactory.newBatchExecution(Arrays.asList(insert, fireAllRules));
ServiceResponse<String> executeResponse = rulesClient.executeCommands("hello", batchCommand);
if(executeResponse.getType() == ResponseType.SUCCESS) {
System.out.println("Commands executed with success! Response: ");
System.out.println(executeResponse.getResult());
}
else {
System.out.println("Error executing rules. Message: ");
System.out.println(executeResponse.getMsg());
}
}
The result in this case is a String with the command execution result. In our case it will print the following:
== Sending commands to the server ==
Commands executed with success! Response:
{
"results" : [ ],
"facts" : [ ]
}
Note: you must add the org.drools:drools-compiler dependency for this part to work.
To list process definitions we use the QueryServicesClient. Its methods typically use pagination, which means that besides the query you are making, you must also provide the current page and the number of results per page. The code below queries for process definitions from the given container, starting on page 0 and listing up to 1000 results, in other words, the first 1000 results.
Example 22.46. Listing Business Processes Definitions Example
public void listProcesses() {
System.out.println("== Listing Business Processes ==");
QueryServicesClient queryClient = kieServicesClient.getServicesClient(QueryServicesClient.class);
List<ProcessDefinition> findProcessesByContainerId = queryClient.findProcessesByContainerId("rewards", 0, 1000);
for (ProcessDefinition def : findProcessesByContainerId) {
System.out.println(def.getName() + " - " + def.getId() + " v" + def.getVersion());
}
}
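Because results are paginated, retrieving more than one page means looping until a page comes back smaller than the page size. A sketch of that pattern, reusing the findProcessesByContainerId method from the example above (the "rewards" container id is just the sample id used throughout this section):

```java
// Page through all process definitions in the "rewards" container.
// The loop stops when a page returns fewer results than the page size,
// which means the last page has been reached.
QueryServicesClient queryClient = kieServicesClient.getServicesClient(QueryServicesClient.class);
int page = 0;
int pageSize = 100;
List<ProcessDefinition> pageResults;
do {
    pageResults = queryClient.findProcessesByContainerId("rewards", page, pageSize);
    for (ProcessDefinition def : pageResults) {
        System.out.println(def.getName() + " - " + def.getId() + " v" + def.getVersion());
    }
    page++;
} while (pageResults.size() == pageSize);
```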