
Chapter 22. KIE Execution Server

22.1. Overview
22.1.1. Glossary
22.2. Installing the KIE Server
22.2.1. Bootstrap switches
22.2.2. Installation details for different containers
22.3. Kie Server setup
22.3.1. Managed Kie Server
22.3.2. Unmanaged KIE Execution Server
22.4. Creating a Kie Container
22.5. Managing Containers
22.5.1. Starting a Container
22.5.2. Stopping and Deleting a Container
22.5.3. Updating a Container
22.6. Kie Server REST API
22.6.1. [GET] /
22.6.2. [POST] /
22.6.3. [GET] /containers
22.6.4. [GET] /containers/{id}
22.6.5. [PUT] /containers/{id}
22.6.6. [DELETE] /containers/{id}
22.6.7. [POST] /containers/instances/{id}
22.6.8. [GET] /containers/{id}/release-id
22.6.9. [POST] /containers/{id}/release-id
22.6.10. [GET] /containers/{id}/scanner
22.6.11. [POST] /containers/{id}/scanner
22.6.12. Native REST client for Execution Server
22.7. OptaPlanner REST API
22.7.1. [GET] /containers/{containerId}/solvers
22.7.2. [PUT] /containers/{containerId}/solvers/{solverId}
22.7.3. [GET] /containers/{containerId}/solvers/{solverId}
22.7.4. [POST] /containers/{containerId}/solvers/{solverId}
22.7.5. [GET] /containers/{containerId}/solvers/{solverId}/bestsolution
22.7.6. [DELETE] /containers/{containerId}/solvers/{solverId}
22.8. Controller REST API
22.8.1. [GET] /management/servers
22.8.2. [GET] /management/server/{id}
22.8.3. [PUT] /management/server/{id}
22.8.4. [DELETE] /management/server/{id}
22.8.5. [GET] /management/server/{id}/containers
22.8.6. [GET] /management/server/{id}/containers/{containerId}
22.8.7. [PUT] /management/server/{id}/containers/{containerId}
22.8.8. [DELETE] /management/server/{id}/containers/{containerId}
22.8.9. [POST] /management/server/{id}/containers/{containerId}/status/started
22.8.10. [POST] /management/server/{id}/containers/{containerId}/status/stopped
22.9. Kie Server Java Client API
22.9.1. Maven Configuration
22.9.2. Client Configuration
22.9.3. Server Response
22.9.4. Server Capabilities
22.9.5. Kie Containers
22.9.6. Managing Containers
22.9.7. Available Clients for the Decision Server
22.9.8. Sending commands to the server
22.9.9. Listing available business processes

The Kie Server is a modular, standalone server component that can be used to instantiate and execute rules and processes. It exposes this functionality via REST, JMS and Java interfaces to client applications. It also provides seamless integration with the Kie Workbench.

At its core, the Kie Server is a configurable web application packaged as a WAR file. Distributions are available for pure web containers (like Tomcat) and for JEE 6 and JEE 7 containers.

Most capabilities of the Kie Server are configurable and based on the concept of extensions. Each extension can be enabled or disabled independently, allowing users to configure the server to their needs.

The current version of the Kie Server ships with two default extensions: BRM (Business Rule Management, i.e. rules support, provided by Drools) and BPM (Business Process Management, i.e. processes support, provided by jBPM).

Both extensions are enabled by default, but each can be disabled by setting the corresponding property (see the configuration chapter for details).

This server was designed to have a low footprint, with minimal memory consumption, and therefore to be easily deployable in a cloud environment. Each instance of this server can open and instantiate multiple Kie Containers, which allows you to execute multiple services in parallel.

The KIE Server is distributed as a web application archive (WAR) file. The WAR file comes in three different packagings: webc (for pure web containers such as Tomcat), ee6 (for JEE 6 containers such as JBoss EAP 6.x) and ee7 (for JEE 7 containers such as WildFly).

To install the KIE Execution Server and verify it is running, complete the following steps:

The Kie Server accepts a number of bootstrap switches (system properties) to configure the behaviour of the server. The following is a table of all the supported switches.

Table 22.1. Kie Server bootstrap switches

Property | Value | Description | Required
-------- | ----- | ----------- | --------
org.drools.server.ext.disabled | boolean (default is "false") | If true, disables the BRM support (i.e. rules support). | No
org.jbpm.server.ext.disabled | boolean (default is "false") | If true, disables the BPM support (i.e. processes support). | No
org.kie.server.id | string | An arbitrary ID to be assigned to this server. If a remote controller is configured, this is the ID under which the server will connect to the controller to fetch the kie container configurations. | No. If not provided, an ID is automatically generated.
org.kie.server.user | string (default is "kieserver") | User name used to connect with the kieserver from the controller, required when running in managed mode. | No
org.kie.server.pwd | string (default is "kieserver1!") | Password used to connect with the kieserver from the controller, required when running in managed mode. | No
org.kie.server.controller | comma separated list of urls | List of urls to controller REST endpoint. E.g.: http://localhost:8080/kie-wb/rest/controller | Yes when using a controller
org.kie.server.controller.user | string (default is "kieserver") | Username used to connect to the controller REST api. | Yes when using a controller
org.kie.server.controller.pwd | string (default is "kieserver1!") | Password used to connect to the controller REST api. | Yes when using a controller
org.kie.server.location | URL location of kie server instance | The URL used by the controller to call back on this server. E.g.: http://localhost:8230/kie-server/services/rest/server | Yes when using a controller
org.kie.server.domain | string | JAAS LoginContext domain that shall be used to authenticate users when using JMS. | No
org.kie.server.bypass.auth.user | boolean (default is "false") | Allows to bypass the authenticated user for task related operations, e.g. queries. | No
org.kie.server.repo | valid file system path (default is ".") | Location on local file system where kie server state files will be stored. | No
org.kie.server.persistence.ds | string | Datasource JNDI name. | Yes when BPM support enabled
org.kie.server.persistence.tm | string | Transaction manager platform for Hibernate properties set. | Yes when BPM support enabled
org.kie.server.persistence.dialect | string | Hibernate dialect to be used. | Yes when BPM support enabled
org.jbpm.ht.callback | string | One of supported callbacks for Task Service (default jaas). | No
org.jbpm.ht.custom.callback | string | Custom implementation of UserGroupCallback in case org.jbpm.ht.callback was set to 'custom'. | No
kie.maven.settings.custom | valid file system path | Location of custom settings.xml for maven configuration. | No
org.kie.executor.interval | integer (default is 3) | Number of time units between polls by executor. | No
org.kie.executor.pool.size | integer (default is 1) | Number of threads in the pool for async work. | No
org.kie.executor.retry.count | integer (default is 3) | Number of retries to handle errors. | No
org.kie.executor.timeunit | TimeUnit (default is "SECONDS") | TimeUnit representing interval. | No
org.kie.executor.disabled | boolean (default is "false") | Disables executor completely. | No
kie.server.jms.queues.response | string (default is "queue/KIE.SERVER.RESPONSE") | JNDI name of response queue for JMS. | No
org.kie.server.controller.connect | long (default is 10000) | Waiting time in milliseconds between repeated attempts to connect kie server to controller when kie server starts up. | No
org.drools.server.filter.classes | boolean (default is "false") | If true, accept only classes which are annotated with @org.kie.api.remote.Remotable or @javax.xml.bind.annotation.XmlRootElement as extra JAXB classes. | No


A managed instance is one that requires a controller to be available in order to properly start up the Kie Server instance.

A Controller is a component responsible for keeping and managing a Kie Server Configuration in a centralized way. Each controller can manage multiple configurations at once, and there can be multiple controllers in the environment. Managed KIE Servers can be configured with a list of controllers, but will connect to only one at a time.

At startup, if a Kie Server is configured with a list of controllers, it will successively try to connect to each of them until a connection is successfully established with one of them. If for any reason a connection can't be established, the server will not start, even if there is local storage available with a configuration. This happens by design in order to ensure consistency. For instance, if the Kie Server was down and the configuration has changed, this restriction guarantees that it will run with an up to date configuration or not at all.

The configuration sets, among other things:

The Controller, besides providing configuration management, is also responsible for overall management of Kie Servers. It provides a REST api that is divided into two parts:

The controller deals only with the Kie Server configuration (or definition, to put it differently). It does not handle any runtime components of KIE Execution Server instances; they are always considered remote to the controller. The controller is responsible for persisting the configuration so that it survives restarts of the controller itself. It should also manage synchronization in case multiple controllers are configured, to keep all definitions up to date on all instances of the controller.

By default, the controller is shipped with Kie Workbench and provides a fully featured management interface (both REST api and UI). It uses the underlying Git repository as its persistent store, and thus when Git repositories are clustered (using Apache ZooKeeper and Apache Helix) controller synchronization is covered as well.

The diagram above illustrates the single controller (workbench) setup with multiple Kie Server instances managed by it.

The diagram below illustrates the clustered setup where there are multiple instances of controller synchronized over Zookeeper.

In the above diagram we can see that the Kie Server instances are capable of connecting to any of the controllers, but they will connect to only one. Each instance will attempt to connect to a controller as long as it can reach one. Once a connection is established with one of the controllers, it will skip the others.

Once your Execution Server is registered, you can start adding Kie Containers to it.

Kie Containers are self-contained environments that have been provisioned to hold instances of your packaged and deployed rules.

Containers within the Execution Server can be started, stopped and updated from within KIE Workbench.

You can update deployed KieContainers without restarting the Execution Server. This is useful in cases where the Business Rules change, creating new versions of packages to be provisioned.

You can have multiple versions of the same package provisioned and deployed, each to a different KieContainer.

To update deployments in a KieContainer dynamically, click on the icon next to the Container. This will open up the Container Info screen. An example of this screen is shown here:

The Container Info screen is a useful tool because it not only allows you to see the endpoint for this KieContainer, but it also allows you to either manually or automatically refresh the provision if an update is available. The update can be manual or automatic:

Manual Update: To manually update a KieContainer, enter the new Version number in the Version box and click the Update button. You can, of course, also update the Group Id or the Artifact Id, if these have changed. Once updated, the Execution Server updates the container and shows you the resolved GAV attributes at the bottom of the screen in the Resolved Release Id section.

Automatic Update: If you want a deployed Container to always have the latest version of your deployment without manually editing it, you will need to set the Version property to the value of LATEST and start a Scanner. This will ensure that the deployed provision always contains the latest version. The Scanner can be run just once on demand by clicking the Scan Now button, or it can be started in the background with scans happening at a specified interval (in seconds). You can also set this value to LATEST when you are first creating this deployment. The Resolved Release Id in this case will show you the actual, latest version number.
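
The same manual and automatic updates can also be triggered programmatically, through the REST endpoints listed later in this chapter or through the Java client API. The following is a minimal sketch using the Java client; the container id "my-container", the GAV coordinates and the 10-second scanner interval are illustrative assumptions, and the creation of the KieServicesClient is shown in the Java Client API section below.

import org.kie.server.api.model.KieScannerResource;
import org.kie.server.api.model.KieScannerStatus;
import org.kie.server.api.model.ReleaseId;
import org.kie.server.client.KieServicesClient;

public class ContainerUpdateSketch {

    public static void update(KieServicesClient client) {
        // manual update: point the container to a new version of the kjar
        client.updateReleaseId("my-container", new ReleaseId("org.example", "my-kjar", "1.1.0"));

        // automatic update: start a scanner that polls the Maven repository every 10 seconds
        client.updateScanner("my-container", new KieScannerResource(KieScannerStatus.STARTED, 10000L));
    }
}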

The Execution Server supports the following commands via the REST API.

Please note the following before using these commands:

Commands outlined in this section can be sent with any REST client, whether it is curl, RESTEasy or a .NET based application. However, when sending requests from a Java based application, users can utilize the out of the box native client for remote communication with the Execution Server. This client is part of the org.kie:kie-server-client project. It does not create the XML requests itself, therefore it is necessary to generate them beforehand, for example using the Drools API.


Once the request is generated it can be sent using kie-server-client as follows:
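
The snippet below is a minimal sketch of such a call. The server URL, the credentials, the container id "my-container", the kie session lookup name and the org.example.Message fact class are illustrative assumptions, and the variant of RuleServicesClient.executeCommands that accepts a raw String payload is assumed here.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.RuleServicesClient;

public class NativeClientSketch {

    public static void main(String[] args) {
        // XML command generated beforehand (for example with the Drools commands API)
        String xmlPayload =
                "<batch-execution lookup=\"defaultKieSession\">\n" +
                "  <insert out-identifier=\"message\">\n" +
                "    <org.example.Message>\n" +
                "      <text>Hello</text>\n" +
                "    </org.example.Message>\n" +
                "  </insert>\n" +
                "  <fire-all-rules/>\n" +
                "</batch-execution>";

        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server", "kieserver", "kieserver1!");
        config.setMarshallingFormat(MarshallingFormat.XML);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        RuleServicesClient ruleClient = client.getServicesClient(RuleServicesClient.class);
        ServiceResponse<String> response = ruleClient.executeCommands("my-container", xmlPayload);
        System.out.println(response.getMsg());
        System.out.println(response.getResult());
    }
}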


When the Planner capability is enabled, the Kie Server supports the following additional REST APIs. As usual, all these APIs are also available through JMS and the Java client API. Please also note:

The example requests and responses used below presume that a kie container is built using the optacloud example of the OptaPlanner Workbench, by calling a PUT on /services/rest/server/containers/optacloud-kiecontainer-1 with this content:

<kie-container container-id="optacloud-kiecontainer-1">
  <release-id>
    <group-id>opta</group-id>
    <artifact-id>optacloud</artifact-id> 
    <version>1.0.0</version> 
  </release-id> 
</kie-container>
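
The same container can also be created from a Java application with the kie-server-client API described later in this chapter. The sketch below assumes a KieServicesClient has already been configured as shown in the Java Client API section.

import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.ReleaseId;
import org.kie.server.client.KieServicesClient;

public class CreateOptacloudContainer {

    public static void create(KieServicesClient client) {
        // equivalent of the PUT request above
        ReleaseId releaseId = new ReleaseId("opta", "optacloud", "1.0.0");
        client.createContainer("optacloud-kiecontainer-1",
                new KieContainerResource("optacloud-kiecontainer-1", releaseId));
    }
}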

Updates the state of the {solverId} in container {containerId}, most notably to start solving. The request's body is a marshalled SolverInstance and can either request the solver to solve a planning problem or to stop solving one. The SolverInstance state determines which operation should be executed and can be set to one of two possible values:

For example, to solve an optacloud problem with 2 computers and 1 process:


Notice that the response does not contain the best solution yet, because solving can take seconds, minutes, hours or days and this would time out the HTTP request:


Instead, solving happens asynchronously and you need to call the bestsolution URL to get the best solution.

Returns the best solution found at the time the request is made. If the solver hasn't terminated yet (so the status field is still SOLVING), it will return the best solution found up to then, but later calls can return a better solution.

For example, the problem submitted above would return this solution, with the process assigned to the second computer (because the first one doesn't have enough memory).


When you have a managed Kie Server setup, you need to manage Kie Servers and Containers via the Controller. Generally this is done through the workbench UI, but you may also use the Controller REST API.
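
For instance, the list of registered Kie Server templates can be retrieved with a plain HTTP GET on the /management/servers endpoint. The sketch below assumes the workbench (controller) runs at http://localhost:8080/kie-wb and that "controllerUser"/"controllerPwd" is a user allowed to call the controller REST api; adjust these to your installation.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ControllerRestSketch {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/kie-wb/rest/controller/management/servers");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        String credentials = Base64.getEncoder()
                .encodeToString("controllerUser:controllerPwd".getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + credentials);
        connection.setRequestProperty("Accept", "application/xml");

        // print the marshalled list of Kie Server templates known to the controller
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}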

The Kie Server has a great Java API to wrap REST or JMS requests to be sent to the server. In this section we will explore some of the possibilities of this API.

The client requires a configuration object where you set most of the server communication aspects, such as the protocol (REST or JMS), the credentials and the payload format (XStream, JAXB and JSON are the supported formats at the moment). The first thing to do is create your configuration and then create the KieServicesClient object, the entry point for starting the server communication. See the source below, where we use a REST client configuration:
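
This is a minimal sketch of such a configuration; the server URL and the "kieserver"/"kieserver1!" credentials match the defaults listed in the bootstrap switches table and are assumptions to adjust to your installation.

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class RestClientConfigurationSketch {

    public static KieServicesClient createClient() {
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:8080/kie-server/services/rest/server",
                "kieserver", "kieserver1!");
        // XStream, JAXB and JSON are the supported marshalling formats
        config.setMarshallingFormat(MarshallingFormat.JSON);
        return KieServicesFactory.newKieServicesClient(config);
    }
}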


In version 6.5 the KIE Server Client JMS integration has been enhanced with the possibility to use various interaction patterns. Currently available are: fire and forget, async with callback, and request reply.

Response handlers can either be set globally, when the KieServicesConfiguration is created, or changed at runtime on individual client instances (like RuleServicesClient, ProcessServicesClient, etc.).

While the 'fire and forget' and 'request reply' patterns do not require any additional configuration, 'async with callback' does, and the main thing is actually the callback. The KIE Server Client comes with one out of the box - `BlockingResponseCallback` - that provides basic support backed internally by a blocking queue. The size of the queue is configurable and thus allows receiving multiple messages, though the intention of this callback is that it receives one message at a time - so it's one message (request) and then one response per client interaction.

Example

Client 1 will use fire and forget while client 2 will use request reply, so client 1 can be used to start processes and client 2 can be used to query for user tasks (see the sketch below).
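
A minimal sketch of that setup, assuming the same JNDI names, queues and credentials as in the JMS configuration snippet below (all of them placeholders to adjust to your environment):

import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.InitialContext;

import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;
import org.kie.server.client.UserTaskServicesClient;
import org.kie.server.client.jms.FireAndForgetResponseHandler;
import org.kie.server.client.jms.RequestReplyResponseHandler;

public class JmsInteractionPatternsSketch {

    public static void main(String[] args) throws Exception {
        InitialContext context = new InitialContext();
        Queue requestQueue = (Queue) context.lookup("jms/queue/KIE.SERVER.REQUEST");
        Queue responseQueue = (Queue) context.lookup("jms/queue/KIE.SERVER.RESPONSE");
        ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");

        // client 1: fire and forget - used to start processes without waiting for responses
        KieServicesConfiguration config1 = KieServicesFactory.newJMSConfiguration(
                connectionFactory, requestQueue, responseQueue, "user", "password");
        config1.setResponseHandler(new FireAndForgetResponseHandler());
        KieServicesClient client1 = KieServicesFactory.newKieServicesClient(config1);
        ProcessServicesClient processClient = client1.getServicesClient(ProcessServicesClient.class);

        // client 2: request reply - used to query user tasks and wait for the results
        KieServicesConfiguration config2 = KieServicesFactory.newJMSConfiguration(
                connectionFactory, requestQueue, responseQueue, "user", "password");
        config2.setResponseHandler(new RequestReplyResponseHandler());
        KieServicesClient client2 = KieServicesFactory.newKieServicesClient(config2);
        UserTaskServicesClient taskClient = client2.getServicesClient(UserTaskServicesClient.class);
    }
}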

Users can provide their own callbacks by implementing the org.kie.server.client.jms.ResponseCallback interface.

InitialContext context = ...;
Queue requestQueue = (Queue) context.lookup("jms/queue/KIE.SERVER.REQUEST");
Queue responseQueue = (Queue) context.lookup("jms/queue/KIE.SERVER.RESPONSE");
ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
KieServicesConfiguration jmsConfiguration = KieServicesFactory.newJMSConfiguration( connectionFactory, requestQueue, responseQueue, "user", "password");
// here you set response handler globally
jmsConfiguration.setResponseHandler(new FireAndForgetResponseHandler());

Alternatively, and probably more commonly, the response handler can be set on individual clients before they are used:

ProcessServiceClient processClient = client.getServicesClient(ProcessServicesClient.class);
// change response handler for processClient; other clients are not affected
processClient.setResponseHandler(new FireAndForgetResponseHandler());

If you want to publish a kjar to receive requests, you must publish it in a container. The container is represented in the client by the object org.kie.server.api.model.KieContainerResource, and a list of resources is org.kie.server.api.model.KieContainerResourceList. Here's an example of how to print a list of containers:
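
A minimal sketch of such a listing, assuming a KieServicesClient created as shown earlier:

import java.util.List;

import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.KieContainerResourceList;
import org.kie.server.client.KieServicesClient;

public class ListContainersSketch {

    public static void printContainers(KieServicesClient kieServicesClient) {
        KieContainerResourceList containersList = kieServicesClient.listContainers().getResult();
        List<KieContainerResource> kieContainers = containersList.getContainers();
        System.out.println("Available containers: ");
        for (KieContainerResource container : kieContainers) {
            System.out.println("\t" + container.getContainerId() + " (" + container.getReleaseId() + ")");
        }
    }
}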


It is also possible to list the containers based on specific ReleaseId (and its individual parts) or container status:
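
For example, a filter can be built and passed to listContainers. The sketch below assumes the KieContainerResourceFilter builder available in recent kie-server-api versions; the GAV coordinates and the status value are illustrative.

import org.kie.server.api.model.KieContainerResourceFilter;
import org.kie.server.api.model.KieContainerResourceList;
import org.kie.server.api.model.KieContainerStatus;
import org.kie.server.client.KieServicesClient;

public class FilteredContainerListingSketch {

    public static KieContainerResourceList list(KieServicesClient kieServicesClient) {
        KieContainerResourceFilter filter = new KieContainerResourceFilter.Builder()
                .releaseId("org.example", "my-kjar", "1.0.0")   // filter by ReleaseId (or its parts)
                .status(KieContainerStatus.STARTED)             // filter by container status
                .build();
        return kieServicesClient.listContainers(filter).getResult();
    }
}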


To build commands to the server you must use the class org.kie.api.command.KieCommands, which can be obtained using org.kie.api.KieServices.get().getCommands(). The command to be sent must be a BatchExecutionCommand or a single command (if a single command is sent, the server wraps it into a BatchExecutionCommand):
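
A minimal sketch of building and sending such a batch command; the container id "my-container", the kie session lookup name and the Person fact class are illustrative assumptions.

import java.util.Arrays;

import org.kie.api.KieServices;
import org.kie.api.command.BatchExecutionCommand;
import org.kie.api.command.Command;
import org.kie.api.command.KieCommands;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.RuleServicesClient;

public class SendCommandsSketch {

    public static void sendCommands(KieServicesClient kieServicesClient) {
        KieCommands commandsFactory = KieServices.Factory.get().getCommands();

        Command<?> insert = commandsFactory.newInsert(new Person("john", 25));
        Command<?> fireAllRules = commandsFactory.newFireAllRules();
        BatchExecutionCommand batchCommand =
                commandsFactory.newBatchExecution(Arrays.asList(insert, fireAllRules), "defaultKieSession");

        RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
        ServiceResponse<String> response = ruleClient.executeCommands("my-container", batchCommand);
        System.out.println(response.getMsg());
    }

    // Person is assumed to mirror a model class contained in the deployed kjar
    public static class Person {
        private final String name;
        private final int age;

        public Person(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public String getName() { return name; }
        public int getAge() { return age; }
    }
}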