Three Ways to Create Docker Images for Java


Introduction

Long before Dockerfiles, Java developers worked with single deployment units (WARs, JARs, EARs, etc.).  As you likely know by now, it is best practice to build microservices, deploying a small number of deployment units per JVM.  Instead of one giant, monolithic application, you build your application so that each service can run on its own.

This is where Docker comes in!  If you wish to upgrade a service, rather than redeploying your jar/war/ear to a new instance of an application server, you can just build a new Docker image with the upgraded deployment unit.

In this post, I will review 3 different ways to create Docker images for Java applications.

Prerequisites

  • Docker is installed
  • Maven is installed (for example #1)
  • You have a simple Spring Boot application (I used the Spring Initializr project generator with a Spring Web dependency; see the curl sketch below for one way to generate it from the command line)
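If you prefer the command line to the Spring Initializr web UI, a request like the following should produce an equivalent project (the parameters follow the start.spring.io API; baseDir=demo is just an assumption so the zip unpacks into the demo folder used below):

$ curl https://start.spring.io/starter.zip -d dependencies=web \
    -d type=maven-project -d baseDir=demo -o demo.zip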

1.  Package-only Build

In a package-only build, we will let Maven (or your build tool of choice) control the build process.

Unzip the Spring Initializr project you generated as part of the prerequisites.  In the parent folder of your Spring Boot application, create a Dockerfile.  In a terminal, run:

$ unzip demo.zip
$ cd demo
$ nano Dockerfile

Paste the following and save:

# we will use openjdk 8 with alpine as it is a very small linux distro
FROM openjdk:8-jre-alpine3.9

# copy the packaged jar file into our docker image
COPY target/demo-0.0.1-SNAPSHOT.jar /demo.jar

# set the startup command to execute the jar
CMD ["java", "-jar", "/demo.jar"]
  • The FROM instruction specifies the parent (base) image for our new image
  • The COPY instruction copies the local jar previously built by Maven into our image
  • The CMD instruction tells Docker which command to run when a container is started from the image

Now, let’s package our application into a .jar using Maven:

$ mvn clean package

…and then build the Docker image.  The period at the end of the following command sets the build context to the current directory, which is where Docker looks for the Dockerfile.  We name the image using the username/image-name convention, although this is not mandatory.  The -t flag lets us supply a name and tag for the image, in this case the tag 1.0-SNAPSHOT.  If you don't provide a tag, Docker will default to :latest.

$ docker build -t anna/docker-package-only-build-demo:1.0-SNAPSHOT .

To run the container from the image we just created:

$ docker run -d -p 8080:8080 anna/docker-package-only-build-demo:1.0-SNAPSHOT

-d will run the container in the background (detached mode), and -p will map our local port 8080 to the container’s port of 8080.

Navigate to localhost:8080, and you should see the following:

[Screenshot: the response from the Spring Boot application at localhost:8080]

Once you are satisfied with your testing, stop the container.

$ docker stop <container_id>
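If you don't have the container ID handy, you can list the running containers first; the ID appears in the first column of the output:

$ docker ps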

Pros to this approach:

  • Results in a light-weight Docker image
  • Does not require Maven to be included in the Docker image
  • Does not require any of our application’s dependencies to be packaged into the image
  • Because Maven runs on the host, you still benefit from your local Maven cache when the application changes, unlike methods 2 and 3, which we will discuss later

Cons to this approach:

  • Requires Maven to be installed on the host machine
  • The Docker build will fail if the Maven build fails/is not executed beforehand — this becomes a problem when you want to integrate with services that automatically “just build” using the present Dockerfile

2.  Normal Docker Build

In a “normal” Docker build, Docker will control the build process.

Modify the previous Dockerfile to contain the following:

# select parent image
FROM maven:3.6.3-jdk-8

# copy the source tree and the pom.xml to our new container
COPY ./ ./

# package our application code
RUN mvn clean package

# set the startup command to execute the jar
CMD ["java", "-jar", "target/demo-0.0.1-SNAPSHOT.jar"]

Now, let’s build a new image as we did in Step 1:

$ docker build -t anna/docker-normal-build-demo:1.0-SNAPSHOT .

And run the container:

$ docker run -d -p 8080:8080 anna/docker-normal-build-demo:1.0-SNAPSHOT

Again, to test your container, navigate to localhost:8080.  Stop the container once you are finished testing.

Pros to this approach:

  • Docker controls the build process, therefore this method does not require the build tool to be installed on the host machine beforehand
  • Integrates well with services that automatically “just build” using the present Dockerfile

Cons to this approach:

  • Results in the largest Docker image of our 3 methods
  • This build method packages not only our app, but also all of its dependencies and the build tool itself, none of which are necessary to run the executable
  • If the application layer is rebuilt, the mvn package command will force all Maven dependencies to be pulled from the remote repository all over again (you lose the local Maven cache)

3.  Multi-stage Build (The ideal way)

With multi-stage Docker builds, we use multiple FROM statements, one for each build stage.  Each FROM statement begins a new stage with its own base image; only the artifacts we explicitly copy forward are kept, and everything else from the previous stage is discarded.

Modify your Dockerfile to contain the following:

# the first stage of our build will use a maven 3.6.1 parent image
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_BUILD

# build inside a named working directory so the second stage can find the artifact
WORKDIR /docker-multi-stage-build-demo

# copy the pom and src code to the container
COPY ./ ./

# package our application code
RUN mvn clean package

# the second stage of our build will use open jdk 8 on alpine 3.9
FROM openjdk:8-jre-alpine3.9

# copy only the artifacts we need from the first stage and discard the rest
COPY --from=MAVEN_BUILD /docker-multi-stage-build-demo/target/demo-0.0.1-SNAPSHOT.jar /demo.jar

# set the startup command to execute the jar
CMD ["java", "-jar", "/demo.jar"]

Build the image:

$ docker build -t anna/docker-multi-stage-build-demo:1.0-SNAPSHOT .

And then run the container:

$ docker run -d -p 8080:8080 anna/docker-multi-stage-build-demo:1.0-SNAPSHOT

Pros to this approach:

  • Results in a light-weight Docker image
  • Does not require the build tool to be installed on the host machine beforehand (Docker controls the build process)
  • Integrates well with services that automatically “just build” using the present Dockerfile
  • Only artifacts we need are copied from one stage to the next (i.e., our application’s dependencies are not packaged into the final image as in the previous method)
  • Create as many build stages as you need
  • Stop at any particular stage of an image build using the --target flag, e.g.
    docker build --target MAVEN_BUILD -t anna/docker-multi-stage-build-demo:1.0-SNAPSHOT .

Cons to this approach:

  • If the application layer is rebuilt, the mvn package command will force all Maven dependencies to be pulled from the remote repository all over again; you lose the local Maven cache (see the sketch below for one common mitigation)
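One common mitigation, which this post does not use, is to copy the pom into its own layer and pre-fetch dependencies before copying the source, so Docker can reuse the cached dependency layer until pom.xml changes.  A rough sketch of what the first stage might look like (mvn dependency:go-offline does not catch absolutely everything, but it covers most cases):

# first stage: resolve Maven dependencies in their own cacheable layer
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_BUILD
WORKDIR /docker-multi-stage-build-demo

# copy only the pom and resolve dependencies; this layer is reused until pom.xml changes
COPY pom.xml .
RUN mvn dependency:go-offline

# copy the source and package; only these layers are rebuilt on code changes
COPY src ./src
RUN mvn clean package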

Verification:  How big are the images?

In a terminal, run:

docker image ls

You should see something like the following:

[Screenshot: docker image ls output showing the three image sizes]

As you can see, the multi-stage build resulted in our smallest image, whereas the normal build resulted in our largest image.  This should be expected since the normal build included our application code, all of its dependencies, and our build tooling, and our multi-stage build contained only what we needed.

Conclusion

Of the three Docker image build methods we covered, Multi-stage builds are the way to go.  You get the best of both worlds when packaging your application code — Docker controls the build, but you extract only the artifacts you need.  This becomes particularly important when storing containers on the cloud.

  • You spend less time building and transferring your containers on the cloud, as your image is much smaller
  • Cost — the smaller your image, the cheaper it will be to store
  • Smaller attack surface: removing unneeded dependencies from our image makes it less prone to attacks

Thanks for following along, and I hope this helps!  You can find the source code for all three examples in my github, here.

How to build a Hashicorp Vault server using Packer and Terraform on DigitalOcean

Introduction

In this tutorial, I will guide you step-by-step on how to create an image running a pre-configured Hashicorp Vault server, using Packer to create the image, and then using Terraform to deploy the image to a DigitalOcean droplet.

DigitalOcean

DigitalOcean is an Infrastructure as a Service (IaaS) provider. It offers developers an easy-to-use, scalable solution for spinning up Virtual Machines (VMs), referred to as “droplets.” Droplets run on virtual hardware, and can be monitored, secured, and backed up using the DigitalOcean interface.

Hashicorp Vault

Vault is a tool for managing secrets. A secret is exactly what it sounds like: data we want to hide from the outside world, such as a password, a certificate, or an API key. Vault manages the storage, generation, and encryption of secrets, among other functionality.

Hashicorp Packer

Packer is an “Infrastructure as Code” automation tool used for creating machine images. It supports building DigitalOcean images out of the box.

Hashicorp Terraform

Terraform is another “Infrastructure as Code” tool, used for the provisioning and management of system infrastructure.

Prerequisites

  • A DigitalOcean account and a personal access token (API key) with write access
  • (Optional) the doctl command-line client, used later to look up image IDs
  • wget and unzip available on your local machine

Step 1 — Install Packer

The recommended installation method for Packer is to install via a precompiled binary.

Navigate to the /tmp directory and download the binary appropriate for your system:

$ cd /tmp
$ wget https://link_to_your_desired_binary

Unzip it to /usr/local/packer:

$ mkdir /usr/local/packer
$ unzip your_download.zip -d /usr/local/packer

Now, make Packer available on your PATH by moving the binary into /usr/local/bin:

$ mv /usr/local/packer/packer /usr/local/bin

Verify that Packer was properly installed:

$  packer -version

The terminal should output the version you downloaded.

Step 2 — Install Terraform

The recommended installation method for Terraform is to install via a precompiled binary.

Navigate to the /tmp directory and download the binary appropriate for your system:

$ cd /tmp
$ wget https://link_to_your_desired_binary

Unzip it to /usr/local/terraform:

$ mkdir /usr/local/terraform
$ unzip your_download.zip -d /usr/local/terraform

Now, make Terraform available on your PATH by moving the binary into /usr/local/bin:

$ mv /usr/local/terraform/terraform /usr/local/bin

Verify that Terraform was properly installed:

$  terraform -version

The terminal should output the version you downloaded.

Step 3 — Create an Installation Script for Vault

Remember, one of our primary goals is to set up an image with a pre-configured Vault server. For now, let’s just worry about the installation pieces. Later, we will use Packer to handle the configuration for us.

Let’s start by creating a central location to store all of our scripts and configuration files.

$ mkdir -p digitalocean-packer-terraform/packer/vault_configs

The -p flag ensures that the digitalocean-packer-terraform folder is created first, if it doesn’t already exist.

In the vault_configs directory, create a script called vault_install.sh.

Add the following to your install script. Packer will later pick up this file and run it on the temporary droplet before capturing the image.

#!/usr/bin/env bash

# update and install unzip
sudo apt-get update
sudo apt-get install unzip -y

# download and install vault
cd /tmp
wget https://releases.hashicorp.com/vault/1.3.1/vault_1.3.1_linux_amd64.zip
unzip vault_*.zip
sudo cp vault /usr/local/bin 

# enable autocompletion for vault flags, subcommands, and arguments
vault -autocomplete-install
complete -C /usr/local/bin/vault vault

# prevent memory from being swapped to disk without running the process as root
sudo setcap cap_ipc_lock=+ep /usr/local/bin/vault

# create the vault.d directory in /etc
sudo mkdir --parents /etc/vault.d

# move the config files to their appropriate locations
sudo mv /home/vault/vault.hcl /etc/vault.d/vault.hcl
sudo mv /home/vault/vault.service /etc/systemd/system/vault.service

# create a system user 
sudo useradd --system --home /etc/vault.d --shell /bin/false vault

# give ownership of everything in the vault.d directory to the vault user
sudo chown --recursive vault:vault /etc/vault.d

# give the vault user read/write access to the vault.hcl file
sudo chmod 640 /etc/vault.d/vault.hcl

# enable and start the vault server
sudo systemctl enable vault
sudo systemctl start vault

Step 4 — Configure the Vault Server

Now, we need to configure the Vault server. Let’s create an HCL configuration file in the vault_configs directory. This file will be picked up locally by Packer and will later be used when automating the creation of our image.

With your favorite text editor, create the file:

$ vim vault.hcl

Add the following to your configuration file.

listener "tcp" {
 address     = "127.0.0.1:8200"
 tls_disable = 1
}

storage "file" {
 path = "/home/vault/data"
}
  • listener defines where Vault will listen for API requests
  • storage defines the physical back-end Vault will use

There are other options available, but for this use case these are the two primary configurations we need (both are required). For more configuration options, see here.

Step 5 — Configure Vault to Run as a Service

If we want Vault to automatically start the server on boot, we will need to configure a .service file. This will later live in /etc/systemd/system on the remote machine.

Create a file called vault.service in the vault_configs directory.

$ vim vault.service

Add the following:

[Unit]
Description="HashiCorp Vault - A tool for managing secrets"
Documentation=https://www.vaultproject.io/docs/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/vault.d/vault.hcl
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
User=vault
Group=vault
ProtectSystem=full
ProtectHome=read-only
PrivateTmp=yes
PrivateDevices=yes
SecureBits=keep-caps
AmbientCapabilities=CAP_IPC_LOCK
Capabilities=CAP_IPC_LOCK+ep
CapabilityBoundingSet=CAP_SYSLOG CAP_IPC_LOCK
NoNewPrivileges=yes
ExecStart=/usr/local/bin/vault server -config=/etc/vault.d/vault.hcl
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGINT
Restart=on-failure
RestartSec=5
TimeoutStopSec=30
StartLimitInterval=60
StartLimitIntervalSec=60
StartLimitBurst=3
LimitNOFILE=65536
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target

I won’t go into the details of every parameter, but you can find more information on their definitions here.

Step 6 — Define User Variables for Packer

Now that we have what we need for Vault to be installed properly, let’s move on to the Packer components. Create a variables.json file in the packer directory. This file will store our global variables, which will later be referenced in another JSON file.

In the packer directory, create a file called variables.json

$ cd ..
$ vim variables.json

Define the following. Copy and paste your DigitalOcean API key accordingly (leave the quotation marks).

{
    "do_api_token": "INSERT_YOUR_DIGITAL_OCEAN_TOKEN_HERE"
}

Step 7 — Create the Template

Packer provides various builders to create a machine and generate an image from it. A builder is a Packer component that takes a JSON template file as input and outputs the desired image, based upon how we configure the template.

Let’s build out our template for the image.

$ vim template.json

To begin building our template, add the following:

{
  "variables": {
      "do_api_token": ""
  },

The variables section tells Packer what variables we have defined. Here, we have a do_api_token variable defined with a default value of an empty string.

Now, we will move on to the builders section of the template. There are various configuration options available for the digitalocean builder, which Packer provides by default. At a bare minimum, you must define: api_token, region, image, and size.

After the variables section, add:

"builders": [
    {
      "droplet_name": "vault",
      "snapshot_name": "vault",
      "type": "digitalocean",
      "ssh_username": "root",
      "api_token": "{{ user `do_api_token` }}",
      "image": "ubuntu-18-04-x64",
      "region": "nyc1",
      "size": "1gb"
    }],
  • Note that you did not have to hard-code your API token for api_token. The {{ user `do_api_token` }} syntax (note the back-ticks) references the do_api_token user variable we defined in variables.json.
  • region, image, and size come from the slugs returned by the DigitalOcean API. (It can be helpful to pipe your curl requests through | json_pp to pretty-print the JSON output, which makes it easier to read in the terminal; you can also use a tool like Postman if you prefer. See the sketch below.)
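For instance, a quick way to list the available size slugs with curl (this assumes your token is exported as DO_API_TOKEN; the /v2/regions and /v2/images endpoints work the same way):

$ curl -s -H "Authorization: Bearer $DO_API_TOKEN" \
    "https://api.digitalocean.com/v2/sizes" | json_pp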

Finally, we will move on to the provisioners section of our template. Provisioners can perform multiple tasks, such as creating users and installing packages. For our example, the provisioners section will run a series of steps (in the order you define in the template).

Add the following to template.json, after the builders section:

"provisioners": [
  {
    "type": "shell",
    "inline": [
      "mkdir -p /home/vault/data"
    ]
  },
    { "type": "file",
      "source": "vault_configs/vault.service",
      "destination": "/home/vault/vault.service"
    },
    { "type": "file",
      "source": "vault_configs/vault.hcl",
      "destination": "/home/vault/vault.hcl"
    },
    {
      "type": "shell",
      "script": "vault_configs/vault_install.sh"
    }
  ]
}
  • The first provisioner runs an inline shell command that creates the directory /home/vault/data on the remote machine.
  • The second provisioner transfers the vault.service file you created in Step 5 to /home/vault/vault.service on the remote machine.
  • The third provisioner transfers the vault.hcl file you created in Step 4 to /home/vault/vault.hcl on the remote machine.
  • Finally, the last provisioner tells Packer to run the vault_install.sh script you created in Step 3.

You have now finished creating the template.

Step 8 — Create the Image using Packer

Now that we have all the necessary components in place, we can run Packer to create our image/snapshot.

From the packer directory, run the following command:

$ packer validate -var-file=variables.json template.json

validate will check our JSON files for syntax and configuration errors.

You should see the following:

Template validated successfully.

We can now build the image. Run the following:

$ packer build -var-file=variables.json template.json

The -var-file flag tells Packer to set our user variables (do_api_token) to the specified values in the variables.json file.

It takes roughly a minute to create the snapshot. You should see quite a bit of terminal output, ending with something like the following:

[Screenshot: packer build output, ending with the snapshot ID]

Save the image ID printed in the console; you will need it later with Terraform.

You should now be able to see the image we created in your DigitalOcean Dashboard. On the left panel, select Images. You should see the “vault” image we just created, under the Snapshots tab:

[Screenshot: the vault image listed under the Snapshots tab in the DigitalOcean dashboard]

Step 9 — Create Variable Definitions for Terraform

Similar to the variables.json file we created for Packer, we are going to define variables for Terraform to use, and later pass them via command-line.

Create a folder outside of the packer directory called terraform, and create a file called variables.tfvars:

$ mkdir -p ../terraform
$ cd ../terraform
$ vim variables.tfvars

Add the following:

do_api_token = "INSERT_YOUR_DIGITAL_OCEAN_TOKEN_HERE"
image_id = "INSERT_THE_IMAGE_ID_CREATED_BY_PACKER"

If you lost the image_id, you can use the doctl client to obtain it by running:

$ doctl compute image list-user

Step 10 — Create an Entry Point for Terraform

Now, we can create an entry point for Terraform to use to start up a Droplet from the image.

Terraform uses its own configuration language in a declarative fashion. Each .tf file describes the intended goal (in our case, to install a Vault server), rather than the steps required to reach said goal.

In the terraform directory, create a file called main.tf:

$ vim main.tf

First, we will declare our input variables, similar to what we did for Packer in Steps 6 and 7.

Add the following to main.tf:

# Set the variable value in *.tfvars file
variable "do_api_token" {}
variable "image_id" {}

Next, we will configure the digitalocean provider.

After the variables section, add the following.

# Configure the DigitalOcean Provider to use our DO token
provider "digitalocean" {
  token = "${var.do_api_token}"
}

The provider section tells Terraform how to interact with the DigitalOcean API, and what resources to expose (we will define the resources section next).

You will notice that we don’t actually declare a value for token; it references the do_api_token variable we declared above, whose value is supplied by variables.tfvars. The digitalocean provider accepts additional arguments besides token, but unlike token, they are optional.

Lastly, we will define a resource for Terraform to use. A resource describes one or more infrastructure objects we want to manage. The digitalocean provider has many resources available. For this example, I am using the digitalocean_droplet resource.

Define the resource after the provider section as follows:

# Create a new droplet from an existing image
resource "digitalocean_droplet" "web" {
  image  = "${var.image_id}"
  name   = "vault"
  region = "nyc1"
  size   = "1gb"
}

image, name, region, and size are all required arguments for the digitalocean_droplet resource. There are others available as well; check the documentation for more information.

Notice again that we are referencing the image_id variable, whose value is supplied by the variables.tfvars file.

Step 11 — Create a Droplet from our Packer Image

Now, we can use Terraform to create a Droplet from the image we created. In the terraform directory, run the following:

$ terraform init

This command initializes the working directory containing our Terraform configuration files and downloads the required provider plugins. After running the init command, you should see the following:

[Screenshot: terraform init output]

Next, let’s get a preview of the changes Terraform is going to make using the plan command.

$ terraform plan -var-file=variables.tfvars

Your terminal should output something like the following:

[Screenshot: terraform plan output]

Check that your resource looks as expected.

Finally, let’s tell Terraform to apply our changes:

$ terraform apply -var-file=variables.tfvars

Terraform will ask you to confirm that you want to apply the changes. Type yes:

[Screenshot: terraform apply confirmation prompt]

Your terminal will output the following:

[Screenshot: terraform apply output]

You can now navigate to the DigitalOcean Control Panel and see that the Droplet was created. Select Droplets in the navigation pane:

[Screenshot: the new vault Droplet in the DigitalOcean Control Panel]

Excellent. We now have a Virtual Machine with a pre-configured Vault server! You should receive an e-mail from DigitalOcean with the login information.

Step 12 — Verify that the Vault Server has Started

Using the login information that was e-mailed to you for your new Droplet, ssh into the remote machine.

$ ssh root@your.ip.address

You will be prompted with the following message:

Are you sure you want to continue connecting (yes/no)?

(Type yes).

Log in with the password that was e-mailed to you. You will be prompted to change your password.

Enter the following command to check the status of our pre-configured server:

$ systemctl status vault

You should see the following:

[Screenshot: systemctl status vault output]
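As an extra, optional check, you can query the server with the Vault CLI itself. A quick sketch: because our listener config disables TLS, VAULT_ADDR needs to point at plain HTTP, and a freshly provisioned server will report itself as not initialized and sealed.

$ export VAULT_ADDR='http://127.0.0.1:8200'
$ vault status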

Conclusion

Now that you know how to set up a pre-configured Vault server using Packer and Terraform, try building upon this example to add configurations of your own. For example, try configuring Vault to use Consul as a backend and use Terraform to apply your changes. The possibilities are endless, but most of all, have fun!

The completed project can be found on my github here.

Discovery Workshops, the Behavior-Driven Development way

There are several types of discovery workshops, depending on your development approach.  In this post, I will go over workshops geared towards BDD specifically.

What is a discovery workshop?

According to the Cucumber documentation, a discovery workshop is “a conversation where technical and business people collaborate to explore, discover and agree as much as they can about the desired behavior for a User Story.”

How do you conduct a discovery workshop?

There are several discovery workshop models; these are just a few:

  • Example Mapping
    • Uses a pack of four separately colored index cards to map rules (a summary of constraints/acceptance criteria the team has agreed upon) to examples (illustrations/cases of the acceptance criteria)

[Diagram: an Example Mapping card layout]

  • OOPSI Mapping (Outcomes, Outputs, Processes, Scenarios, Inputs)
    • Similar to Example Mapping, uses separately colored Post-it Notes to map shared processes/relationships between outputs and scenarios.

[Diagram: an OOPSI map]

  • Feature Mapping
    • Also uses separately colored Post-it Notes.  The team picks a story from the backlog, identifies the actors involved, breaks the story down into tasks, and maps those tasks to specific examples.

[Diagram: a Feature Mapping example]

When should you hold a discovery workshop?

As late as possible before development on a new User Story begins, in order to prevent details from being lost.  Conducting a discovery workshop as late as possible also gives the team enough leeway to shift their plans should new details surface.

Who should attend?

A good rule of thumb is 3-6 people, but at a bare minimum your Three Amigos should be present: a product owner, a developer, and a tester.  Your product owner will identify the problem the team should be trying to solve, your developer will address how to build a solution around said problem, and your tester will address any edge cases that could arise.  In my experience, it’s also helpful to have a UX person handy during these meetings; they are very close to the end-user and can often point out flaws in a feature’s requirements or acceptance criteria.

How long does a discovery workshop take?

This depends on which model you use.  For Example Mapping, a discovery workshop should ideally last only about 25-30 minutes per story (when I was at IBM we even used to place a timer in the middle of the table to ensure no time was wasted).  If you need more time than this, it’s likely the story is far too large and should be broken down, or some of the specifics are missing.  In the latter case, set the story aside; the product owner needs to do more research.

Why bother?

The purpose of a discovery workshop is to give all stakeholders, both technical and non-technical, a shared understanding of the work at hand.  Doing so encourages cross-functional collaboration, increases feedback, and surfaces missing details or incorrect assumptions.

Conclusion

A discovery workshop is a very important piece of the BDD lifecycle, among other agile development approaches.  Without it, you are sure to run into miscommunication, and unknowns will go undiscovered, which can really hamper your project’s success.

Maven profiles with Cucumber JVM

Cucumber profiles (as found in Cucumber-Ruby’s cucumber.yml) are not available in Cucumber JVM.  Oftentimes, though, you may want to run specific scenarios based on the environment you want to test.  For instance, you may have a web application that you are testing in two separate environments.  You might also want to test it using different web drivers.


Feature: Log in to web application

@dev
Scenario:  Dev environment
Given dev URL

@qa
Scenario: Test environment
Given test URL

While we cannot achieve this behavior using Cucumber JVM alone, we can use Maven or Gradle profiles.

For a background on Maven’s Build Profiles, see here.  For this tutorial, I used the archetype generated from Cucumber’s 10 Minute Tutorial.

Remember, our goal is to run different scenarios based on their tags using profiles, and to have the ability to switch web drivers.  I added four profiles to my pom, as shown here: one for each environment (dev and qa), and one for each web driver (chrome and firefox).

You will also need to add the maven-surefire-plugin and set the value of cucumber.options.  This is where we rely on --tags to run the specific set of tests we’re looking for (see the sketch below).
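Here is a minimal sketch of one such profile entry (inside the pom’s <profiles> section); the profile id and tag come from this example, while the cucumber.options system property assumes a pre-5.x Cucumber JVM (newer versions use cucumber.filter.tags instead).  The dev, chrome, and firefox profiles follow the same pattern, setting a different tag or a browser property.

 <profile>
   <id>qa</id>
   <build>
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
           <systemPropertyVariables>
             <!-- picked up by Cucumber JVM at runtime -->
             <cucumber.options>--tags @qa</cucumber.options>
           </systemPropertyVariables>
         </configuration>
       </plugin>
     </plugins>
   </build>
 </profile>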

Now, you can run tests using your desired profile(s), for instance:

$ mvn test -P qa,firefox

For this example, you will see the following:

[Screenshot: Maven test output showing only the @qa scenario running]

Notice how only one of our two tests is run.  Since we are using the qa profile, only scenarios tagged with @qa are run.

To debug your configuration and check what active profiles are running, you can use the Maven Help Plugin.
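For example, the help plugin’s active-profiles goal lists which profiles are in effect for a given build:

$ mvn help:active-profiles -P qa,firefox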

For more information on how to achieve this using Gradle, see the Migrating Maven profiles and properties section of the Gradle docs.

Thank you for following along, and I hope this helps!

 

Connecting a JMS Client to a remote unmanaged (standalone) Kie Server

A reader commented that he was having issues connecting to a remote (standalone) instance of the Kie Server (thank you, Theodoros, for inspiring this write up).

Upon reproducing/researching the issue he mentioned, I noticed others seemed to have suffered from similar error messages.

Here is the process I took to connect a JMS client to a remote standalone Kie Server instance.

Disclaimer: There are several permutations of processes you could take to get this working.  I am in no way claiming this setup is the best and/or recommended, as I still need to verify this with my teammates, so please keep that in mind.

Server Side

  1.  Download Wildfly 10.1.0.Final.
    wget http://download.jboss.org/wildfly/10.1.0.Final/wildfly-10.1.0.Final.zip
  2. Add a user/password, so we can authenticate on the client side.
    (-a adds the user to the ApplicationRealm)

    ./add-user.sh -a -u 'admin' -p 'admin' -ro 'admin,guest,kie-server'

    (the default is the ManagementRealm)

    ./add-user.sh -u 'admin' -p 'admin' -ro 'admin,guest,kie-server'
  3. Obtain a Kie Server war (if you wish, build from source).  For Wildfly, you should use the kie-server-*-SNAPSHOT-ee7.war.  For simplicity, rename this to kie-server.war and then copy it to Wildfly’s deployment directory.
    cp kie-server.war ~/wildfly-10.1.0.Final/standalone/deployments/
  4.  Start the server.
    ./standalone.sh --server-config=standalone-full.xml

Client Side

  1. Clone the sample client and run it (mvn clean install).  We expect the client to establish a connection to the remote Kie Server by retrieving the Kie Server’s RemoteConnectionFactory via its initial context.
    package org.anbaker;

    import static org.junit.Assert.assertEquals;

    import java.net.URL;
    import java.util.Properties;

    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    import org.junit.Test;
    import org.kie.server.api.model.KieServerStateInfo;
    import org.kie.server.api.model.ServiceResponse;
    import org.kie.server.api.model.ServiceResponse.ResponseType;
    import org.kie.server.client.KieServicesClient;
    import org.kie.server.client.KieServicesConfiguration;
    import org.kie.server.client.KieServicesFactory;

    public class TestRemoteAPI {

        private static final String REMOTING_URL = "http://the.server.ip.address:8080/kie-server/services/rest/server";

        private static final String USER = "admin";
        private static final String PASSWORD = "admin";

        private static final String CONNECTION_FACTORY = "jms/RemoteConnectionFactory";
        private static final String REQUEST_QUEUE_JNDI = "jms/queue/KIE.SERVER.REQUEST";
        private static final String RESPONSE_QUEUE_JNDI = "jms/queue/KIE.SERVER.RESPONSE";

        private KieServicesConfiguration conf;
        private KieServicesClient kieServicesClient;

        private static InitialContext getRemoteInitialContext(URL url, String user, String password) {

            Properties initialProps = new Properties();

            initialProps.setProperty(InitialContext.INITIAL_CONTEXT_FACTORY,
                    "org.jboss.naming.remote.client.InitialContextFactory");
            initialProps.setProperty(InitialContext.PROVIDER_URL, "remote://" + url.getHost() + ":4447");
            initialProps.setProperty(InitialContext.SECURITY_PRINCIPAL, user);
            initialProps.setProperty(InitialContext.SECURITY_CREDENTIALS, password);

            for (Object keyObj : initialProps.keySet()) {
                String key = (String) keyObj;
                System.setProperty(key, (String) initialProps.get(key));
            }
            try {
                return new InitialContext(initialProps);
            } catch (NamingException e) {
                throw new IllegalStateException("Could not construct initial context for JMS", e);
            }
        }

        @Test
        public void testClientConnectionToRemoteKieServer() throws Exception {

            InitialContext context = getRemoteInitialContext(new URL(REMOTING_URL), USER, PASSWORD);

            Queue requestQueue = (Queue) context.lookup(REQUEST_QUEUE_JNDI);
            Queue responseQueue = (Queue) context.lookup(RESPONSE_QUEUE_JNDI);
            ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup(CONNECTION_FACTORY);

            conf = KieServicesFactory.newJMSConfiguration(connectionFactory, requestQueue, responseQueue, USER, PASSWORD);

            // you will need to add any custom classes needed for your kjars
            // Set<Class<?>> extraClassList = new HashSet<Class<?>>();
            // extraClassList.add(YourCustomClass.class);
            // conf.addExtraClasses(extraClassList);

            kieServicesClient = KieServicesFactory.newKieServicesClient(conf);

            ServiceResponse<KieServerStateInfo> response = kieServicesClient.getServerState();

            assertEquals(ResponseType.SUCCESS, response.getType());
        }
    }
    

    (You will need these dependencies in your pom, FYI)

       <dependency>
         <groupId>org.kie.server</groupId>
         <artifactId>kie-server-client</artifactId>
         <version>6.5.0.Final</version>
         <scope>provided</scope>
       </dependency>
    
       <dependency>
         <groupId>org.wildfly</groupId>
         <artifactId>wildfly-jms-client-bom</artifactId>
         <version>10.1.0.Final</version>
         <type>pom</type>
       </dependency>
  2.  Once you build the example, you’ll find that the client can’t connect to the server.  You may see an error similar to the following:
     javax.naming.NamingException: Failed to create remoting connection [Root exception is java.lang.RuntimeException: Operation failed with status WAITING]
     at org.jboss.naming.remote.client.ClientUtil.namingException(ClientUtil.java:36)
     at org.jboss.naming.remote.client.InitialContextFactory.getInitialContext(InitialContextFactory.java:117)
     at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
     at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:313)
     at javax.naming.InitialContext.init(InitialContext.java:244)
     at javax.naming.InitialContext.<init>(InitialContext.java:216)
     at TestDemo2.testJms(TestDemo2.java:37)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
     at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
     at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
     at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:498)
     at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
     at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
     at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
     at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
     at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
    Caused by: java.lang.RuntimeException: Operation failed with status WAITING
     at org.jboss.naming.remote.protocol.IoFutureHelper.get(IoFutureHelper.java:89)
     at org.jboss.naming.remote.client.cache.ConnectionCache.get(ConnectionCache.java:42)
     at org.jboss.naming.remote.client.InitialContextFactory.createConnection(InitialContextFactory.java:153)
     at org.jboss.naming.remote.client.InitialContextFactory.getOrCreateConnection(InitialContextFactory.java:126)
     at org.jboss.naming.remote.client.InitialContextFactory.getInitialContext(InitialContextFactory.java:106)
     ... 34 more
  3. To resolve this, I had to make a few changes to our standalone-full.xml.  The additions are the remoting-connector <connector> element in the remoting subsystem, the remoting <socket-binding>, and binding the public interface to the server's IP address:
       <subsystem xmlns="urn:jboss:domain:remoting:3.0">
         <endpoint/>
         <http-connector name="http-remoting-connector" connector-ref="default" security-realm="ApplicationRealm"/>
         <connector name="remoting-connector" socket-binding="remoting"/>
       </subsystem>
                                   .
                                   .
                                   .
     <interfaces>
       <interface name="management">
         <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
       </interface>
       <interface name="public">
         <inet-address value="${jboss.bind.address:127.0.0.1SERVER.IP.ADDRESS}"/>
       </interface>
       <interface name="unsecure">
         <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>
       </interface>
     </interfaces>
                                   . 
                                   . 
                                   .
      <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
        <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
        <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
        <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
        <socket-binding name="http" port="${jboss.http.port:8080}"/>
        <socket-binding name="https" port="${jboss.https.port:8443}"/>
        <socket-binding name="iiop" interface="unsecure" port="3528"/>
        <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/>
        <socket-binding name="txn-recovery-environment" port="4712"/>
        <socket-binding name="txn-status-manager" port="4713"/>
        <socket-binding name="remoting" interface="public" port="4447"/>
        <outbound-socket-binding name="mail-smtp">
          <remote-destination host="localhost" port="25"/>
        </outbound-socket-binding>
      </socket-binding-group>
  4. Restart your Wildfly instance, and you should find that the test runs successfully.

 

Hope this helps! 🙂

How to create an uber-jar out of a kjar and its dependencies

We recently had a request for the ability to bundle a kjar and its dependencies into one uber-jar, as opposed to relying solely on dependency management mechanisms.  Since a kjar is essentially just a Maven project, we can utilize the maven-shade-plugin to handle this for us.

Note that at the time of this posting, this functionality is not currently possible via the jBPM Workbench.


Let’s assume that in a separate module, you have created a model object, Person.  The module containing the Person POJO declares the following dependencies:

 <dependency>
   <groupId>io.swagger</groupId>
   <artifactId>swagger-jaxrs</artifactId>
   <version>1.5.0</version>
   <scope>provided</scope>
 </dependency> 

 <dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-junit</artifactId>
   <version>1.2.5</version>
 </dependency>

Let’s assume your kjar is contained in a separate module with a dependency on the above model class:

 <dependency>
   <groupId>org.demo</groupId>
   <artifactId>demo-model</artifactId>
   <version>1.0-SNAPSHOT</version>
 </dependency>

We can include the following in our kjar pom in the <build> section:

...
 <plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-shade-plugin</artifactId>
   <version>3.1.0</version>
   <executions>
     <execution>
       <phase>package</phase>
       <goals>
         <goal>shade</goal>
       </goals>
       <configuration>
         <transformers>
           <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
             <mainClass>org.demo.model.Person</mainClass>
           </transformer>
         </transformers>
      </configuration>
     </execution>
   </executions>
 </plugin>
 ...

The shade plugin will extract the Person class and the kjar’s other compile/runtime dependencies (including transitive ones) and bundle them alongside our kjar’s classes and resources; the ManifestResourceTransformer simply records Person as the Main-Class entry in the manifest.

If we rebuild our kjar, we should find that it includes not only the Person class, but also all of its dependencies (including the transitive ones!).

Before creating the uber-kjar:

[anbaker@localhost target]$ jar tf demo-kjar-1.0-SNAPSHOT.jar 
META-INF/
META-INF/MANIFEST.MF
META-INF/kmodule.xml
rules.drl
META-INF/maven/
META-INF/maven/org.demo/
META-INF/maven/org.demo/demo-kjar/
META-INF/maven/org.demo/demo-kjar/pom.xml
META-INF/maven/org.demo/demo-kjar/pom.properties

After creating the uber-kjar:

[anbaker@localhost target]$ jar tf demo-kjar-1.0-SNAPSHOT.jar 
META-INF/MANIFEST.MF
META-INF/
META-INF/kmodule.xml
rules.drl
META-INF/maven/
META-INF/maven/org.demo/
META-INF/maven/org.demo/demo-kjar/
META-INF/maven/org.demo/demo-kjar/pom.xml
META-INF/maven/org.demo/demo-kjar/pom.properties
org/
org/demo/
org/demo/model/
org/demo/model/Person.class
META-INF/maven/org.demo/demo-model/
META-INF/maven/org.demo/demo-model/pom.xml
META-INF/maven/org.demo/demo-model/pom.properties
cucumber/
cucumber/api/
cucumber/api/junit/
cucumber/api/junit/Cucumber.class
...

It is important to note that the swagger-jaxrs dependency will not be extracted and included in our uber-kjar, as it has a scope of provided.

The maven-shade-plugin also offers the ability to relocate classes, which might come in handy if your kjar relies on a particular dependency version that differs from what the server provides.  I found this tutorial helpful in walking through that process.
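For illustration only, a relocation entry goes inside the shade plugin’s <configuration> and looks roughly like this (the package names here are hypothetical):

 <relocations>
   <relocation>
     <!-- move a bundled dependency into a shaded package to avoid version clashes -->
     <pattern>org.some.dependency</pattern>
     <shadedPattern>org.demo.shaded.org.some.dependency</shadedPattern>
   </relocation>
 </relocations>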

The demo project (including unit tests) is located here, should you wish to see it in action.

As always, if you have any questions, please comment below.

Eclipse, go faster!

My university’s CSE program had two favorite dev tools: Eclipse and emacs (the latter later replaced with your text editor of choice :)).  You could tell me I should switch to IntelliJ, but being not that far removed from college, I’m just not ready.

Anyway, recently I found this blog to be helpful in improving Eclipse performance.  Take a look at what might apply to you given your environment.

If you’re looking to tweak HotSpot JVM options (those prefaced with -X in your eclipse.ini), the author references options specific to <= JDK 7.  Options which are specific to JDK 8 are located here.

Here is an abstraction of what we’re looking at in memory.  I’m keeping it generic since this varies by implementation/JDK (e.g., PermGen in non-heap memory before JDK 8 vs. Metaspace in JDK 8).

[Diagram: JVM memory layout (heap and non-heap)]

 

Typically, when tweaking the memory pool / heap, these are the options we worry about:

-Xmn: (n)ursery size — the part of the heap reserved for younger objects (recommended ~Xmx/2 or ~Xmx/4)
-Xms: (s)tart size — the initial size of the heap (a multiple of 1024 and greater than 1 MB; defaults to the nursery size plus the size of the old generation, the "nursing home")
-Xmx: (m)a(x) size — the maximum size of the memory pool (a multiple of 1024 and greater than 2 MB)

-Xmn1024m
-Xms1024m
-Xmx2048m

Oracle recommends setting the nursery to the max heap size divided by 2 or 4, as this is where most garbage collection occurs.  If the nursery is too small, only smaller objects will be collected, and they will be collected very frequently.  If the nursery is too large, the collector will only clean up large objects (and take its time doing so).

Feel free to comment if you have any additional suggestions!