September 21, 2021
This is part 1 of our three-part series providing step-by-step instructions to install and deploy IBM Maximo Manage for MAS. To make the process more approachable and ensure each step is demystified, we have broken the installation down into three sections.
For our examples, we are using DigitalOcean (https://digitalocean.com) as our cloud provider. They are a cost-effective alternative to AWS, Azure, or GCP and have been very supportive of our efforts deploying Maximo in their environment.
We are also using OKD (https://www.okd.io), which is the community edition of OpenShift. While OKD provides a fully functioning OpenShift instance, it does not come with the same support levels as the commercial OpenShift product from IBM and may not be appropriate for production deployments.
We have provided a number of supporting scripts for the installation on our GitHub repo https://github.com/sharptree/do-okd-deploy-mas8. These automate the process of creating the DigitalOcean droplets, networking components, and images necessary for installing OKD. The scripts work specifically with DigitalOcean; if you are using AWS, Azure, or GCP you can use the scripts for reference, but you will need to perform these steps manually. In later parts, in addition to the manual configuration instructions, we provide scripts to automate these steps and make the process easier.
If you have questions or would like support deploying your own instance of MAS 8 Manage, please contact us at [email protected]
Updated May 9th, 2022: Reverted to OKD 4.6 and MAS 8.6 due to defects in the MAS 8.7 dependencies.
OpenShift Community Edition (OKD) is an open source community edition of the OpenShift platform and can be installed on premises without Red Hat licensing. In this guide we will provide the steps to install Maximo Application Suite, deployed on OKD using DigitalOcean as the hosting provider.
Before getting started you will need to register a domain that can be used to deploy your OKD cluster and delegate DNS resolution to DigitalOcean. For this tutorial we will use sharptree.app as our domain.
We use Google as our registrar as they provide a simple and reliable registrar service. You can register your new domain at https://domains.google/. You will then need to delegate the nameservers to DigitalOcean by changing the NS records to the following:

ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com
The following screenshot provides an example taken from our DNS entries. Notice that you need to click the Manage name servers link to access the name server entries. Different registrars have different procedures; consult your registrar's documentation.
Then you can use the following guide to add the domain to DigitalOcean's DNS services, https://docs.digitalocean.com/products/networking/dns/how-to/add-domains.
During the installation process, make sure you wait for each step to complete before moving on to the next. In some cases, such as applying an upgrade, it may take an hour or more for the process to complete; attempting to apply additional configuration changes while other changes are pending can lead to unstable states and make troubleshooting very difficult.
The following steps assume the installation is performed from a Red Hat family Linux distribution in a droplet hosted by DigitalOcean; in this case we are using Fedora. It is possible to perform the installation from a desktop remote to DigitalOcean, but doing so adds complexity that we do not cover here.
Create a new droplet. Minimal resources are required, so a 1 CPU, 1GB memory, 25GB disk image is sufficient. The image must be Fedora, as CentOS and other Red Hat derivatives do not have the required libraries (Red Hat Enterprise Linux may, but DigitalOcean does not offer RHEL images).
SSH to the new instance and update to the latest patches.
sudo yum -y update && sudo yum -y upgrade
Install the required libraries and utilities.
sudo yum -y install git libvirt wget nano unzip fcct jq certbot java openssl s3cmd python3
Install yq for YAML parsing later.
pip install yq
GitHub user dustymabe has created an excellent set of base scripts for installing OKD on DigitalOcean. We have used these as the basis for our scripts; you may want to check out the original scripts at https://github.com/dustymabe/digitalocean-okd-install
For the Sharptree scripts, clone the project to the working droplet with the command below.
git clone https://github.com/sharptree/do-okd-deploy-mas8.git
To perform the installation, we will need the oc, kubectl, doctl, and aws utilities. Download each using the following commands. Note that the DigitalOcean and AWS clients are current as of the time of writing but are updated frequently; please check to ensure you are using the latest release. It is critical that you use the 4.6 release of the OKD installer and client; do not use a newer version.
wget https://github.com/openshift/okd/releases/download/4.6.0-0.okd-2021-02-14-205305/openshift-client-linux-4.6.0-0.okd-2021-02-14-205305.tar.gz
wget https://github.com/openshift/okd/releases/download/4.6.0-0.okd-2021-02-14-205305/openshift-install-linux-4.6.0-0.okd-2021-02-14-205305.tar.gz
wget https://github.com/digitalocean/doctl/releases/download/v1.73.0/doctl-1.73.0-linux-amd64.tar.gz
wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
Unarchive the downloaded files and copy them onto the user path; again, names may be slightly different based on the versions downloaded. After unarchiving and moving the binaries to the path, clean up the downloaded files.
tar xzvf openshift-client-linux-4.6.0-0.okd-2021-02-14-205305.tar.gz
tar xzvf openshift-install-linux-4.6.0-0.okd-2021-02-14-205305.tar.gz
tar xzfv doctl-1.73.0-linux-amd64.tar.gz
sudo mv kubectl oc openshift-install doctl /usr/local/bin/
rm -f openshift-*
rm -f doctl-*
Install the AWS CLI.
unzip awscli-exe-linux-x86_64.zip
sudo ./aws/install
rm -R -f aws
rm -f awscli-exe-linux-x86_64.zip
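Before continuing, it can help to confirm that everything landed on the PATH. The snippet below is a minimal sketch; the check_cmds function is a hypothetical helper of ours, not part of the Sharptree scripts.

```shell
# Sketch: report any required CLI tool that is not on the PATH.
# check_cmds is a hypothetical helper, not part of the Sharptree scripts.
check_cmds() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
  done
}

check_cmds oc kubectl openshift-install doctl aws jq certbot
```

Any name printed as missing needs to be reinstalled before proceeding.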
Log into the DigitalOcean console and select the API menu item in the left navigation column (it is near the bottom). Under the Tokens/Keys section, Personal access tokens header, click the Generate New Token button.
Enter a token name, such as "okd", with Read and Write access granted, then click the Generate Token button.
After the token is created it will be displayed under the new token name. Make sure you copy this and keep it in a safe location. The value is only visible at the time of creation; if you lose it you will need to generate a new token, and the old token will no longer be valid.
Run the following command to initialize the Digital Ocean command line utility.
doctl auth init
When prompted provide your Digital Ocean personal access token that was obtained in the previous step.
Next, while still on the Digital Ocean API menu, Tokens/Keys section, under the Spaces access keys header, click the Generate New Key button.
Enter a name, such as "okd", and click the checkbox next to the new name.
The key value will be displayed along with a secret. Like the personal access token, copy these to a safe location as the secret is only displayed at the time of creation and cannot be recovered. If you lose it, you will need to regenerate the key and update your scripts.
Open the config file with a text editor and set the variables. The following are example values used for creating the cluster.
For the DROPLET_KEYPAIR value you can use doctl compute ssh-key list to find the IDs for the SSH keys available in your DigitalOcean environment.
The OKD install requires Fedora CoreOS, which is not a default image for DigitalOcean. For the FCOS_IMAGE_URL there may be newer versions available at https://getfedora.org/coreos/download?tab=cloud_operators&stream=stable, but the version listed here has been tested to work with the OKD 4.6 installer, while versions newer than Fedora CoreOS 33 do not work. The script will handle creating a new image for you, but if you want to do this manually or are just curious about the process, you can refer to https://docs.fedoraproject.org/en-US/fedora-coreos/provisioning-digitalocean/#_creating_a_digitalocean_custom_image
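For reference, a filled-in config might look like the sketch below. Only DROPLET_KEYPAIR and FCOS_IMAGE_URL are named in this guide; the other variable names and all of the values here are placeholders, so check the config file in the repo for the authoritative list.

```shell
# Hypothetical example values -- replace with your own. Variable names
# other than DROPLET_KEYPAIR and FCOS_IMAGE_URL are assumptions.
DOMAIN="maximo.sharptree.app"   # cluster name plus your registered domain
DROPLET_KEYPAIR="12345678"      # an ID from: doctl compute ssh-key list
FCOS_IMAGE_URL="https://.../fedora-coreos-33.20210104.3.0-digitalocean.x86_64.qcow2.gz"
```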
From the do-okd-deploy-mas8 directory run the command ./digitalocean-okd-install.sh --install to execute the script.
You will be prompted to provide the DigitalOcean Spaces key, secret, and personal access token. Alternatively, you can provide these as parameters using the --spaces-key, --spaces-secret, and --token flags respectively.
The process will take up to half an hour to complete. If during the install the CSR approval process times out, edit the digitalocean-okd-install script, comment out the steps prior to the CSR approval, and rerun the install. It will resume and complete the installation.
./digitalocean-okd-install
Copy the generated kubeconfig file to the .kube directory in your user's home directory.
mkdir -p ~/.kube && cp generated-files/auth/kubeconfig ~/.kube/config
Note the kubeadmin password value that is displayed at the end of the installation process. You will need this to access the web console later. Should you miss it, you can find it later in the ./generated-files/auth/kubeadmin-password file.
To access the OKD console you will need a SSL certificate that is valid for both the application subdomain and the api URL. We will use Let's Encrypt for our certificates.
This step is only necessary for the first time you install a cluster / domain combination. If you have previously obtained certificates you can reuse the previously issued certificates.
By default the Sharptree OKD install scripts will have created a CAA record for Let's Encrypt. However, you should verify that the CAA record was created, and if it was not, follow these steps to add it. In the DigitalOcean console, select the Networking menu item, select the Domains tab, and then select the domain for your OKD cluster. Navigate to the CAA tab, enter @ for the HOSTNAME, letsencrypt.org for the AUTHORITY GRANTED FOR, and issuewild for the TAG, accept the defaults for the rest, then click the Create Record button.
15.1. Request the certificate using DNS challenges, because we do not have a web server available for the domain. Replace [maximo.sharptree.app] with your cluster name and domain, and replace [email protected] with your admin email. We will continue to use maximo.sharptree.app in the remaining examples; be sure to replace this value with your domain when following the examples.
certbot certonly --manual --preferred-challenges=dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos \
  -d "*.apps.[maximo.sharptree.app]" -d "api.[maximo.sharptree.app]"
15.2. Select the TXT tab, enter the value provided by the CertBot challenge in the VALUE field and the hostname minus the domain in the HOSTNAME field, for example _acme-challenge.api.
Wait a few minutes for the DNS change to propagate; this typically takes less than five minutes, but may take as much as an hour. You can verify that the change has been made with Google's Admin Toolbox dig utility: open the following URL and enter your full ACME TXT record, for example _acme-challenge.apps.maximo.sharptree.app, to verify the entry.
https://toolbox.googleapps.com/apps/dig/#TXT/
The certbot output will provide the complete URL to verify the TXT record with the Google Admin Toolbox utility
15.3. Press enter/return in the terminal window and a second challenge should be displayed, for example _acme-challenge.apps.maximo.sharptree.app. Select the TXT tab, enter the value provided by the CertBot challenge in the VALUE field and the hostname minus the domain in the HOSTNAME field, for example _acme-challenge.apps.
15.4. You may need to wait several minutes for the DNS changes to propagate. You can check the current status from your desktop using the dig command, such as dig _acme-challenge.api.maximo.sharptree.app TXT. Once both TXT records are available, complete the challenge, at which point you will have a valid certificate.
Note that two challenges will be presented, one for *.apps and another for api. Be careful to enter and verify each one before moving on to the next, otherwise certbot will fail.
Now we need to import the certificates into OKD using the oc utility. In this case our certificates were issued for maximo.sharptree.app; edit the following commands with the path to your certificates. The first command imports the certificate for the apps.maximo.sharptree.app wildcard subdomain and the second is for the api configuration domain, api.maximo.sharptree.app.
oc create secret tls letsencrypt-cert -n openshift-ingress \
  --cert=/etc/letsencrypt/live/apps.maximo.sharptree.app/fullchain.pem \
  --key=/etc/letsencrypt/live/apps.maximo.sharptree.app/privkey.pem \
  --dry-run=client -o yaml | oc apply -f -
oc create secret tls letsencrypt-cert -n openshift-config \
  --cert=/etc/letsencrypt/live/apps.maximo.sharptree.app/fullchain.pem \
  --key=/etc/letsencrypt/live/apps.maximo.sharptree.app/privkey.pem \
  --dry-run=client -o yaml | oc apply -f -
To apply the certificates, run the following commands.
oc patch ingresscontroller.operator default \
  --type=merge -p \
  '{"spec":{"defaultCertificate": {"name": "letsencrypt-cert"}}}' \
  -n openshift-ingress-operator
oc patch apiserver cluster \
  --type=merge -p \
  '{"spec":{"servingCerts": {"namedCertificates":[{"names": ["api.maximo.sharptree.app"],"servingCertificate": {"name": "letsencrypt-cert"}}]}}}'
It may take several minutes for the change to take effect as the cluster ingress and api services are restarted after the change. Be patient as it can take over 10 minutes for everything to restart.
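Once the ingress and api services have restarted, you can confirm the new certificate is actually being served by inspecting it with openssl. The show_cert function below is a hypothetical helper of ours, not part of the install scripts; replace the example hosts with your own domain.

```shell
# Sketch: print the issuer and expiry date of the certificate served on
# a given host and port. show_cert is a hypothetical helper, not part of
# the install scripts.
show_cert() {
  echo | openssl s_client -connect "$1:$2" -servername "$1" 2>/dev/null \
    | openssl x509 -noout -issuer -enddate
}

# Examples (replace with your domain):
#   show_cert api.maximo.sharptree.app 6443
#   show_cert console-openshift-console.apps.maximo.sharptree.app 443
```

If the patch has taken effect, the issuer line should reference Let's Encrypt rather than the self-signed cluster certificate.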
At the time of writing, OKD 4.9 is available; however, you should not upgrade, as that version is not supported by Maximo. You can apply any 4.6 patches that are available, but do not upgrade to the next minor version, as this will cause the Maximo installation to fail.
From the left side navigation menu, select Administration and then select Cluster Settings. From the Channel menu, ensure that stable-4.6 is selected as shown below.
After everything comes back up, log in and make sure that all pods have started correctly. In testing there have been times when the CSI pod has not started correctly and been placed in a CrashLoopBackOff state. While this may eventually resolve itself, you can delete the pod and it will automatically be recreated; watch to make sure it successfully restarts. You can review an overview of the pod deployment state in the Cluster Inventory panel on the console Overview page. Below is an example of a cluster with one failing pod and three pending.
Verify that all pods are running and there are no remaining errors.
Once everything is running and successfully applied, log in to the OKD web console at https://console-openshift-console.apps.[cluster].[domain]/ replacing the cluster and domain values with yours.
Log in using the kubeadmin credentials that were displayed at the end of the installation or stored in the ./generated-files/auth/kubeadmin-password file. Then, from the kube:admin dropdown menu, select Copy login command.
Click the Show Token link, then copy the command in the Log in with this token section and execute it from your installation terminal. This will create a valid OpenShift client (oc) session.
You may need to add additional worker nodes to the cluster after the initial installation to add capacity or change configurations. To add a new worker droplet to the cluster you will need the ignition files that were generated as part of the initial install, found in the generated-files directory under the digitalocean-okd-install project directory. Make sure you retain these files, as you will need them to add nodes in the future.
You will need many of the values from the initial installation, including the region, SSH key, size, image ID, and VPC UUID.
The new node should be placed in the same region as the rest of the cluster. You can list the available regions with the following command.
doctl compute region list
The SSH key identifier is needed again and should be the same as the one from the initial install. To get the available SSH key identifiers run the following command.
doctl compute ssh-key list
The machine size defines the resources to be added. You can list the available machine sizes with the following command.
doctl compute size list
The image is the CoreOS droplet image that was created as part of the installation procedure. The name is the name of the file that was found as part of step 15 of the installation. The latest versions can be found here: https://getfedora.org/coreos/download?tab=cloud_operators&stream=stable. At the time of writing the latest file name was fedora-coreos-33.20210104.3.0-digitalocean.x86_64.qcow2.gz, and the following command can be used to find the image identifier; replace fedora-coreos-33.20210104.3.0-digitalocean.x86_64.qcow2.gz with the name of the image used in the initial install.
Note that only Fedora CoreOS 33 is compatible with OKD 4.6 as of the time of writing. You are strongly encouraged to use the versions listed here, as newer versions may not work correctly.
doctl compute image list-user -o json | jq -r ".[] | select(.name == \"fedora-coreos-33.20210104.3.0-digitalocean.x86_64.qcow2.gz\").id"
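If you would like to sanity check the jq filter without calling the DigitalOcean API, you can run the same select-by-name pattern against a stand-in payload. The JSON below is fabricated sample data shaped like the doctl output, with made-up ids.

```shell
# Fabricated sample shaped like `doctl compute image list-user -o json`;
# the id values are made up for illustration.
sample='[{"id":111,"name":"fedora-coreos-33.20210104.3.0-digitalocean.x86_64.qcow2.gz"},
         {"id":222,"name":"some-other-image"}]'

# Same select-by-name pattern as the doctl command above.
echo "$sample" | jq -r '.[] | select(.name == "fedora-coreos-33.20210104.3.0-digitalocean.x86_64.qcow2.gz").id'
# → 111
```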
To get the VPC UUID execute the following command, replacing [DOMAIN_NAME] with the top level domain name for your cluster, for example sharptree.app.
doctl vpcs list -o json | jq -r ".[] | select(.name == \"[DOMAIN_NAME]\").id"
To create the new droplet, from the digitalocean-okd-install directory execute the following command with the values in uppercase and the surrounding brackets replaced with the values obtained in the preceding steps.
doctl compute droplet create okd-worker-[NEXT_WORKER_NUMBER] --region [REGION] \
  --ssh-keys [SSH_KEY_ID] --size [SIZE_ID] --image [IMAGE_ID] --vpc-uuid [VPC_UUID] \
  --tag-names "[CLUSTERNAME],[CLUSTERNAME]-worker" \
  --user-data-file ./generated-files/worker-processed.ign
After issuing the command, wait a moment for the node to boot and then run the following command to see the pending CSRs.
oc get csr
This should display a list of CSRs. Look for the CSR with a CONDITION of Pending and a SIGNERNAME of kubernetes.io/kube-apiserver-client-kubelet, then, using the value from the NAME column, execute the following command.
oc adm certificate approve [CSR_NAME]
After approving the first CSR a new CSR will be generated, so run the oc get csr command again and find the CSR with a SIGNERNAME of kubernetes.io/kubelet-serving and a CONDITION of Pending. Again run the following command with the new CSR NAME value.
oc adm certificate approve [CSR_NAME]
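On a busy cluster oc get csr can return many rows. The small filter below is a sketch of ours (pending_csrs is a hypothetical helper, not part of the guide's scripts) that keeps the header plus only Pending rows, assuming CONDITION is the last column as in OKD 4.6.

```shell
# Sketch: keep the header row plus rows whose last column is "Pending".
# pending_csrs is a hypothetical helper; pipe `oc get csr` through it:
#   oc get csr | pending_csrs
pending_csrs() {
  awk 'NR == 1 || $NF == "Pending"'
}
```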
The new node should now be available. You can verify that the node is available with the following command.
oc get nodes
With CoreOS, OKD manages the underlying OS, so you will not need to manually update the host OS. However, an upgrade may become available shortly after installing. Before moving forward, make sure that all components are up to date, as a pending upgrade can cause confusion later when steps do not complete. Also note that the upgrade process can take a very long time, as pods must be drained and transferred, the upgrade performed, and then everything started again.
The next step is optional, but you should consider configuring an external identity provider, as the kubeadmin user is not intended for operational use. Here we are using Microsoft Azure OpenID, but there are many other options and examples.
Documentation regarding configuring identity providers can be found here: https://docs.openshift.com/container-platform/4.7/authentication/identity_providers/configuring-htpasswd-identity-provider.html
Log into the Azure portal and navigate to the App registration menu https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps.
Click the New registration button and enter an application name, such as OKD. The redirect URL is https://oauth-openshift.apps.maximo.sharptree.app/oauth2callback/microsoft, replacing maximo.sharptree.app with your cluster domain. Click the Register button.
Click the newly created application registration and then click New client secret.
Enter a description, such as okd, and select the duration, at least 12 months, then click add.
The client secret is displayed only when the secret is created so be sure to save it in a safe place. This is required when registering with OKD.
Now create an OpenShift secret in the openshift-config namespace.
oc create secret generic openid-client-secret-microsoft \
  --from-literal=clientSecret=YOUR_SECRET_HERE -n openshift-config
Run the following command, updating [CLIENT_ID] with your Microsoft client ID.
oc apply -f - >/dev/null <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - mappingMethod: claim
      name: microsoft
      openID:
        claims:
          email: []
          name:
            - name
          preferredUsername:
            - preferred_username
        clientID: [CLIENT_ID]
        clientSecret:
          name: openid-client-secret-microsoft
        extraScopes: []
        issuer: >-
          https://login.microsoftonline.com/a6a086a0-eb35-4f5e-9cd6-ab18f9a8b239/v2.0
      type: OpenID
EOF
You should now have an operational OpenShift community edition (OKD) cluster available, which is the foundation for deploying MAS 8 and Manage.
The next step is deploying the supporting services for Manage to the OKD cluster. These supporting components include the Behavior Analytics Service, Suite License Service, MongoDB, and the Maximo Application Suite operator for OpenShift.
If you have questions or would like support deploying your own instance of MAS 8 Manage, please contact us at [email protected]
Date | Changes
--- | ---
2022/05/09 | Reverted references and instructions to MAS 8.6 and OKD 4.6 because of defects in 8.7.
2022/04/27 | Updated references and instructions for MAS 8.7 and OKD 4.8.