Auto-Clustering of Stack Templates

Jelastic PaaS provides auto-clustering for the following software templates (with even more coming in the near future):

  • Application Servers - GlassFish, Payara, WildFly
  • SQL Databases - MySQL, MariaDB, PostgreSQL
  • NoSQL Databases - Couchbase, MongoDB
  • Storage Server - Shared Storage Container

To enable the Auto-Clustering feature for other stacks as well, you need to configure an appropriate clusterization package and adjust the target stack templates. After these configurations, the functionality becomes available within the topology wizard of the developers' dashboard.

Clusterization Package

You should create a special clusterization package (in JSON format) that describes the desired behavior in the wizard (UI) and points to the JPS package with all the actions required for the automatic configuration of a cluster. Below, we provide all of the parameters with their possible values:

Parameter | Example Values | Description
convertible | true / false | Defines if clusterization can (true) or cannot (false) be enabled for the created stack layer.
jps | https://raw.githubuser… | Provides a link to the manifest with the clusterization steps.
defaultState | true / false | Defines the default state of the Auto-Clustering switcher in the wizard (true/false for the enabled/disabled state, respectively).
required | true / false | Makes clusterization either obligatory (true) or optional (false) for the template.
nodeGroupData.scalingMode | stateless / stateful | Selects the preferred scaling mode for the node group (either stateless or stateful).
nodeGroupData.skipNodeEmails | true / false | Skips (true) sending emails about the addition of new nodes or notifies as usual (false).
compatibleAddons | ["mysql-auto-cluster"] | Skips clusterization if an add-on from the list (i.e. an array of IDs) is installed on the layer. [1]
settings.data | {"scheme": "master"} | Provides the default configurations for the cluster installation.
settings.fields | {"type": "list", "caption": "Scheme", …} | Lists additional fields and settings for the cluster.
validation | {"minCount": 2, …} | Configures the validation settings. [2]
description | "<p>Ready-to-work scalable MySQL…</p>" | Displays a pop-up hint for the Auto-Clustering option in the dashboard topology wizard. [3]
skipOnEnvInstall | true / "http://…" / ["http://…", "http://…"] | Forbids the installation of the package specified via the ON_ENV_INSTALL variable. [4]
targetRegions | {"type": "vz7"} | Applies region filtering for the auto-cluster.
extraNodes | {"nodeGroup": "proxysql", …} | Allows displaying the additional layers (in regular node format) required for auto-clustering directly in the wizard. [5]
recommended | {"cloudlets": 16} | Lists recommended resources for auto-clustering; currently, only the cloudlets option is supported.
requires | ["proxysql"] | Lists an array of unpublished nodeType templates that can be used by extraNodes.
Tip: As an example, you can check our production-ready config for MySQL auto-cluster.
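
To illustrate how these parameters fit together, below is a minimal sketch of a clusterization package; the jps URL and settings values here are illustrative placeholders, not a real manifest:

{
  "convertible": true,
  "jps": "https://example.com/scripts/cluster.jps",
  "defaultState": false,
  "required": false,
  "nodeGroupData": {
    "scalingMode": "stateless",
    "skipNodeEmails": true
  },
  "settings": {
    "data": {
      "scheme": "master"
    }
  },
  "validation": {
    "minCount": 2
  },
  "description": "<p>Scalable cluster…</p>"
}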

An extended description for some of the above-specified parameters:

1. The compatibleAddons setting detects if clusterization for the layer is already enabled via the listed add-ons. It is required for compatibility reasons to support existing solutions with clusterization implemented through the ON_ENV_INSTALL variable (i.e. to avoid enabling the cluster twice).

2. The validation section allows restricting certain layer parameters (e.g. to forbid scaling of the DAS node in the GlassFish cluster), namely:

  • minCount - the minimum node count
  • maxCount - the maximum node count
  • minCloudlets - the minimum cloudlet count
  • minExtip - the minimum IPv4 count
  • scalingMode - the scaling mode to be used (cannot be changed)
  • extraNodesCount - the number of “hidden” nodes that will be added in extra layers upon enabling the auto-cluster
  • tag - a Docker image tag that will be set for nodes in the extra layers
  • rules - allows redefining validation properties based on the specified settings

For example:

"validation": {
  "minCount": 2,
  "maxCount": 3,
  "minCloudlets": 2,
  "minExtip": 1,
  "scalingMode": "stateless",
  "extraNodesCount": 2,
  "tag": "...",
  "rules": {
    "scheme": {
      "mm": {
        "maxCount": 2
      }
    }
  }
}

3. It is possible to apply localization for the hint in the description parameter through the appropriate keys in the localization file:

  • LT_EnvWizard_Tip_Cluster_default - the default description for the auto-clustering feature
  • LT_EnvWizard_Tip_Cluster_%(nodeType) - the template-specific description for the auto-clustering feature
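
For instance, assuming a simple key-value localization file, the entries might look as follows (the exact file format depends on your platform localization setup; the values are illustrative):

LT_EnvWizard_Tip_Cluster_default=Ready-to-work cluster with automatic configuration
LT_EnvWizard_Tip_Cluster_mysql=Scalable MySQL cluster with master-slave replication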

4. The following values are allowed to be used with the skipOnEnvInstall parameter:

  • true - prevents the execution of any package
  • {url} - blocks execution of the specific package (the one available via the link)
  • ["{url}", "{url}", …] - stops installation of any package from the array
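
As a sketch, a configuration that blocks two specific ON_ENV_INSTALL packages while allowing any others could look like this (the URLs are placeholders):

"skipOnEnvInstall": [
  "https://example.com/jps/legacy-mysql-cluster.jps",
  "https://example.com/jps/legacy-galera-cluster.jps"
]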

5. The extraNodes parameter configures and manages additional layers that will be added to the environment upon enabling auto-clustering. The configuration is the same as for the regular nodes, for example:

"extraNodes":[
  {
      "nodeGroup":"proxysql",
      "nodeType":"proxysql",
      "count":"1",
      "extip":0,
      "validation":{
        "scalingMode":"STATELESS",
        "minCount":2,
        "maxCount":3
      }
  }
]

If needed, it is possible to specify placeholders instead of the values (e.g. "count": "${globals.proxyCount:0}") to dynamically adjust extra nodes based on the data provided via the wizard. This works as follows: whenever the cluster parameters are changed, the validation.rules section is executed. Here, you can set the required values for the globals.* parameters via setGlobals.

Tip: For example, using the code below, you can enable/disable the extra layer based on the Add ProxySQL ("is_proxysql") switcher in the wizard.

{ 
   "validation":{ 
      "rules":{ 
         "is_proxysql":{ 
            "true":{ 
               "setGlobals":{ 
                  "proxyCount":2
               }
            }
         }
      }
   },
   "extraNodes":[ 
      { 
         "nodeGroup":"proxysql",
         "nodeType":"proxysql",
         "count":"${globals.proxyCount:0}"
      }
   ]
}

add extra layer via UI switcher

Also, consider the following specifics:

  • disabling Auto-Clustering also disables the corresponding extra nodes
  • the ${clusterNode.*} placeholder can be used to get various values (nodeGroup, nodeType, fixedCloudlets, flexibleCloudlets, tag, displayName, extip, extipv6, count, restartDelay, scalingMode) of the node, where the clustering is enabled
  • the stack template added to the extra nodes cannot be changed; however, template parameters and version (tag) can be adjusted via UI (unless the values are restricted via the validation section)
  • when trying to disable the extra node manually or if the template is not available on the account, the appropriate error will be shown in the dashboard

6. When configuring the extraNodes and recommended fields, it is possible to utilize the settings placeholder to pass the required settings from the UI form (e.g. ${settings.count}).
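
As a sketch, a spinner field defined in settings.fields could drive the number of extra nodes via such a placeholder (the field name, captions, and layer names below are hypothetical):

"settings": {
  "fields": [
    {
      "type": "spinner",
      "name": "count",
      "caption": "Proxy Nodes",
      "min": 1,
      "max": 3
    }
  ]
},
"extraNodes": [
  {
    "nodeGroup": "proxysql",
    "nodeType": "proxysql",
    "count": "${settings.count}"
  }
]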

After your clusterization package is prepared, you can add it to the appropriate stack template.

Adjust Stack Template

In order to enable the clusterization option for a particular software stack, the appropriate Dockerfile should contain the cluster label, which points to the clusterization package using one of the following values:

  • "false" - disables clusterization for the stack (the default)
  • "true" - locates the script by appending the jelastic/cluster.json suffix to the sourceUrl label
    Tip: For example, with the https://raw.githubusercontent.com/jelastic/ source URL, the clusterization script should be located at https://raw.githubusercontent.com/jelastic/jelastic/cluster.json.
  • {relativeUrl} - appends the provided custom value to the sourceUrl label
  • {absoluteUrl} - uses the specified direct link to the clusterization package
Tip: If needed, you can add the cluster label just to the specific template tags so that it will be available for the certain software versions only.
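
For instance, assuming the stack image already defines a sourceUrl label, the cluster label with a relative value could be added to the Dockerfile like this (the URL and path are illustrative):

LABEL sourceUrl="https://raw.githubusercontent.com/jelastic/"
LABEL cluster="configs/cluster.json"

With these labels, the platform would look for the clusterization package at https://raw.githubusercontent.com/jelastic/configs/cluster.json.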

The changes are applied during template auto-update or after a manual reimport.

User Experience

While working with software stacks that are provided with out-of-the-box clusterization support, you can see the appropriate switch in the central part of the topology wizard.

GlassFish auto-clustering

1. Based on the specific implementation, the Auto-Clustering option can be:

  • mandatorily enabled (e.g. for the Couchbase database)

  • provided with some additional settings (e.g. for the MySQL database)

  • restricted by resources (e.g. the minimum/maximum node count) and options (e.g. scaling mode)

2. For the nodes created before the clusterization support was implemented for the stack, the Auto-Clustering option can be:

  • disabled (hidden) - usually, this means automated reconfiguration is not possible or can affect the running application
  • already enabled - the same clusterization was installed (e.g. through the ON_ENV_INSTALL variable)

3. While configuring a new JPS package, the default clusterization settings for the stack can be redefined (if allowed) via the appropriate cluster property.

For example:

{
 "nodeGroup": "mysql",
 "cluster": {
   "scheme": "master"
 }
}

4. Some of the peculiarities of the auto-cluster configurations and usage:

  • If the defaultState or required option is set to true in the cluster config, auto-clustering will be enabled by default for the appropriate template (new layers only), even if the cluster property is not defined explicitly in the JPS package.
  • The ON_ENV_INSTALL variable is ignored if the provided package matches the one from the clusterization configuration (both links are truncated by the “?” symbol and compared).
  • After creating a new layer with auto-clustering, the cluster and clusterTemplate fields are added to the appropriate nodeGroup settings. Be aware that manual adjustments of these settings may affect the cluster/layer operability.
{
  "cluster": {
    "settings": {
      "scheme": "ms"
    },
    "jps": "c11e802f-a35f-4fa6-b666-24859df9f985",
    "enabled": true
  },
  "clusterTemplate": {
    "settings": {
      "fields": [
        {
          "default": "mm",
          "values": {
            "mm": "Master - Master",
            "ms": "Master - Slave"
          },
          "name": "scheme",
          "caption": "Scheme",
          "type": "list"
        }
      ]
    },
    "compatibleAddons": [
      "mysql-auto-cluster"
    ],
    "description": "<p>Ready-to-work scalable MySQL Cluster with master-slave asynchronous replication and ProxySQL load balancer in front of it. Is supplied with embedded Orchestrator GUI for convenient cluster management and provides even load distribution, slaves healthcheck and autodiscovery of newly added DB nodes</p>",
    "nodeGroupData": {
      "scalingMode": "NEW"
    }
  }
}
  • If a layer is initially created without auto-clustering, the corresponding "cluster": {"enabled": false} record is added to the nodeGroup settings, which automatically hides the option in the UI during subsequent adjustments.
  • Auto-clustering cannot be disabled or reconfigured when editing a topology of the existing environment.
  • If you need to override default auto-clustering implementation or implement your custom one, a complete clusterization package should be provided via the cluster field of the JPS.
  • Enabling/disabling auto-clustering for the layer activates/deactivates the default behavior for the templates with the “clusterEnabled=1” label:
    • only the master node has a DNS host of the {subdomain}-{your_env_name}.{hoster_domain} format
    • SSH access to nodes via password is not restricted
    • applications are deployed to the master node only
    • bindSSL, restartService, and resetServicePassword are performed for the master node only
    • a new node is created during scaling
    • the buildCluster action is called after the layer creation

Now you know the specifics of the Auto-Clustering feature implementation and can configure clustering solutions on your own.

What’s next?