Jelastic Cluster Orchestrator Architecture
Cluster Orchestrator (Infrastructure Node) is a set of internal components for managing resources, processing requests, analyzing user behaviour and supporting Jelastic system maintenance. We usually recommend running the Cluster Orchestrator on a dedicated server, as this provides better stability and performance.
Also, for high availability, the Jelastic Platform can run on multiple infrastructure nodes (two or more), each with redundant, replicated infrastructure elements on it. In this way, even if some infrastructure element fails, the end-user containers it comprises remain accessible.
In Jelastic, all infrastructure modules are packed as Docker images and provisioned inside Virtuozzo containers. Designed as microservices, they provide each component as a separate, minimal and highly specialized element. This approach isolates infrastructure modules (i.e. makes them independent), so that failures or reconfigurations of one module do not affect other services, which makes the system more robust and resilient. A partial list of key actions that infrastructure node components are responsible for can be found below:
- provisioning (jelcore, jpool)
- templates configuration and cluster binding (jelcore)
- environment lifecycle management (jelcore)
- applications deployment (jelcore, jem)
- scalability management (jelcore)
- handling user requests via Shared Resolver (resolver/slb)
- billing (jbilling)
- business analytics tools (pentaho)
- monitoring and health checking (zabbix)
- statistics gathering (jstatistics)
- remote access (ssh gate, webssh gate, RDP gate)
- a collection of service applications and service engine (hcore)
- stack automation (jem)
Let’s overview each infrastructure module separately:
- DB (Master)
- DB Backup (Slave)
- SSH Gate
- Uploader (XSSU)
- Docker Engine
- Shared Load Balancer
Apache ZooKeeper is a key-value storage used to coordinate infrastructure modules during startup. Component load order synchronization is performed with the help of a special discovery client (/opt/discovery.jar), pre-deployed to each infra container. It establishes a connection with ZooKeeper and initiates data exchange based on the appropriate settings in the /etc/jelastic/settings.conf file:
- provide - data from the current element to be exported to ZooKeeper (e.g. IPs, passwords, ports, etc.)
- require - information this module requires from other components
- states - a set of special statuses that indicate the element’s state; the most common ones are listed below
- INIT - module was started
- PROVIDED - all data from the provide section was sent to ZooKeeper
- STARTED - information from the require section was received from ZooKeeper
- READY - the comprised services have started and the current infrastructure module is operational
- dependencies - an array of system components the current component depends on
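To make these sections more tangible, below is an illustrative sketch of what such a discovery configuration might contain. The key names come from the list above, but the concrete syntax, field values and module names are assumptions for illustration only, not the actual settings.conf format:

```
{
  "provide":      ["ip", "admin_password", "service_port"],
  "require":      ["zookeeper.ip", "db.master.ip"],
  "states":       ["INIT", "PROVIDED", "STARTED", "READY"],
  "dependencies": ["db", "jrouter"]
}
```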
Thanks to this implementation, the whole infrastructure can be loaded in the proper order. For example, all modules require information from the Jelastic DB, which in turn needs to be updated with the latest data from other components. To ensure this, the Jelastic DB waits for the PROVIDED state from all other nodes, then receives the required data from ZooKeeper and continues starting up until its state changes to READY. With this done, the rest of the infrastructure modules can continue loading, being sure that the required data from the DB is available.
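The load-order guarantee described above boils down to starting each module only after all of its dependencies have published their data. A minimal in-process sketch of that logic is below; the module names and dependency graph are hypothetical, and the real discovery client exchanges this information through ZooKeeper rather than in a single process:

```python
# Minimal sketch of dependency-ordered startup, assuming a hypothetical
# dependency graph. The real platform coordinates this via ZooKeeper states
# (INIT -> PROVIDED -> STARTED -> READY); here "done" stands in for PROVIDED.

dependencies = {
    "db":      ["jrouter", "hcore"],  # DB waits for data from other modules
    "jrouter": [],
    "hcore":   [],
    "jelcore": ["db"],                # jelcore needs DB connection details
}

def boot_order(dependencies):
    """Return a start order in which every module boots after its deps."""
    order, done = [], set()
    pending = set(dependencies)
    while pending:
        # A module may start once every dependency has provided its data.
        ready_now = {m for m in pending if set(dependencies[m]) <= done}
        if not ready_now:
            raise RuntimeError("dependency cycle detected")
        for m in sorted(ready_now):
            order.append(m)
            done.add(m)
        pending -= ready_now
    return order

print(boot_order(dependencies))  # hcore and jrouter first, then db, then jelcore
```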
JRouter is an NGINX-based HTTP proxy component that distributes requests within the infrastructure node. Additionally, it is responsible for static resources assignment.
The Jelastic Platform Database Server is based on the MariaDB 10 software stack.
It stores data such as user accounts, environments data, resource consumption history, billing information, etc. This guarantees data integrity and provides easy and faultless synchronization with the Shared Load Balancers (where slave MariaDB instances are located).
DB Backup (Slave)
The DB Backup module is a slave of the main Jelastic Platform Database with a set of scripts for periodically dumping the database. DB Backup is used to:
- create database dumps to avoid locking of the main database
- serve analytic requests to unload the main database
- restore the main database in case of failure
Hivext Core (HCore) is responsible for the functioning of internal subsystems. It implements account management, access rights management, application hosting and script execution. This module can be scaled horizontally to multiple instances to handle high load.
Jelastic Core (JelCore) is a provisioning subsystem that coordinates the environment lifecycle from creation to deletion. It is responsible for performing such actions as:
- create environment
- start/stop environment
- edit environment topology
- enable high availability
- bind domains
- assign an external IP
- distribute containers of one environment equally among different hardware nodes
- migrate environment
These actions are initiated directly from the dashboard. This module is scaled to two instances to handle high load.
Statistics Service (JStat) is responsible for collecting statistics on the resources consumed by all existing environments. Such data is required by the Jelastic billing service to implement fair pay-as-you-go pricing.
Billing Service (JBilling) is responsible for processing information about hardware resource consumption, calculating the amount to charge and providing billing information.
Jelastic doesn’t provide its own payment gateway. Instead, JBilling is integrated with various external billing systems to implement payments.
Follow the link to get more information on the billing system in Jelastic.
Pool Manager (JPool) is an internal module that manages software stack templates and provides IP pools for the Jelastic Platform. It is responsible for setting up and adding containers on hardware nodes whenever a new stack is requested for a user environment.
JPool initiates the creation of a new container from its original source (a template from the Jelastic repository) and, subsequently, if the same type is requested again, simply makes a copy of the already existing container on the hardware node.
When a new node is ready (either created or copied), the Pool Manager processes it by applying the required configurations and making it available to the appropriate user environment.
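The create-or-copy behaviour above can be sketched as a simple per-node cache: the first request for a stack type pulls the original template, and later requests clone the container already present locally. Class and string names here are illustrative, not JPool internals:

```python
# Minimal sketch of JPool-style container provisioning: build from the
# repository template once, then copy locally for repeated requests.

class PoolManager:
    def __init__(self):
        self._local = {}  # stack type -> container already on this node

    def get_container(self, stack):
        if stack in self._local:
            # Fast path: clone the container that already exists locally.
            return f"clone of {self._local[stack]}"
        # Slow path: first request pulls the original template.
        container = f"{stack}-from-template"
        self._local[stack] = container
        return container

pool = PoolManager()
print(pool.get_container("tomcat"))  # first request: built from template
print(pool.get_container("tomcat"))  # repeat request: local copy
```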
Awakener is responsible for “waking up” containers in user environments that have previously been suspended due to application inactivity (i.e. the absence of incoming requests).
Containers are put into hibernation mode when the deployed application has not been requested for an extended period of time. As soon as any request arrives for the application, Awakener wakes up the appropriate user environment.
This decreases resource consumption without affecting the application’s performance.
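The suspend/wake cycle described above can be modelled with two operations: hibernate after a period of inactivity, and resume on the first incoming request. The threshold value and container model below are assumptions for illustration, not Jelastic internals:

```python
# Minimal sketch of the Awakener suspend/wake behaviour.

import time

SUSPEND_AFTER = 3600  # hypothetical inactivity threshold, in seconds

class Container:
    def __init__(self):
        self.running = True
        self.last_request = time.time()

    def maybe_suspend(self, now):
        # Hibernate the container once it has been idle long enough.
        if self.running and now - self.last_request > SUSPEND_AFTER:
            self.running = False

    def handle_request(self, now):
        # The Awakener role: resume a suspended container on its first request.
        if not self.running:
            self.running = True
        self.last_request = now
        return "served"

c = Container()
c.maybe_suspend(c.last_request + 2 * SUSPEND_AFTER)  # long idle -> suspended
assert not c.running
c.handle_request(time.time())                        # incoming request wakes it
assert c.running
```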
SSH Gate provides external SSH access to Jelastic Platform containers. It accepts user connection requests from the Internet and relays them to the target container via the internal platform network. Authentication at the Jelastic SSH Gate is performed in two independent steps:
- external - connection from end-user machine to the Gate
- internal - connection from Gate to the requested user container
Both parts of this authentication procedure are performed over the standard SSH protocol, based on a previously applied public/private SSH key pair.
The Uploader (XSSU) application is responsible for uploading user application archives and placing them within the Uploader Storage. We recommend keeping the Uploader Storage as a separate component (which can be mounted via NFS or iSCSI) to avoid any interference between uploaded files and packages.
Puppet is an infrastructure component for deploying and updating Hardware Nodes on the Jelastic Platform. It gathers the required data from Git, SVN and Nexus repositories and the RPM Storage, and processes it in order to install/update all the components of the Jelastic Infrastructure Node.
Guacamole is an infrastructure module that provides clientless external remote access to containers via a web browser. The module supports standard protocols such as RDP, to provide remote desktop access for .NET/Windows hosting, and SSH to access containers.
A dedicated Docker Engine is required for downloading and storing images for all Docker containers installed by users. Such caching speeds up the creation of new containers and, at the same time, optimizes space utilization by avoiding duplication in the Docker images pool, since each image is stored on a single hardware node only.
Shared Load Balancer
The Jelastic Shared Load Balancer (SLB) is a proxy server that consists of internal and external PowerDNS server instances, an NGINX balancer, a MariaDB 10 slave database and a health checker.
It is responsible for connecting the client side (a browser, for example) with user applications deployed to Jelastic. Follow the link to get more information about the Shared Load Balancers.
To increase the high availability of the system, Jelastic uses several Shared Load Balancers that receive requests simultaneously, two or more per region, depending on the load. As a result, there can be several entry points for user environments at the same time, which allows the load to be distributed effectively. Follow the link to get more information about SLB High Availability.
Zabbix Monitoring Solution is used to monitor the main parameters of the Jelastic infrastructure and cluster components such as Hardware and Infrastructure Nodes.
It is responsible for monitoring numerous network parameters and the servers’ health and integrity. For any event that occurs, Zabbix uses a flexible notification mechanism, which allows users to configure delivery methods such as email, SMS, Jabber or custom scripts. This ensures a fast reaction to server problems.
Follow the link for more information.
Jelastic Cluster Orchestrator is a good example of microservice architecture. Having all components decomposed into separate instances helps to automate and ease deployment, upgrades, the addition of new functionality and the overall maintenance of the cluster. We actively apply the experience gained here when offering professional services to clients who are eager to migrate their legacy applications to containers with a microservice structure. Contact us for more information and assistance.