Managing Hardware Nodes

Click the Hardware nodes option to see the information about your cluster state and manage the hardware nodes:

Cluster State

Once inside, you’ll see the list of regions your Jelastic cluster consists of, with expandable hardware sets and the hardware nodes assigned to them:

The following information about your hardware is available within the columns of this list:
  • Name - hardware node name
  • Load rating - ratio of allocated to free resources
  • Status - the possible values are:
    • ACTIVE - working node to which containers can be added
    • EVACUATING - containers’ evacuation is in progress; such a node can’t be edited or removed
    • BROKEN - errors have occurred in the node’s operation
    • EVACUATION_FAILED - the evacuation process has failed
    • EVACUATED - the evacuation process has been completed successfully
    • MAINTENANCE - the existing containers on the node keep working, but new ones are not being created
    • INFRASTRUCTURE_NODE - infrastructure node that can’t be used for evacuation
  • Virtualization product - defines the container-based virtualization solution used: Parallels Virtuozzo Containers (PVC) or Parallels Cloud Server (PCS)
  • Memory - shows the total amount of physical memory available on the hardware node
  • Memory load - the amount of memory consumed by containers and the hardware node’s internal processes (relative to the overall allocated amount)
  • Memory used by API - percentage of the node’s memory space consumed by the API
  • HDD - displays the overall disk capacity of the hardware node available for container creation, as well as the already allocated disk space
  • Swap - shows the amount of used space in the swap partition
  • Load average 5/10/15 min - three separate values of the average load rating for the periods denoted in the column’s name
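
If you ever need to cross-check these figures directly on a Linux-based hardware node, the standard free, df, and uptime utilities report roughly the same data (a quick manual sanity check outside of the admin panel, assuming shell access to the node): free -m shows memory and swap usage, df -h shows disk capacity and the space already used, and uptime shows the current load averages.

# free -m
# df -h
# uptime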

Tips:

  • if the Admin Panel couldn’t get information about a server, its line will be colored red
  • using the Interval drop-down list on the tool pane, you can select the time frame for which the hardware nodes’ load should be displayed:

    In this way, you can analyze data for the last 5, 15, 30, or 60 minutes.

Clicking a particular node will show detailed information about it within the right-hand panel:

Here, several tabs are available:
  • The State tab shows a set of the node’s parameters (they can differ depending on the node’s OS type)
  • The Load info tab lists all the stacks available at the platform and the number of containers created for each of them. It can be filtered by status using the drop-down list at the top: Total, Launching, Down, Sleep, Running.
  • Open the Containers tab to see information about each existing container (its Type, CID, amount of consumed Memory, HDD usage, and the Environment it belongs to). The icons before the type specify the state of each container (green - running, red - stopped, blue - sleeping, and grey - problematic). Here a few options are available:
    • Clicking on a particular environment name will redirect you to the Cluster > Environments JCA section, where you can view the detailed info on it. This makes it easy to find full information about a user based on the CID of a container in their environment.
    • Also, here you can migrate the necessary container by selecting it and clicking the Migrate button at the top. In the opened dialog box, specify the destination hardware Node using the drop-down list and click Migrate.
  • In the Evacuation state tab, you can see the Name of a node that is being evacuated or has already been evacuated. The current state of evacuation is displayed in the Value column. Using the buttons above, you can Refresh the list to see the latest changes or Stop the evacuation process.
  • The last one is the Cluster info tab. Here you can see the server’s roles: Client, Chunk Server, and MDS. The assigned role is marked with a green tick in the appropriate column.
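
For reference, these roles can usually also be inspected directly on a node with the Virtuozzo Storage command-line tools, assuming they are installed there: a command along the lines of the one below prints a summary of the MDS, chunk server, and client components of the cluster (the cluster name is just a placeholder; older PCS-based installations ship the pstorage utility instead of vstorage).

# vstorage -c your_cluster_name top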

Hardware Node’s Components Check

Components check allows the Jelastic Core to query a particular hardware node and verify whether it is configured properly to be used by the Jelastic Platform.

The main check application runs a list of tests on a hardware node and gathers the information about its state. Each check application represents a separate dynamic module. Herewith, new modules can be easily added to the list of existing ones.

Currently the following modules are used by default:
  • vzsettings checks Virtuozzo/PCS configuration on the hardware node
  • vzshaper checks if the shaper is enabled at the hardware node
  • timezone checks if the hardware node’s timezone is UTC (and corrects this if it isn’t)
  • repositories checks if the hardware node is subscribed to the correct Spacewalk channel corresponding to the Jelastic version. During a platform update, the channel is changed to the correct one
  • routes checks the correctness of the routes
  • masquerading checks network masquerading rules
  • ip_tables checks firewall configuration on the hardware node
  • kernel_modules checks if all the required kernel modules are loaded in the kernel
  • docker (for Linux-based nodes only) checks if the hardware node is configured to use the shared storage for Docker templates and corrects this if it isn’t

The results of these checkups can be seen in the State tab.

If the hardware node is configured properly, all the modules will be marked with the green OK label. However, if any configuration test fails, the corresponding hardware node’s line will be marked in red. You can hover over it to see the message that explains the issue (for example, “module_name is not enabled”). The same error message can also be seen within the State tab as the value for the problematic module.
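
If you want to double-check some of these settings manually on a Linux-based node, a rough equivalent of, for instance, the timezone and kernel_modules checks could look as follows (an illustrative sketch only; the actual check modules are internal to the platform, and the exact set of required kernel modules depends on the Jelastic version). The first command should print UTC, while the second verifies that the netfilter modules used for firewalling and masquerading are loaded.

# date +%Z
# lsmod | grep -E 'ip_tables|nf_nat'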

Add a New Hardware Node

Note: Due to the Parallels and Odin rebranding, the previously used Parallels Cloud Server (PCS) name was changed to Virtuozzo and, correspondingly, PCS Storage to Virtuozzo Storage.

Starting with the Jelastic 4.7 release, the addition of a new Virtuozzo node to the Platform is fully automated. Herewith, the Virtuozzo Storage node installation requires some extra pre-configuration, so if you need any assistance with this, please contact the Jelastic team.

1. Navigate to the Hardware Nodes subsection and click Add at the top toolbar. The Add hardware node dialog box will appear:

add hardware node frame

Fill in the following fields within the form:
  • Display name - name for the hardware node to be displayed in JCA
  • IP address - LAN IP address of the added hardware node, with the SSH port specified after a colon “:” (e.g. 192.168.0.10:22)
  • Credentials - section to specify the administrator account’s Login & Password, required for authentication
  • Hardnode Group - select a set of hardware from the list of available ones
  • Status - status the added hardware node will get after installation (ACTIVE, BROKEN, INFRASTRUCTURE_NODE or MAINTENANCE)
  • Install VZ - checkbox that defines whether the Virtuozzo software should be re-installed on the hardware node

    Note: With this option, you can update hardware that has already been added to the cluster to the latest Virtuozzo version; herewith, be aware that all of the data on your hardware node will be permanently lost in this case.

    add hardware node frame with Install VZ

Click Add to proceed.

2. The node will appear in the list with INSTALLING status. Upon its selection, you can track the process within the corresponding Installation tab to the right.

installing hardware node

Here, you can see the following information:
  • Start time - the action’s initiation time
  • Action - a particular operation, required for hardware node installation
  • Status - the performed action result (OK, WARNING or ERROR)

Note: If an issue occurs during installation, the corresponding step will be marked with either the WARNING or ERROR status within the list. The former corresponds to a non-critical failure, so the installation will proceed further, whilst the latter will interrupt the process.

error while installing hardware node

If installation fails, you’ll get the corresponding notification at the bottom of the installation section and an email with a description of the problem. Also, you can hover over the error string to get additional info.

As soon as you fix the problem, the process can be continued with the Resume installation button at the top pane (not available if Install VZ was selected).
The installation process may take up to an hour, so please be patient. You’ll be informed about its completion via email.

Modify the Existing Hardware Node Settings

1. Select the desired node and click Edit at the toolbar above.

edit hardware node button

2. Make the required changes in the Edit hardware node dialog that opens.

It’s possible to change the main parameters you’ve previously stated in the Add hardware node window: Display name, IP address with SSH port, Hardnode Group and Status.

edit hardware node frame

Besides that, within the Credentials section you can set a new password for the current hardware node (if you’ve previously changed it on the node itself) by simply entering it in the same-named field.

Note that if you’ve changed the credentials on your hardware node, this operation is obligatory for the new password to be applied within the Jelastic system. Otherwise, the platform will not be able to gather, process, and display statistics or any other data for this hardware node via JCA.

If you try to enter a password that hasn’t been previously set on the hardware node, you’ll be shown the appropriate error message, and the changes won’t be saved.
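For instance, if the root password was changed directly on the hardware node with the standard passwd utility, that very same value has to be entered in the Credentials field afterwards:

# passwd root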
3. Once all the required changes are made, click Save at the bottom of the frame and the node settings will be modified.

Migrate/Evacuate Containers

Containers can be transferred to another hardware node if needed. You can move either separate containers (migrate them) or all the containers on a hardware node (evacuate them). Containers’ migration or evacuation is performed in the simple online migration mode.

Migration of Containers

1. Navigate to the Cluster > Hardware Nodes section.

2. Click on a particular node and select the Containers tab in the right-hand panel. You'll see the list of containers that are currently running on this node.

You can select one or several containers (using the Shift and Ctrl keys) and then click Migrate. In the opened dialog box, specify the destination hardware Node using the drop-down list and click Migrate.

Evacuation of Containers

1. Choose a hardware node and click the Evacuate Containers button on the toolbar.

2. In the Evacuate Containers dialog that opens, you'll see the list of active hardware nodes, which can be used as target nodes for evacuation.

Here a target node for evacuation can be chosen in several ways:
  • Do not select any node and just click Evacuate.
    The target node will then be chosen automatically based on the following requirements:
    • the node should have a lower load than others
    • the node should not contain containers of the same type and from the same environment
  • Select several nodes and click Evacuate.
    The target node will be chosen automatically amongst the selected ones according to the requirements stated above.
  • Select the needed node from the list and click Evacuate.

    In this case the evacuation will be performed to the chosen node.

3. Navigate to the Evacuation State tab in the right panel to see the current state of this process. Here you can stop the evacuation by choosing the needed node and clicking the Stop button. After this, the already started container migrations will be completed, but the rest of the containers will not start migrating.

Note: If any container cannot be evacuated (e.g. because it is occupied by some operation), it will be skipped. However, the evacuation process will continue and the rest of the containers will be migrated. Regardless, as a result you will get the EVACUATION_FAILED status (even if just one container was not migrated). Such skipped containers can be migrated manually using the instruction below.

4. After the evacuation is completed, you should check that no containers are left on the node, even if the evacuation result was successful.

For that, execute the vzlist -a command on the node.

Here is an example of the result:

     CTID NPROC STATUS     IP_ADDR         HOSTNAME
    19350     - suspended  192.168.18.95   v00003.mysql5.5.5.21.19350.env-3515166
    19354    35 running    192.168.18.96   v00001.nginx.19354.env-0779149
    19355    34 running    192.168.18.97   v00003.mysql5.5.5.21.19355.env-0779149
    19358     - suspended  192.168.3.201   v00001.nginx.19358.env-2368696
    19359     - suspended  192.168.11.237  v00004.tomcat7.7.0.28.19359.env-1735657
    19363     - stopped    192.168.18.56   v00001.apache2.2.19363.env-9419253
    19364     - stopped    192.168.18.60   v00003.mysql5.5.5.21.19364.env-9419253
    19369     - stopped    192.168.17.157  v00001.apache2.2.19369.env-9419253
    19402     - stopped    192.168.18.133  v00001.apache2.2.19402.iviabd
  5341711     - stopped    -               pool_nginxphp_v00001.nginx_aa0284dc-fe7d-cc43-adbb-be81569a44b6
  7644701     - stopped    -               pool_nginxphp_v00001.nginx_aed6d5b1-19bb-904e-9272-0b58146a614e
  9679236     - stopped    -               pool_nginxphp_v00001.nginx_9c2e705a-4bb6-ce49-a5cd-d6934a4070c2

Note: Containers with pool in Hostname should be ignored.

Also, the template hardware node contains containers with CTIDs 200-216, which hold the stacks’ templates. If such containers remain, they should be manually migrated to the destination hardware node using the vzmigrate command, for example as shown below.
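
For instance, to move one of these template containers, a command like the following could be used (a sketch only; substitute the actual destination node IP address and the real container ID):

# vzmigrate 192.168.0.20 205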

5. The other containers should be checked for the existence of environments associated with them:
  • Copy the corresponding part of the domain name (e.g. env-9419253)
  • Navigate to JCA > Cluster > Environments
  • Search for the environment by pasting this part of the domain name into the Domain or Alias field
  • If such environments have been found, escalate this issue to the Jelastic team

6. After that, destroy all the containers except CTID 1.

For destroying the stopped containers, you can use the following command:
# vzlist -a | grep -v CTID | grep -v running | awk '{print $1}' | while read i ; do vzctl destroy $i; done
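
A variant of this one-liner that explicitly skips CTID 1 and only touches the containers in the stopped state could look like this (an alternative sketch, not part of the official instruction):
# vzlist -a -H -o ctid,status | awk '$1 != 1 && $2 == "stopped" {print $1}' | while read i ; do vzctl destroy $i ; done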

Remove Hardware Node

If you do decide to remove a particular hardware node from your Jelastic cluster, follow the instruction below:

1. Select the hardware node, which you want to remove, from the list.

Note: It is highly recommended to evacuate all the containers from such a hardware node before proceeding further.

2. Then click the Remove button at the tools panel.

3. Confirm your decision by selecting OK in the pop-up window that appears.

force hardware node remove

Tip: You can tick the Force check box during confirmation; in this case the hardware node will be removed even if any containers are left on it.

4. Once this operation is completed, you can update the list of hardware nodes to see the performed changes.

refresh hardware nodes information

Click the Refresh button to update the list instantly, or wait for the regular automatic update (performed every 20 seconds).