The redeployment flow described below was used in Jelastic versions 5.6 – 5.7.6 and has been deprecated since 5.7.7.
To keep client applications safe during the container redeploy operation, Jelastic PaaS creates the appropriate container backups. Below, we overview the algorithm in detail:
1. Ensure there is enough disk space to create a copy of the initial container or, if a node group is being redeployed, of the whole layer.
Upon failure, the initial container remains unaffected. The appropriate notification (i.e. not enough disk space on the hardware node) is shown to users in the dashboard.
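Conceptually, this pre-check simply compares the free space on the hardware node against the size of the copy to be created. A minimal sketch (the path and required size are illustrative placeholders, not Jelastic's actual values or implementation):

```shell
# Sketch of the pre-redeploy free-space check.
# The 1 GiB threshold and the "/" mount point are illustrative only.
required_bytes=$((1 * 1024 * 1024 * 1024))
available_bytes=$(df --output=avail -B1 / | tail -n1 | tr -d ' ')

if [ "$available_bytes" -lt "$required_bytes" ]; then
    # Corresponds to the "not enough disk space on the hardware node" notification.
    echo "not enough disk space on the hardware node" >&2
    exit_code=1
else
    echo "enough space: ${available_bytes} bytes available"
    exit_code=0
fi
```

The key property of this step is that it runs before anything is touched, so a failure here leaves the initial container completely intact.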
2. A new node with the required image tag is created.
Upon failure, the initial container remains unaffected. The appropriate error (e.g. registry unavailability, unsupported image OS, etc.) is shown to users in the dashboard.
3. The new node is provided with all the container settings and limits of the initial one.
Upon failure, the initial container remains unaffected. No user-facing errors are reported during this step.
4. The new node is initiated through the docker setup and docker aftercreate procedures. The initial container is stopped.
Upon failure, the initial (unaffected) container is started, while the new one is stopped and, based on the settings, is either stored for analysis or removed. The appropriate error (image could not be run) is shown to users in the dashboard, with additional info stored in the logs.
5. The initial container is switched to the mounted state, allowing user data (volumes) to be copied to the new node via the rsync tool.
Upon failure, the initial (unaffected) container is started, while the new one is stopped and, based on the settings, is either stored for analysis or removed. The same error as in the previous step is shown to users in the dashboard, and the extended rsync error description is added to the logs.
6. The new node is tweaked and finalized (i.e. NFS configuration files, user SSH keys, firewall rules, etc. are added).
Note: In the current redeployment implementation, only the passwords of the users that exist on both the initial and new containers are transferred (i.e. not the users themselves).
Upon failure, the initial (unaffected) container is started, while the new one is stopped and, based on the settings, is either stored for analysis or removed. No user-facing errors are reported during this step.
7. Both containers are stopped, and their CTID and UID are swapped. The new node is started.
Upon failure, the IDs are swapped back, the initial container is started, and the redeploy error is shown to the user in the dashboard.
Now, the customer sees the redeployment operation marked as successful in the dashboard and can work with the new node. Meanwhile, the initial container is temporarily stored as a backup, allowing its pre-redeployment state to be restored if needed.
With this flow, the initial container is modified only during the last step, and all potentially harmful configuration is performed on a separate node, ensuring that the user's container can always be restored.
Restoring Container from Backup
To be able to roll back a customer's container to its pre-redeployment state, you need to configure the storing of container backups for the redeploy operation. By default, backups are stored for a week, and only the latest container update (per account) can be reverted.
If there are problems with the container/application after a successful redeploy, the user can contact your support team and request restoration of the previously used container version. To help your customer, use the following administration > cluster private API methods (which can be run by a cluster admin only):
- GetBackups(appid, session, nodeId) - returns a list of backups assigned to the specified node ID
- RestoreBackup(appid, session, nodeId, backupId) - substitutes the specified container with the required backup
The flow is simple: call the GetBackups method to learn the ID of the necessary backup, then pass it to the RestoreBackup call to roll back.
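As a sketch of the parameter wiring, the two calls could be issued as plain HTTP requests. Note that the platform URL, endpoint paths, and all parameter values below are placeholders (the exact transport and URL layout depend on your installation); only the method names and parameters come from the description above:

```shell
# Hypothetical invocation of the private cluster API methods.
# PLATFORM_URL, appid, session, nodeId and backupId are placeholders;
# the endpoint paths are assumptions, not documented routes.
PLATFORM_URL="https://app.example.com"
appid="cluster"
session="ADMIN_SESSION_TOKEN"
nodeId=12345

# 1. List the backups assigned to the node to learn the backup ID.
get_backups_url="${PLATFORM_URL}/GetBackups?appid=${appid}&session=${session}&nodeId=${nodeId}"

# 2. Roll the container back to the chosen backup
#    (backupId is taken from the GetBackups response).
backupId=67890
restore_url="${PLATFORM_URL}/RestoreBackup?appid=${appid}&session=${session}&nodeId=${nodeId}&backupId=${backupId}"

echo "$get_backups_url"
echo "$restore_url"
# The requests themselves could then be sent with e.g.:
#   curl -s "$get_backups_url"
#   curl -s "$restore_url"
```

Since these are cluster-admin-only methods, the session value must belong to a cluster admin account.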