Hardware Regions

Click on the Regions option in your JCA > Cluster menu section to see the hardware structure of the platform:

  • Regions (or hardware regions) - independent hardware sets from different data centers; each region can contain multiple host groups
  • Host Groups (or environment regions) - a separate set of servers (hosts) within the confines of a particular region with its own options, efficiency, and rules for resource charging
Note: Hardware regions aren’t visible to end-users via the dashboard, which operates with host groups (availability can be configured separately for each user group).

JCA hardware regions

Here, all the crucial information on Regions is displayed through the following columns:

  • Name of a hardware region or its comprised host group(s)
  • Domain assigned to the region
  • SSL certificates configuration for the hardware region
  • Subnet provided for the region
  • Migration shows whether users are allowed to migrate environments from/to the current hardware region
  • Status of a region/host group (could be either ACTIVE or under MAINTENANCE)
  • Description with some optional information on a region or host group

Tip: If you want to benefit from providing multiple regions, read the appropriate documentation before applying any changes.

Use the tools panel above the regions list to perform the following operations:

Add New Region

Follow the next steps to add a new hardware region to your Jelastic cluster:

Note: Before adding a new region, consider the following prerequisites:

  • hosts must be configured according to hardware requirements
  • at least two internal and two external IPs must be reserved for shared load balancers (resolvers)
  • new region domain delegation must be done to the IPs from the previous point and according to DNS Zones Delegation
  • firewall should be checked and, if necessary, set up
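The delegation prerequisite can be sanity-checked offline. The helper below is a hypothetical sketch (not part of Jelastic tooling): feed it the A records your region's NS hosts resolve to (e.g. gathered with `dig +short`) and the IPs you reserved for the resolvers, and it confirms they match.

```shell
#!/bin/bash
# Hypothetical helper (not part of Jelastic tooling): compare the IPs the
# region's NS records point to against the IPs reserved for the resolvers.
# Both arguments are space-separated IP lists; order does not matter.
delegation_matches() {
  # intentional word splitting of $1/$2 to get one IP per line before sorting
  [ "$(printf '%s\n' $1 | sort)" = "$(printf '%s\n' $2 | sort)" ]
}

# Example with made-up addresses:
if delegation_matches "203.0.113.10 203.0.113.11" "203.0.113.11 203.0.113.10"; then
  echo "delegation targets match the reserved IPs"
else
  echo "delegation mismatch - re-check your DNS zone" >&2
fi
```

In practice you would pass in the live `dig` output as the first argument and the reserved resolver IPs as the second.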

1. Click the Add Region button at the top pane of the Regions section:

JCA add region

Within the opened Add Region frame, you need to fill in the required details.

2. Within the first Region Settings section, specify the following information:

  • Unique Name - unique identifier for the region (cannot be changed later)

  • Display Name - changeable region alias, which is displayed in JCA (10 characters max)

  • Domain - hostname assigned to a new region

    Note: The appropriate domain name should be purchased beforehand using any preferred domain registrar.

  • Status - the initial state should be set as MAINTENANCE to avoid false monitoring alerts during region addition

  • Subnet - a dedicated internal subnet for the user nodes and traffic routing between different hardware regions

  • Start and End IP - range of the IP addresses for containers created in this region (cannot exceed the specified subnet)

  • Description - short information on the current hardware region displayed in JCA (optional)

  • Allow migration from/to regions - tick the checkbox to allow environment migration from/to this region by end-users

    Note: This parameter controls the permission for migration across different hardware regions; however, transferring between host groups of the same region cannot be disabled.
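Before submitting, you may want to double-check that the Start and End IPs actually fall inside the subnet you entered. Below is a minimal sketch of such a check (the helper names are assumptions, not part of JCA):

```shell
#!/bin/bash
# Hypothetical helpers: verify that a Start/End IP range fits inside a CIDR subnet.

ip2int() {                        # dotted quad -> 32-bit integer
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

in_subnet() {                     # in_subnet CIDR IP -> exit 0 if IP is inside
  local net=${1%/*} bits=${1#*/} ip=$2
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip2int "$ip") & mask )) -eq $(( $(ip2int "$net") & mask )) ]
}

range_in_subnet() {               # range_in_subnet CIDR START_IP END_IP
  in_subnet "$1" "$2" && in_subnet "$1" "$3"
}

# Example with made-up values:
range_in_subnet 10.100.0.0/16 10.100.1.1 10.100.255.254 \
  && echo "range fits the subnet"
```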

3. In the Name Servers section, specify a pair (or several pairs) of Public IPv4 and Internal IPv4 addresses. They will be used by the shared load balancers as the region entry point and, at the same time, as its internal and external DNS servers.

4. The last Docker Host Settings section configures a separate Docker Engine module for this particular hardware region:

  • Host - domain or IP of your Docker Host
  • SSH and TCP Port - ports for connections via the appropriate protocols
  • Login and Password - access credentials for the Docker Host

Once all the settings are specified, confirm the creation by clicking the Add button.

Add New Host Group

To add a new host group, follow the instructions below.

1. Click the Add Host Group button at the top of the Regions panel.

2. Within the opened Add Host Group dialog, fill in the given fields to provide the required data:

  • Unique Name - unique identifier for the host group (cannot be changed later)
  • Display Name - changeable host group name displayed in JCA and at the end-users' dashboard (10 characters max)
  • Status - initial state of the host group, i.e. the one set after creation (ACTIVE or MAINTENANCE)
  • Description - short information on the current host group displayed in JCA (optional)
  • Region - hardware region this host group should be assigned to (use the drop-down list to select an existing one or to jump to the Add Region dialog)

JCA add host group

Click Add to proceed.

3. If internal routing between regions is already set up, proceed to step 6. If not, VPN tunnels and GRE links must be created.

3.1. Install required software and create keys at the infrastructure host:

yum install libreswan

ipsec initnss --nssdir /etc/ipsec.d
ipsec newhostkey --bits 4096 --output /etc/ipsec.d/ipsec.secrets
ipsec showhostkey --list

ipsec showhostkey --left --ckaid $ckaid_from_list$

As a result of this command, you’ll get a key. Later on in this guide, we’ll refer to it as $(infranode key).

Repeat this step on the user hosts of the new region to get $(usernode key).

3.2. Create the following configs:

  • /etc/ipsec.d/default.conf on user hosts
conn dpd
	dpddelay = 15
	dpdtimeout = 30
	dpdaction = restart

conn self
	also = dpd
	left = $(usernode IP)
	leftid = $(usernode hostname)
	leftrsasigkey = $(usernode key)
	authby = rsasig
	type = tunnel
	compress = no
	ike = aes128-sha1;modp1024
	esp = aes128-sha1;modp1024

conn gre
	leftprotoport = gre
	rightprotoport = gre
  • /etc/ipsec.d/$(infranode hostname).conf on user hosts
conn $(infranode hostname)
	also = self
	also = gre
	rightid = $(infranode hostname)
	right = $(infranode IP)
	rightrsasigkey = $(infranode key)
	auto = start
  • /etc/ipsec.d/default.conf on infra hosts
conn dpd
	dpddelay = 15
	dpdtimeout = 30
	dpdaction = restart

conn self
	also = dpd
	left = $(infranode IP)
	leftid = $(infranode hostname)
	leftrsasigkey = $(infranode key)
	authby = rsasig
	type = tunnel
	compress = no
	ike = aes128-sha1;modp1024
	esp = aes128-sha1;modp1024

conn gre
	leftprotoport = gre
	rightprotoport = gre
  • /etc/ipsec.d/$(usernode hostname).conf on infra hosts
conn $(usernode hostname)
	also = self
	also = gre
	rightid = $(usernode hostname)
	right = $(usernode IP)
	rightrsasigkey = $(usernode key)
	auto = start

3.3. Bring the tunnels up:

  • on the user host
# ipsec auto --add $(infranode hostname)
# ipsec auto --up $(infranode hostname)
  • on the infra host
# ipsec auto --add $(usernode hostname)
# ipsec auto --up $(usernode hostname)

3.4. Create GRE links:

  • on the user host
# $(infranode hostname)
DEVICE=$linkname
BOOTPROTO=none
ONBOOT=yes
TYPE=GRE
PEER_OUTER_IPADDR=$(infranode IP)
PEER_INNER_IPADDR=$(infranode GRE IP)  # internal IP within the GRE network
MY_OUTER_IPADDR=$(usernode IP)
MY_INNER_IPADDR=$(usernode GRE IP)  # internal IP within the GRE network
  • on the infra host
# $(usernode hostname)
DEVICE=$linkname
BOOTPROTO=none
ONBOOT=yes
TYPE=GRE
PEER_OUTER_IPADDR=$(usernode IP)
PEER_INNER_IPADDR=$(usernode GRE IP)  # internal IP within the GRE network
MY_OUTER_IPADDR=$(infranode IP)
MY_INNER_IPADDR=$(infranode GRE IP)  # internal IP within the GRE network
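The two ifcfg files above follow the same template, so they can be generated instead of written by hand. The sketch below is a hypothetical helper (the function name and the IFCFG_DIR override are assumptions; on a real host the files live in /etc/sysconfig/network-scripts):

```shell
#!/bin/bash
# Hypothetical generator for the GRE ifcfg files from step 3.4.
# IFCFG_DIR defaults to the current directory so the sketch is safe to try;
# point it at /etc/sysconfig/network-scripts on a real host.
write_gre_ifcfg() {
  # write_gre_ifcfg LINKNAME PEER_OUTER PEER_INNER MY_OUTER MY_INNER
  local cfg=${IFCFG_DIR:-.}/ifcfg-$1
  cat > "$cfg" <<EOF
DEVICE=$1
BOOTPROTO=none
ONBOOT=yes
TYPE=GRE
PEER_OUTER_IPADDR=$2
PEER_INNER_IPADDR=$3
MY_OUTER_IPADDR=$4
MY_INNER_IPADDR=$5
EOF
  echo "wrote $cfg"
}

# Example with made-up addresses (user host side):
write_gre_ifcfg link1 203.0.113.5 172.16.0.1 203.0.113.9 172.16.0.2
```

On the infra host you would call it with the peer/my arguments swapped, matching the second file above.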

4. Set up internal routing. The BIRD Internet Routing Daemon can be used to automate the process.

Install bird on the infra and user hosts:

# yum install bird

Set up bird config:

router id from $(infranode IP);

protocol kernel {
    persist;          # Don't remove routes on bird shutdown
    scan time 20;     # Scan kernel routing table every 20 seconds
    export all;       # Default is export none
    import all;
    learn;
}

protocol device {
    scan time 10;     # Scan interfaces every 10 seconds
}

protocol ospf {
    tick 2;
    rfc1583compat yes;
    ecmp yes;
    merge external yes;

    import filter {
        krt_prefsrc = $(infranode IP);
        accept;
    };

    area 0 {
        interface "link*" {
            hello 10;
            retransmit 6;
            cost 15;
            transmit delay 5;
            dead count 5;
            wait 50;
            type pointopoint;
            authentication cryptographic;
            password "$birdpassword";
        };

        interface "br1" {
            stub no;
            cost 10;
            dead 15;
            hello 10;
            type broadcast;
            authentication cryptographic;
            password "$birdpassword";
        };
    };
}

Repeat this step on the other hosts in the region.

5. Configure the /etc/sysctl.conf file.

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.conf.br1.rp_filter = 2
net.ipv4.conf.$linkname.rp_filter = 2   # Note the $linkname here from step 3.4
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.proxy_ndp = 1
fs.odirect_enable = 1

6. Next, add a host to this newly created host group.

6.1. Check /etc/vz/vz.conf. If the VE_ROUTE_SRC_DEV parameter is commented out or indicates an incorrect device, fix the issue and save the file.
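A quick way to perform this check from the shell (the helper name is hypothetical; the default path is the standard /etc/vz/vz.conf):

```shell
#!/bin/bash
# Hypothetical check: print the active VE_ROUTE_SRC_DEV line, if any.
# A commented-out line (leading '#') is ignored, matching step 6.1.
check_ve_route_src_dev() {
  local conf=${1:-/etc/vz/vz.conf}
  grep -E '^[[:space:]]*VE_ROUTE_SRC_DEV=' "$conf" 2>/dev/null
}

check_ve_route_src_dev || echo "VE_ROUTE_SRC_DEV is missing or commented out" >&2
```

If the printed device is wrong (or nothing is printed), edit the file before proceeding.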

6.2. If your DOCKER_HOST is on the docker-engine host and you are deploying a vz7 host, append the following line to the /etc/yum.conf file:

echo 'exclude=docker-ce' >> /etc/yum.conf

6.3. Check routes from the new region to infra/user hosts in this and other regions. They can be set automatically via the bird daemon.

6.4. Start the host installation via JCA.

7. Configure shared load balancers (SLB).

7.1. Add the region network to the jelastic.net.subnetworks system setting in JCA.

7.2. Add SLB IPs (both external and internal) to the jelastic.isolation.infra.ips and jelastic.isolation.infra.ips.all system settings in JCA. If isolation is enabled on the platform, you need to disable and re-enable it to apply these new settings.

7.3. To create a shared load balancer for the new region, connect to the new host and create the config.ini file:

[general]
VAR_JELASTIC_NETWORK=${PLATFORM_NETWORK}
VAR_JELASTIC_DOMAIN_ZONE=${PLATFORM_DOMAIN}
[zookeeper]
CTID=300
IPS=${ZK_INT_IP}
[jelastic-db]
CTID=301
IPS=${DB_INT_IP}
[resolver5]
CTID=$new_resolver_CTID
IPS=${RSLV_EXT_IP} ${RSLV_INT_IP}
DOMAIN=${REGION_DOMAIN}

7.4. Download the create_docker.sh script.

wget -qO- http://dot.jelastic.com/download/graf/migration/migration_scripts-5.4.tar.gz | tar -xz

Edit it to specify the platform version in the DOCKER_VERSION=""; line.

For example, if deploying a region to Jelastic 5.9-3, set it as follows: DOCKER_VERSION="5.9-3";
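The edit can also be scripted. A hedged sketch (the helper name is an assumption; the sed expression assumes the line starts with DOCKER_VERSION=):

```shell
#!/bin/bash
# Hypothetical one-liner wrapper: pin the platform version in create_docker.sh.
set_docker_version() {
  # set_docker_version VERSION [SCRIPT]
  sed -i "s/^DOCKER_VERSION=.*/DOCKER_VERSION=\"$1\";/" "${2:-create_docker.sh}"
}
```

For example, `set_docker_version 5.9-3` rewrites the line in place.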

7.5. Run the script to create a new shared load balancer.

./create_docker.sh resolver5

Add all regions' networks to this SLB via the /var/lib/jelastic/customizations/ipconfig.cfg file.

7.6. Update ZooKeeper environment variables (/.jelenv) by adding shared load balancer’s internal IP to OPT_JELASTIC_IPS and new network to JELASTIC_NETWORK. Restart the ZooKeeper service to apply changes.

7.7. Fix nameservers for SLB containers.

vzctl set $new_resolver_CTID --nameserver @resolver1_internal_ip@ --nameserver @resolver2_internal_ip@ --nameserver 8.8.8.8 --save

7.8. Check all infrastructure containers and manually add the region network and routes.

For example: 10.100.0.0/16 is the internal network of the new region, and 10.100.1.31/32, 10.100.1.32/32 are the resolver IPs.
 
Check and add iptables rules/routes to the infra containers:
CT 300 [zookeeper]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m tcp --dport 2181 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m tcp --dport 2181 -j ACCEPT
 
CT 301 [jelastic-db]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m tcp --dport 3306 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m tcp --dport 3306 -j ACCEPT
 
CT 304 [gate]
Firewall:
iptables -A LAN -s 10.100.1.31/32 -j ACCEPT
iptables -A LAN -s 10.100.1.32/32 -j ACCEPT
 
Routes (in case the gate has an external IP):
Was:
CT-304-bash-4.2# cat /etc/sysconfig/network-scripts/route-venet0
192.168.0.0/16 dev venet0 src 192.168.1.55
 
Now:
CT-304-bash-4.2# cat /etc/sysconfig/network-scripts/route-venet0
192.168.0.0/16 dev venet0 src 192.168.1.55
10.100.0.0/16 dev venet0 src 192.168.1.55
 
ip r a 10.100.0.0/16 dev venet0 scope link src 192.168.1.55
 
Add all networks to /var/lib/jelastic/customizations/ipconfig.cfg
 
CT 308 [jrouter]
iptables -A INPUT -s 10.100.0.0/16 -p tcp -m multiport --dports 80,443,8080,8081 -j ACCEPT
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m multiport --dports 21,6010:6020 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m multiport --dports 21,6010:6020 -j ACCEPT
 
CT 314 [awakener]
iptables -A HTTP_INTERNAL -s 10.100.1.31/32 -j ACCEPT
iptables -A HTTP_INTERNAL -s 10.100.1.32/32 -j ACCEPT
 
CT 315 [uploader]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m multiport --dports 80,443,8080,8081 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m multiport --dports 80,443,8080,8081 -j ACCEPT
 
CT 317 [zabbix]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m tcp --dport 80 -j ACCEPT
 
CT 318 [webgate]
iptables -A INPUT -s 10.100.1.31/32 -p tcp -m multiport --dports 80,443,8080,8743 -j ACCEPT
iptables -A INPUT -s 10.100.1.32/32 -p tcp -m multiport --dports 80,443,8080,8743 -j ACCEPT

7.9. Run service discovery:

jem docker run --ctid $new_resolver_CTID

Check the results in /vz/root/$new_resolver_CTID/var/log/discovery.log and, if everything is OK, disable discovery:

jem docker addenv --env "SKIP_DISCOVERY=MQ==" --ctid $new_resolver_CTID
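The SKIP_DISCOVERY value above appears to be base64-encoded: MQ== decodes to the string 1 (this reading of the format is an assumption based on the example above). If you ever need to encode a different value:

```shell
# "MQ==" is the base64 encoding of the string "1":
printf '%s' 1 | base64        # prints MQ==
printf '%s' MQ== | base64 -d  # prints 1
```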

8. Provide Let’s Encrypt SSL certificates via JCA.

9. If needed, apply customizations and run J-runner tests for the new region.

10. Synchronize new SLBs in the patcher.

11. Finally, assign the host group to the appropriate user Groups via the Regions & Pricing tab.

host group availability

Afterward, your host group will appear in the topology wizard of the Dev dashboard as a new environment region.

Edit Region/Host Group

You can adjust the existing regions and host groups by simply double-clicking on the required item or using the Edit button at the top of the Regions panel.

JCA edit host group

Within the corresponding region/host group Edit dialog, you can adjust everything (same as for the addition) except the Unique Name value.

Apply changes with the Save button at the bottom-right corner of the frame.

SSL Certificates for Regions

Using the SSL column within the Regions section, you can Add Certificates for your hardware regions or manage the already configured ones:

  • Edit - allows switching between the Let’s Encrypt and custom SSL certificates
  • Update - provides a new LE certificate for the hardware region (this option is hidden for custom SSL)
  • Remove - detaches the certificate from the region

SSL for hardware regions

1. While adding or editing your certificate, you can choose between two options:

  • Use Let’s Encrypt - automatically fetch and apply certificates from the Let’s Encrypt free and open Certificate Authority
  • Upload Custom Certificates - upload valid RSA-based Server Key, Intermediate Certificate (CA), and Domain Certificate files to automatically apply them. Self-signed certificates can be used as well, e.g. for testing purposes

add SSL for hardware region

Click Save to confirm changes.

2. If needed, you can configure the Let’s Encrypt certificates provisioning via the following System Settings:

  • jelastic.letsencrypt.renewal.days - displays an alert in JCA if any of the SSL certificates is valid for fewer days than the provided value (21, by default)
  • qjob.ssl_checker.cron_schedule - checks the status of the Let’s Encrypt SSL certificates for hardware regions and automatically updates those that are valid for fewer days than specified in the jelastic.letsencrypt.renewal.days setting; the default value is 0 0 15 * * ?, i.e. the job runs daily at 15:00
  • hcore.platform.admin.username - sets the platform admin email address, which receives notifications from Let’s Encrypt if any issue occurs

To update or remove a certificate, select the appropriate option from the list, and confirm the action via the pop-up window.

Remove Region/Host Group

Regions and host groups that are no longer needed can be deleted with the Remove button at the top tools panel.

JCA remove host group

Note: Hardware regions and host groups with at least one user container inside cannot be deleted. You need to migrate all the instances to another host before initiating the removal.

Confirm your decision via the pop-up window.

What’s next?