VZ-only Installation Requirements

This guide lists the specific requirements for the VZ-only Jelastic installation:

Tip: Check the general concepts and information on other possible installation scenarios.

Server Requirements

A Jelastic cluster is composed of 2 types of servers:
  • infrastructure nodes or infra nodes - where the Jelastic services are running
  • user nodes - where the users' applications are running

The requirements for these servers are slightly different:

Infra Nodes

You need to provide at least 2 servers (the exact number is agreed upon based on the specifics of the particular installation), which are required to enable High Availability. You can start with just one server, but you will need to provide the second one later, before going into production (commercial use).

Hardware requirements for infra nodes:
  • CPUs
    • x86-64 platform with Intel VT-x or AMD-V hardware virtualization support (Intel platform is preferred)
    • minimum of 8 cores per node (physical cores, not Hyper-threaded ones)
    • low voltage CPUs (e.g., Intel Atom) are strongly discouraged due to their poor performance
  • RAM
    • 24 GB minimum, 32 GB or more recommended
  • Network
    • 2 network cards with 1 Gbps speed each
  • Storage
    • local or SAN disks can be used; disks must belong to a single infra node only
    • if SAN disks are used, multipathing is strongly recommended
    • storage reliability and redundancy are required:
      • hardware RAID1 disk(s) are preferred
      • LVM/dmraid Linux mirroring is acceptable
    • at least about 600 IOPS is recommended
    • storage performance of the /vz volume is vital for overall cluster performance. The block device has to meet the sustained disk I/O requirements for read, write, random read and random write (100 MBps, 60 MBps, 4 MBps and 1 MBps respectively); see the example check after this list. Using an SSD for /vz caching is recommended.
    • for SATA/SAS devices 6 Gbps throughput is strongly recommended (for both data/cache drives and controller)
    • for the operating system, you need to provide 100 GB of usable storage (i.e., storage that is already mirrored)
    • for Jelastic services, you need to provide another 500 GB of usable storage (it is acceptable to use the same mirror/RAID1 volume)
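
A quick way to sanity-check whether the /vz block device meets the sustained I/O figures above is a short fio run. This is a minimal sketch; the directory, file size and runtime are assumptions to adjust for your hardware, and the reported MBps/IOPS should be compared against the requirements listed above.

    # sequential read and write throughput on the /vz filesystem
    fio --name=seq-read --directory=/vz --rw=read --bs=1M --size=4G --direct=1 --runtime=60 --time_based
    fio --name=seq-write --directory=/vz --rw=write --bs=1M --size=4G --direct=1 --runtime=60 --time_based
    # random read and write throughput (also reports IOPS)
    fio --name=rand-read --directory=/vz --rw=randread --bs=4k --size=4G --direct=1 --runtime=60 --time_based
    fio --name=rand-write --directory=/vz --rw=randwrite --bs=4k --size=4G --direct=1 --runtime=60 --time_based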

User Nodes

You need to provide at least 2 servers, which are necessary for high availability of user applications. More servers can be added later on to cover the growth in users/load.

Hardware requirements for user nodes:
  • CPUs
    • x86-64 platform with Intel VT-x or AMD-V hardware virtualization support (Intel platform is preferred)
    • minimum of 12 cores per node, 32 cores or more recommended (physical cores, not Hyper-threaded ones)
    • low voltage CPUs (e.g., Intel Atom) are strongly discouraged due to their poor performance
  • RAM
    • 16 GB minimum, 32 GB or more recommended
  • Network
    • 2 network cards with 1 Gbps speed each
  • Storage
    • local or SAN disks can be used; disks must belong to a single user node only
    • if SAN disks are used, multipathing is strongly recommended
    • for SATA/SAS devices 6 Gbps throughput is strongly recommended (for both data/cache drives and controller)
    • for the operating system, you need to provide 70 GB or more of RAID1 or mirrored storage
    • for user containers, storage reliability, high performance and redundancy are required:
      • hardware RAID1 or RAID10 disk(s) are strongly recommended
      • at least about 1200 IOPS is highly recommended
      • storage performance of the /vz volume is vital for the performance of end-users’ environments. The block device has to meet the sustained disk I/O requirements for read, write, random read and random write (460 MBps, 120 MBps, 8 MBps and 2 MBps respectively); the same kind of check shown under the infra node requirements can be reused here. Using an SSD for /vz caching is strongly recommended. Only enterprise-grade SSD disks are allowed in production configurations.
    • sizing rules and recommendations for the user containers’ file system:
      • one user container occupies from 700 MB to 1.7 GB; therefore, to provide the required space for about 1000 containers per node, you will need at least ~1 TB, plus another 500 GB - 1 TB of space for user data inside the containers
      • the usual recommendation is to have 1-3 or more TB of usable storage per user node
      • however, you can consider the “grow /vz fs as you grow” scenario, starting with a 600-800 GB storage size; please consult with the Jelastic Operations team in this case
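
For this “grow as you grow” scenario, a minimal sketch of extending an LVM-backed /vz is shown below; the volume group and logical volume names are illustrative, and the exact layout should be agreed with the Jelastic Operations team.

    # assuming /vz is an ext4 filesystem on an LVM logical volume (names are illustrative)
    lvextend -L +200G /dev/vg0/vz     # add 200 GB to the logical volume
    resize2fs /dev/vg0/vz             # grow the ext4 filesystem online to fill the new space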

Running Jelastic on Virtual Machines

Jelastic uses Virtuozzo as the underlying virtualization technology. The platform’s infra and user nodes can run on virtual machines. Virtualization technologies compatible with Jelastic PaaS:
  • KVM
  • VMware ESXi
  • Virtuozzo VM
  • Microsoft Hyper-V

Both bare-metal servers and VMs can be used for production deployment; however, bare-metal servers usually provide better performance.
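
Whether the server is bare metal or a VM, you can quickly verify from Linux that the hardware virtualization extensions required above (Intel VT-x or AMD-V) are exposed to the operating system; a result of 0 means they are missing or disabled in the BIOS/hypervisor.

    # count CPU flags indicating hardware virtualization support (vmx = Intel VT-x, svm = AMD-V)
    grep -cE 'vmx|svm' /proc/cpuinfo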

Operating System Requirements

Common

CentOS 7, RHEL 7 or Virtuozzo 7 should be installed on all infrastructure and user nodes. These nodes will later be redeployed to Virtuozzo 7, preserving the mandatory system configuration files. Please note that the partitions associated with the /boot, rootfs and /vz mount points will be formatted, and all data on them will be lost.

Partitioning

Below are recommendations on partitioning for the Virtuozzo Server.

Storage requirements and partitioning for VZ-based infrastructure and user nodes:

  • Storage for operating system partitions:
    • /boot - 1 GB, ext4
    • / - 40 GB, ext4
    • swap - depends on RAM, from 2 to 32 GB:

      RAM                   Swap
      up to 4 GB            2 GB
      4-16 GB               4 GB
      16-64 GB              8 GB
      64-256 GB             16 GB
      more than 256 GB      32 GB
  • Storage for infra containers on the infrastructure nodes:
    • should be a single ext4 file system, mounted as /vz
    • please use all the available storage that remains after creating the /, /boot and swap partitions for this /vz file system
  • Storage for user environments on the user nodes:
    • should be a single ext4 file system, mounted as /vz
    • sizing rules can be seen above
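
As a minimal illustration of this layout (not a prescriptive procedure), the partitions could be created as follows; the device name and partition numbers are assumptions and will differ per server.

    # illustrative only: /boot, / and swap on /dev/sda, the remaining space becomes /vz
    mkfs.ext4 /dev/sda1               # /boot, 1 GB
    mkfs.ext4 /dev/sda2               # /, 40 GB
    mkswap    /dev/sda3               # swap, sized per the table above
    mkfs.ext4 /dev/sda4               # /vz, all remaining space
    mount /dev/sda4 /vz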


Other

Server timezone must be set to UTC during the Jelastic PaaS installation, and must not be updated afterward.
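
On CentOS/RHEL/Virtuozzo 7 the timezone can be set and verified with timedatectl, for example:

    timedatectl set-timezone UTC      # set the server timezone to UTC
    timedatectl                       # the output should report the UTC time zone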

Hardware Requirements

Network

  • All of the servers should have at least 2 network interfaces: a WAN interface with a public IP address and a LAN interface connected to the managed port switch
  • Internal (LAN) network should operate at 1 Gbps speed or faster
  • Allocated internal network subnet mask should be at least /20; however, /8 or /16 is recommended

    Note: The 10.0.0.0/24 network range is reserved for NGINX HA applications and should never be used for nodes and infra/end-user containers.
  • External (WAN) connection should provide at least 100 Mbps of bandwidth; 1 Gbps is recommended
  • Each hardware node (of both user and infra types) must have a public IP address assigned to the external (WAN) connection
  • 2 or more public IP addresses for Jelastic Shared Load Balancers
  • 1 or more public IP addresses for Jelastic SSH Gate
  • Additional public IP addresses for end-user containers
  • All outbound traffic should be unblocked
  • Firewalls should be configured - please contact Jelastic Operations for details
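
The interface count, link speed and addressing listed above can be sanity-checked on each node; the interface names below (eth0 for WAN, eth1 for LAN) are assumptions and will differ per server.

    ip addr show                      # list all interfaces and their assigned addresses
    ethtool eth0 | grep Speed         # WAN NIC: at least 100 Mbps, 1 Gbps recommended
    ethtool eth1 | grep Speed         # LAN NIC: 1 Gbps or faster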

DNS

DNS zone delegation must be already configured:
  • Both domain names, infra-domain.hosterdomain.com and user-domain.hosterdomain.com, should be delegated to platform resolvers.
  • DNS server names and addresses:
    • ns1.infra-domain.hosterdomain.com and ns2.infra-domain.hosterdomain.com
    • ns1.user-domain.hosterdomain.com and ns2.user-domain.hosterdomain.com
    • 2 IP addresses allocated for these DNS servers (see above in the Network Requirements section)
  • Zone records example (make sure these records are part of the parent zone file for hosterdomain.com) - note the 4 glue records below:

    infra-domain.hosterdomain.com IN NS ns1.infra-domain.hosterdomain.com
    infra-domain.hosterdomain.com IN NS ns2.infra-domain.hosterdomain.com
    ns1.infra-domain.hosterdomain.com IN A 1.1.1.1 ; glue records, in case
    ns2.infra-domain.hosterdomain.com IN A 2.2.2.2 ; they are needed
    
    user-domain.hosterdomain.com IN NS ns1.user-domain.hosterdomain.com
    user-domain.hosterdomain.com IN NS ns2.user-domain.hosterdomain.com
    ns1.user-domain.hosterdomain.com IN A 1.1.1.1 ; glue records, in case
    ns2.user-domain.hosterdomain.com IN A 2.2.2.2 ; they are needed

Make sure there is no SOA record for either domain (infra-domain.hosterdomain.com and user-domain.hosterdomain.com) in the zones on your DNS servers - otherwise, the delegation will not work properly.
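
Once the delegation and glue records are in place, they can be verified from any machine with dig; the domain names below follow the placeholders used in the example above.

    # the parent zone should return the NS records for the delegated domains
    dig NS infra-domain.hosterdomain.com +short
    dig NS user-domain.hosterdomain.com +short
    # +trace walks the delegation from the root and helps spot missing glue records
    dig +trace infra-domain.hosterdomain.com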

SSL

Wildcard SSL certificates for both selected DNS domains and all of their subdomains must be provided: infra-domain.hosterdomain.com, *.infra-domain.hosterdomain.com, user-domain.hosterdomain.com and *.user-domain.hosterdomain.com.
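
Whether a certificate actually covers both the bare domain and the wildcard can be checked by listing its Subject Alternative Names with openssl; the certificate file name below is an assumption.

    # list the Subject Alternative Names of the certificate
    openssl x509 -in wildcard-infra-domain.pem -noout -text | grep -A1 'Subject Alternative Name'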

Uploader Storage

Uploader storage is a file system mounted by Jelastic via NFS or imported as a SCSI LUN (for example, via iSCSI) with an ext4 filesystem on top of it. In some cases, the uploader storage can be shared with the Docker templates cache storage.
  • external shared storage for the file system is recommended
  • it is possible (but not recommended) to host this file system on a local disk of one of the infrastructure hardware nodes
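
A minimal sketch of mounting an NFS-based uploader storage, assuming a hypothetical storage host and export path:

    # host, export path and mount point are illustrative
    mkdir -p /mnt/uploader
    mount -t nfs storage.internal:/export/uploader /mnt/uploader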

Docker Templates Cache Storage

Docker templates cache storage is a file system imported as a SCSI LUN (for example, via iSCSI) with an ext4 filesystem on top of it. In some cases, the Docker templates storage can be shared with the uploader storage.
  • external shared storage for the file system is recommended
  • it is possible (but not recommended) to host this file system on a local disk of one of the infrastructure hardware nodes
  • all data stored on the Docker templates cache storage is volatile, so no redundancy or backups of the cache contents are required; however, cluster-wide Docker environment creation will become unavailable if this storage fails
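
A minimal sketch of importing such a LUN over iSCSI and creating the ext4 filesystem; the portal address, target IQN, device path and mount point are assumptions.

    # discover and log in to the iSCSI target (portal and IQN are illustrative)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.5
    iscsiadm -m node -T iqn.2020-01.internal.storage:templates -p 192.168.10.5 --login
    # create an ext4 filesystem on the imported LUN and mount it
    mkfs.ext4 /dev/sdc
    mount /dev/sdc /mnt/docker-templates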

Operating System Settings

Jelastic PaaS requires an account with user ID and group ID set to 0 (the root account) on the VZ-based hardware nodes. Password-based authentication must be enabled for this account on all of the VZ-based hardware nodes.
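
On CentOS/RHEL/Virtuozzo 7 you can confirm that sshd effectively allows password-based root logins (both values should be yes):

    # print the effective sshd configuration for the relevant options
    sshd -T | grep -Ei '^(permitrootlogin|passwordauthentication)'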