VZStorage Installation Requirements

Virtuozzo Storage is a solution that allows you to quickly and easily transform low-cost hardware and network equipment into protected, enterprise-level storage, such as a SAN (Storage Area Network) or NAS (Network Attached Storage).

The general structure of a Jelastic cluster with VZStorage is shown in the diagram below:

(Diagram: general structure of a Jelastic cluster with Virtuozzo Storage)

Detailed information on the Virtuozzo Storage cluster, its components, and installation requirements can be found in the linked document.

The guide below lists the specific requirements for running a Jelastic cluster with VZStorage:

Tip: Check the general concepts and information on other possible installation scenarios.

Jelastic Servers Requirements

A Jelastic cluster is composed of 2 types of servers, which have slightly different hardware requirements:
  • infrastructure nodes (infra nodes) - where the Jelastic services are running
  • user nodes - where the users' applications are running

Note: All servers should be located within a single data center to ensure a low-latency network.


Infra Nodes

You need to provide 2 servers to enable high availability. You can start with just one server, but you will need to provide the second one later, before going into production (commercial use).

Hardware requirements for infra nodes:
  • CPUs
    • x86-64 platform with Intel VT-x or AMD-V hardware virtualization support (Intel platform is preferred)
    • minimum of 8 cores per node (physical cores, not Hyper-threaded ones)
    • low-voltage CPUs (e.g. Intel Atom) are strongly discouraged due to their poor performance
  • RAM
    • 24 GB minimum, 32 GB or more recommended
  • Network
    • 2 network cards with 1 Gbps speed each
    • if a server is also part of the Virtuozzo Storage cluster, an additional internal network is required for VZStorage; please additionally consider the requirements for the Virtuozzo Storage cluster network
  • Storage
    • local or SAN disks can be used; disks must belong to a single infra node only
    • in case of SAN disks usage, multipathing is strongly recommended
    • storage reliability and redundancy is required:
      • hardware RAID1 disk(s) are preferred
      • LVM/dmraid Linux mirroring is acceptable
    • at least ~600 IOPS is recommended
    • storage performance of the /vz volume is vital for overall cluster performance; the block device has to meet the requirements on sustained disk I/O: 100 MBps sequential read, 60 MBps sequential write, 4 MBps random read, and 1 MBps random write (a benchmark sketch follows this list). Using an SSD for /vz caching is strongly recommended; only enterprise-grade SSD disks are allowed in production configurations
    • for SATA/SAS devices 6 Gbps throughput is strongly recommended (for both data/cache drives and controller)
    • for the operating system, you need to provide 100 GB of usable storage (i.e. the already mirrored storage)
    • for Jelastic services, you need to provide another 500 GB of usable storage (it’s ok to use the same mirror / RAID1 volume)
  • infra nodes (servers) can also provide MDS and Chunk services for the Virtuozzo Storage cluster - in this case, please additionally consider the Virtuozzo Storage cluster requirements
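
A rough way to check whether the /vz block device meets the sustained I/O figures above is to run a disk benchmark such as fio against the mounted volume. The sketch below is an example only: the test file path, size, and runtime are arbitrary values, and the test file should be removed afterward.

    # sequential read and write, 1 MB blocks
    fio --name=seqread  --filename=/vz/fio.test --size=4G --bs=1M --rw=read  --direct=1 --runtime=60 --time_based --group_reporting
    fio --name=seqwrite --filename=/vz/fio.test --size=4G --bs=1M --rw=write --direct=1 --runtime=60 --time_based --group_reporting
    # random read and write, 4 KB blocks
    fio --name=randread  --filename=/vz/fio.test --size=4G --bs=4k --rw=randread  --direct=1 --runtime=60 --time_based --group_reporting
    fio --name=randwrite --filename=/vz/fio.test --size=4G --bs=4k --rw=randwrite --direct=1 --runtime=60 --time_based --group_reporting
    rm /vz/fio.test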

User Nodes

User nodes are part of the Virtuozzo Storage cluster with the client role (so they can run VZ-based containers and virtual machines); in addition, any node can run MDS and/or chunk services, as long as the overall cluster meets the requirements.

For user environments, you need to provide at least 3 servers to ensure high availability of the users' application servers. If needed, you can add more servers later.

The general rule of thumb: the more resources you allocate, the more users you are able to serve.

Hardware requirements for user nodes:
  • CPUs
    • x86-64 platform with Intel VT-x or AMD-V hardware virtualization support (Intel platform is preferred)
    • minimum of 12 cores per node, 32 cores or more recommended (physical cores, not Hyper-threaded ones)
    • low-voltage CPUs (e.g. Intel Atom) are strongly discouraged due to their poor performance
  • RAM
    • 32 GB minimum, 64 GB or more recommended
  • Network
    • 2 network cards with 1 Gbps speed each
    • if a server is also part of the Virtuozzo Storage cluster, an additional internal network is required for VZStorage; please additionally consider the requirements for the Virtuozzo Storage cluster network
  • Storage
    • for the operating system, you need to provide 100-130 GB or more of RAID1 or mirrored storage
    • for users’ containers, Virtuozzo Storage is used

Running Jelastic on Virtual Machines

Jelastic uses Virtuozzo as the underlying virtualization technology. The platform infra and user nodes can be run on virtual machines. The following virtualization technologies are compatible with Jelastic PaaS:
  • KVM
  • VMware ESXi
  • Virtuozzo VM
  • Microsoft Hyper-V

Notes:

  • both bare-metal servers and VMs can be used for production deployment; however, bare-metal servers usually provide better performance
  • running MDS/Chunk services on VMs and mounting Virtuozzo Storage into VMs are not recommended for production use due to low performance

Operating System Requirements

Common

CentOS 7, RHEL 7, or Virtuozzo 7 should be installed on all infrastructure nodes, as well as on all Linux-based user nodes. They will later be redeployed to Virtuozzo 7, preserving the mandatory system configuration files. Please note that the partitions associated with the /boot, rootfs, and /vz mount points will be formatted, and all data on them will be lost.
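
To confirm that a node runs one of the supported releases before installation, the distribution version can be checked as follows (the output shown is just an example for CentOS 7):

    cat /etc/redhat-release
    # CentOS Linux release 7.9.2009 (Core)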

Partitioning

Below are the partitioning recommendations for Virtuozzo Storage-based installations.

VZ-based infrastructure and user node storage requirements and partitioning (an example sketch follows this list):
  • Storage for operating system partitions:
    • /boot - 1 GB, ext4
    • / - 40 GB, ext4
    • swap - depends on RAM, from 2 to 32 GB:

      RAM                 Swap
      up to 4 GB          2 GB
      4-16 GB             4 GB
      16-64 GB            8 GB
      64-256 GB           16 GB
      more than 256 GB    32 GB
  • Storage for infra containers on the infrastructure nodes:
    • should be a single ext4 file system, mounted as /vz
    • use all the available storage remaining after the creation of the /, /boot, and swap partitions for this /vz file system
  • Storage for local Virtuozzo data on the user nodes:
    • should be a single ext4 file system mounted as /vz, with 30-40 GB devoted to it on the same device where the OS partitions are located
    • storage for user containers will be mounted as Virtuozzo Storage FUSE from the respective cluster
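
To illustrate the layout above, here is a minimal kickstart-style partitioning sketch for a node with 32 GB of RAM and a single mirrored system volume; the exact sizes and the device selection are assumptions that should be adapted to your hardware (on user nodes, /vz would instead be a fixed 30-40 GB partition, as described above):

    part /boot --fstype=ext4 --size=1024
    part swap  --size=8192            # 8 GB of swap for 16-64 GB of RAM
    part /     --fstype=ext4 --size=40960
    part /vz   --fstype=ext4 --grow   # all remaining space goes to /vz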

Other

Server timezone must be set to UTC during the Jelastic PaaS installation, and must not be updated afterward.
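
On systemd-based distributions such as CentOS 7, the timezone can be set and verified with timedatectl, for example:

    timedatectl set-timezone UTC
    timedatectl status | grep 'Time zone'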

Hardware Requirements

Network

  • The names of the private and public network interfaces should match the eth*[0-9]|tr*[0-9]|wlan[0-9]|ath[0-9]|ip6tnl*[0-9]|mip6mnha*[0-9]|bond[0-9] pattern (for example, bond1 or wlan0); see the verification sketch after this list
  • All of the servers should have at least 2 network interfaces: a WAN interface with the public IP address and a LAN interface connected to the managed switch port
  • The internal (LAN) network should operate at 1 Gbps or faster
  • Another dedicated internal network for Virtuozzo Storage should operate at 10 Gbps or faster
  • Allocated internal network subnet mask should be at least /16; /8 is recommended
    Note: The 10.0.0.0/24 network range is reserved for NGINX HA applications and should never be used for nodes and infra/end-users' containers.
  • The external (WAN) connection should provide at least 100 Mbps; 1 Gbps is recommended
  • Each hardware node (of both user and infra types) must have a public IP address assigned to the external (WAN) connection
  • 2 or more public IP addresses for Jelastic shared load balancers
  • 1 or more public IP addresses for Jelastic SSH Gate
  • Additional public IP addresses for end-user containers
  • All outbound traffic should be unblocked
  • Firewalls should be configured - please contact Jelastic Operations for details
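
To verify that the interface names match the required pattern and that the links run at the expected speed, the interfaces can be inspected as sketched below (eth0 is an example interface name):

    ip link show                  # list the interface names
    ethtool eth0 | grep Speed     # e.g. "Speed: 1000Mb/s" for a 1 Gbps link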

DNS

DNS zone delegation must already be configured:
  • Both domain names, infra-domain.hosterdomain.com and user-domain.hosterdomain.com, should be delegated to platform resolvers.
  • DNS server names and addresses:
    • ns1.infra-domain.hosterdomain.com and ns2.infra-domain.hosterdomain.com
    • ns1.user-domain.hosterdomain.com and ns2.user-domain.hosterdomain.com
    • 2 public IP addresses allocated for these DNS servers (see above in the Network Requirements section)
  • Zone records example (make sure it is part of the parent zone file for hosterdomain.com) - note the 4 glue records below:

    infra-domain.hosterdomain.com.     IN NS ns1.infra-domain.hosterdomain.com.
    infra-domain.hosterdomain.com.     IN NS ns2.infra-domain.hosterdomain.com.
    ns1.infra-domain.hosterdomain.com. IN A  1.1.1.1 ; glue records, in case
    ns2.infra-domain.hosterdomain.com. IN A  2.2.2.2 ; they are needed

    user-domain.hosterdomain.com.      IN NS ns1.user-domain.hosterdomain.com.
    user-domain.hosterdomain.com.      IN NS ns2.user-domain.hosterdomain.com.
    ns1.user-domain.hosterdomain.com.  IN A  1.1.1.1 ; glue records, in case
    ns2.user-domain.hosterdomain.com.  IN A  2.2.2.2 ; they are needed
Make sure there is no SOA record for either domain (infra-domain.hosterdomain.com and user-domain.hosterdomain.com) in the zones on your DNS servers; otherwise, the delegation will not work.
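
Once the records are published, the delegation can be sanity-checked from any host with dig installed (replace the example names with your actual domains); both queries should return the corresponding ns1/ns2 host names defined above:

    dig +short NS infra-domain.hosterdomain.com
    dig +short NS user-domain.hosterdomain.com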

SSL

Wildcard SSL certificates for both selected DNS domains and all of their subdomains must be provided: infra-domain.hosterdomain.com, *.infra-domain.hosterdomain.com, user-domain.hosterdomain.com and *.user-domain.hosterdomain.com.
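
To check that a certificate actually covers both the bare domain and the wildcard, its Subject Alternative Name entries can be inspected with openssl; the file name below is an example:

    openssl x509 -in infra-domain.pem -noout -text | grep -A1 'Subject Alternative Name'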

Uploader Storage

Uploader storage is a file system mounted by Jelastic via NFS, or imported as a SCSI LUN (for example, over iSCSI) with an ext4 file system on top of it. Uploader storage can be shared with the Docker templates cache storage (see the mount example after this list).
  • external shared storage for this file system is recommended
  • it is possible (but not recommended) to host this file system on a local disk of one of the infrastructure hardware nodes
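
For the NFS case, a hypothetical /etc/fstab entry could look like the line below; the server name, export path, and mount point are examples only:

    storage.example.com:/export/uploader  /mnt/uploader  nfs  defaults,_netdev  0 0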

Docker Templates Cache Storage

Docker templates cache storage is a file system imported as a SCSI LUN (for example, over iSCSI) with an ext4 file system on top of it. In some cases, the Docker templates storage can be shared with the Uploader storage (see the preparation sketch after this list).
  • external shared storage for this file system is recommended
  • it is possible (but not recommended) to host this file system on a local disk of one of the infrastructure hardware nodes
  • all data stored in the Docker templates cache storage is volatile, so no redundancy or backups of the cache contents are required; however, cluster-wide creation of Docker containers will become unavailable if this storage fails
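
As an illustration, an imported LUN could be prepared and mounted as sketched below; the device name and mount point are assumptions, and the UUID placeholder must be replaced with the value reported by blkid:

    mkfs.ext4 /dev/sdd                   # format the imported LUN
    blkid /dev/sdd                       # note the UUID of the new file system
    mkdir -p /mnt/docker-cache
    echo 'UUID=<uuid-from-blkid>  /mnt/docker-cache  ext4  defaults,_netdev  0 0' >> /etc/fstab
    mount /mnt/docker-cache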

Operating System Settings

Jelastic PaaS requires an account with the user ID and group ID set to 0 (the root account) on the VZ-based hardware nodes. Password-based authentication must be enabled for this account on all of the VZ-based hardware nodes.
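
Assuming access over SSH with the standard OpenSSH server, this means that root login with a password must be allowed; a minimal sketch of the relevant /etc/ssh/sshd_config directives:

    PermitRootLogin yes
    PasswordAuthentication yes

After editing the file, restart the SSH service (for example, systemctl restart sshd on CentOS 7) for the changes to take effect.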