59432: Acronis Storage: Supported Configurations


Applies to:

This article applies to Acronis Storage 2.0

When planning Acronis Storage deployment, pay attention to supported and recommended configurations. Refer to the Installation Guide for complete configuration details.


Recommended Configuration

Hardware requirements

Recommended hardware configuration is described in chapter 2.2 Planning Node Hardware Configurations of the Installation Guide.

CPU: Intel Xeon E5-2620V2 or faster; at least one CPU core per 8 HDDs
RAM: 16GB ECC or more, plus 0.5GB ECC per each HDD
System disk: 250GB SATA HDD
Storage disk: Four or more HDDs or SSDs; 1 DWPD endurance minimum, 10 DWPD recommended
Disk controller: HBA or RAID
Network: Two 10Gbps network interfaces; dedicated links for internal and public networks
SSD: One or more recommended enterprise-grade SSDs with power loss protection; 100GB or more capacity; at least 50-75 MB/s sequential write performance per each HDD that the SSD services
Sample configuration*: Intel Xeon E5-2620V2, 32GB, 2xST1000NM0033, 32xST6000NM0024, 2xMegaRAID SAS 9271/9201, Intel X540-T2, Intel P3700 800GB

*Even though a cluster can be created on top of varied hardware, using nodes with similar hardware will yield better cluster performance, capacity, and overall balance.
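
The sizing rules in the table above reduce to simple arithmetic: one CPU core per 8 HDDs, 16GB of RAM plus 0.5GB per HDD, and 50-75 MB/s of SSD write throughput per serviced HDD. The following Python sketch only illustrates that arithmetic; the function name and the example disk count are assumptions, not part of Acronis Storage:

    # Illustrative sizing helper based on the recommended-hardware rules above.
    # The function name and the example HDD count are hypothetical.
    import math

    def recommended_node_sizing(hdd_count):
        """Estimate per-node minimums from the number of storage HDDs."""
        return {
            # At least one CPU core per 8 HDDs.
            "min_cpu_cores": math.ceil(hdd_count / 8),
            # 16GB ECC RAM plus 0.5GB per HDD.
            "min_ram_gb": 16 + 0.5 * hdd_count,
            # 50-75 MB/s sequential SSD write throughput per serviced HDD.
            "min_ssd_write_mb_s": (50 * hdd_count, 75 * hdd_count),
        }

    if __name__ == "__main__":
        # Example: a node with 32 storage HDDs, as in the sample configuration.
        print(recommended_node_sizing(32))

For the 32-HDD sample configuration this yields at least 4 cores, 32GB of RAM, and roughly 1600-2400 MB/s of aggregate SSD write throughput.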

Software requirements

Recommended Acronis Storage software configuration per node:

Nodes 1 to 5:
  • 1st disk role: System
  • 2nd disk role: SSD; metadata, journal and cache
  • 3rd and other disk roles: Storage
  • Access points: iSCSI, S3 (private and public), Acronis Backup Gateway (private and public)

Nodes 6 and other:
  • 1st disk role: System
  • 2nd disk role: SSD; journal and cache
  • 3rd and other disk roles: Storage
  • Access points: iSCSI, S3 (private), Acronis Backup Gateway (private)

In total*: 5 nodes, 5 MDSs, 5 or more CSs; all nodes run the required access points.

*Even though a production-ready cluster can be created from just five nodes with recommended hardware,
it is still recommended to enter production with at least ten nodes if you are aiming to achieve significant
performance advantages over direct-attached storage (DAS) or improved recovery times.

Network requirements

Acronis Storage uses two networks (e.g., Ethernet): a) an internal network that interconnects nodes and combines them into a cluster, and b) a public network for exporting stored data to users.

Network requirements:

  • Nodes are added to clusters by their IP addresses, not FQDNs. Changing the IP address of a node in the
    cluster will remove that node from the cluster. If you plan to use DHCP in a cluster, make sure that IP
    addresses are bound to the MAC addresses of nodes’ network interfaces.
  • Each node must have Internet access so that updates can be installed, or an Acronis repository mirror must be set up inside the internal network.
  • MTU is set to 1500 by default; on interfaces with the Storage role, set the MTU to 9000 (see the pre-flight sketch after this list).
  • Network time synchronization (NTP) is required for correct timestamps in logs.
  • The management role is assigned automatically during installation and cannot be changed in the management
    panel later.
  • Even though the management node can be accessed from a web browser by the hostname, you still need
    to specify its IP address, not the hostname, during installation.
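
The MTU and time-synchronization items above can be verified before deployment. The Python sketch below is a minimal pre-flight check, assuming a systemd-based node; the interface name is a placeholder and the script is not an Acronis tool:

    # Minimal pre-flight sketch for the network requirements above.
    # The interface name is an example; adjust it to your internal-network NIC.
    import pathlib
    import subprocess

    STORAGE_IFACES = ["eth1"]  # interfaces carrying internal (Storage role) traffic

    def mtu_is_9000(iface):
        """Storage-role interfaces should use jumbo frames (MTU 9000)."""
        mtu = int(pathlib.Path(f"/sys/class/net/{iface}/mtu").read_text().strip())
        return mtu == 9000

    def ntp_synchronized():
        """Best-effort clock-sync check via timedatectl (output wording varies by version)."""
        out = subprocess.run(["timedatectl"], capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if "synchronized" in line.lower():
                return line.split(":", 1)[1].strip().lower() == "yes"
        return False

    if __name__ == "__main__":
        for iface in STORAGE_IFACES:
            print(iface, "MTU 9000:", mtu_is_9000(iface))
        print("NTP synchronized:", ntp_synchronized())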

Per-node requirements:

  • Each node in the cluster must have access to the internal network and have the port 8889 open to listen
    for incoming connections from the internal network.
  • Each storage and metadata node must have at least one network interface for the internal network traffic.
    The IP addresses assigned to this interface must be either static or, if DHCP is used, mapped to
    the adapter’s MAC address.
  • The management node must have a network interface for internal network traffic and a network interface
    for the public network traffic (e.g., to the datacenter or a public network) so the management panel can
    be accessed via a web browser.
  • A node that runs one or more storage access point services must have a network interface for the internal
    network traffic and a network interface for the public network traffic.

The following ports need to be open on a management node:

  • 8888 for management panel access from the public network
  • 8889 for cluster node access from the internal network.

The following ports need to be open on access point nodes (a reachability check sketch follows this list):

  • iSCSI access points use the TCP port 3260 for incoming connections from the public network.
  • S3 access points use ports 443 (HTTPS) and 80 (HTTP) to listen for incoming connections from the public network.
  • Acronis Backup Gateway access points use port 44445 for incoming connections from both internal and public networks and ports 443 and 8443 for outgoing connections to the public network.
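
Whether the listed ports are actually reachable can be probed with a short TCP check. The sketch below is illustrative only; the node address is a placeholder, and a closed result may simply mean the corresponding service or firewall rule is not configured yet:

    # Illustrative reachability probe for the ports listed above.
    # The node address is a placeholder; run the probe from the relevant network.
    import socket

    MANAGEMENT_PORTS = [8888, 8889]              # management panel, cluster node access
    ACCESS_POINT_PORTS = [3260, 443, 80, 44445]  # iSCSI, S3 HTTPS/HTTP, Backup Gateway

    def port_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        node = "192.0.2.10"  # placeholder address of the node being checked
        for port in MANAGEMENT_PORTS + ACCESS_POINT_PORTS:
            print(node, port, "open" if port_open(node, port) else "closed/filtered")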

Minimal Configuration

Be aware that when implementing the minimal configuration, the supported usage scenarios are limited.

Hardware requirements

Minimal hardware configuration is described in chapter 2.2 Planning Node Hardware Configurations of the Installation Guide.

CPU: Dual-core CPU
RAM: 2GB
System and storage disks: Three 100GB SATA HDDs (one system, one storage, one MDS (on five nodes))
Disk controller: None
Network: 1 Gbps or faster network interface
SSD: None


Software requirements

Minimal Acronis Storage software configuration with high availability:

Nodes 1, 2 and 3 (each):
  • 1st disk role: System
  • 2nd disk role: Metadata
  • 3rd and other disk roles: Storage
  • Access points: iSCSI, S3 (private and public), Acronis Backup Gateway (private and public)

In total: 3 nodes, 3 MDSs, 3 or more CSs; access point services run on all three nodes.

*The metadata role may be placed on the system disk if the system disk is an SSD.

*Depending on your access requirements, one or more access point services may be installed on the nodes.

Minimal Acronis Storage software configuration with no data redundancy and no service availability:

Node 1:
  • 1st and 2nd disk roles: System (disks 1 and 2 in a RAID 1 configuration)
  • 3rd and 4th disk roles: Metadata
  • 5th and other disk roles: Storage (SAN/NAS with built-in redundancy)
  • Access points: iSCSI, S3 (private and public), Acronis Backup Gateway (private and public)

Network requirements

See the network requirements of the recommended configuration above.

Unsupported Configuration

Be aware that when implementing an unsupported configuration, you will be denied Acronis Support for all usage scenarios and technical issues. Any limitation indicated in the Acronis Storage Installation Guide applies in addition to the limitations below.

Unsupported hardware configurations and scenarios

  • One node with fewer than 2 metadata service disks and without RAID mirroring of the system disk. This option is only valid for schemes without data redundancy; service availability is limited to one node.
  • Fewer than 3 nodes when high service availability is required.
  • An Acronis Storage cluster with redundancy running in virtual machines or on top of SAN/NAS hardware that has its own redundancy mechanisms. The 1+0 erasure coding and No redundancy schemes are recommended for installation over SAN/NAS with built-in redundancy.
  • Less than 20% of cluster capacity is free. Data safety is still guaranteed; however, cluster performance will degrade (see the free-space sketch after this list).
  • Any node has fewer than 2 disks.
  • The system disk has less than 100 GB of space.
  • Malfunctioning nodes/drives are not replaced with new ones.
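
The 20% free-capacity rule can be monitored with a trivial calculation. The sketch below checks a single mount point with Python's standard library and is only an illustration; the path is a placeholder, and cluster-wide figures should be taken from the Acronis Storage management panel instead:

    # Illustrative free-space check for the 20% rule above; the path is a placeholder.
    import shutil

    def free_fraction(path):
        """Fraction of capacity that is still free at the given mount point."""
        usage = shutil.disk_usage(path)
        return usage.free / usage.total

    if __name__ == "__main__":
        frac = free_fraction("/")  # placeholder mount point
        print(f"free: {frac:.1%}")
        if frac < 0.20:
            print("Warning: less than 20% of capacity is free; performance may degrade.")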

Unsupported software configurations and scenarios

3rd-party software:

  • Adding 3rd-party yum repositories besides the ones supported and/or approved by Acronis.
  • Installing any 3rd-party software on storage nodes besides the software approved by Acronis and available in its official repositories.

Acronis Storage software:

  • Only one MDS is available in the cluster in the initial configuration.
  • Only one CS is available in the cluster in the initial configuration.

Unsupported network configurations and scenarios

  • The storage cluster is not protected from unauthorized access over the network as described in the Installation Guide.
  • Nodes are added to clusters by FQDNs, and IP addresses are not bound to the MAC addresses of the nodes’ network interfaces.
  • Fibre Channel or InfiniBand networks are used for Acronis Storage networks.
  • No Internet access is available on any of the nodes, and no Acronis repository mirror is set up inside the internal network.
  • Network time synchronization (NTP) is not available.
  • The hostname of the management node is used instead of its IP address during installation.
