64033: Acronis Cyber Infrastructure: how to configure HA for S3 service after upgrade from v.2.4

Last update: Fri, 2020-05-29 12:20

Scenario

Since version 2.5, Acronis Cyber Infrastructure runs the object storage configuration service in a clustered mode and manages its high availability automatically.

If you deployed Acronis Cyber Infrastructure (formerly Acronis Storage) before version 2.5 and have since upgraded, you need to configure high availability for the S3 service manually.

Solution

By default (before version 2.5), the configuration server was deployed to a single node only, together with the backend. For example:

[root@aci1 ~]# ostor-ctl get-config
CFGD_ID                    ADDR       IS MASTER
1           10.37.130.67:2532             yes

To make the S3 service highly available (so that it keeps working if the backend node goes down), follow the steps below:

1. Create the HA configuration if it does not exist yet. Go to Admin Panel -> SETTINGS -> Management node and select the 3 nodes you want to add to the HA configuration.

2. Join the configuration service from the 2 other nodes, using this command:

# ostor-ctl join --root /var/lib/ostor/configuration -a <OSTOR_PRIVATE_IP_OF_JOINING_NODE> --name <OSTOR_PRIVATE_IP_OF_BACKEND_NODE>

Example:

[root@aci2 ~]# ostor-ctl join --root /var/lib/ostor/configuration -a 10.37.130.175 --name 10.37.130.67
Please enter password for '10.37.130.67':
2019-07-04 16:41:11.118 Configuration service successfully joined '10.37.130.67' (created at /var/lib/ostor/configuration)
[root@aci3 ~]# ostor-ctl join --root /var/lib/ostor/configuration -a 10.37.130.19 --name 10.37.130.67
Please enter password for '10.37.130.67':
2019-07-04 16:41:11.118 Configuration service successfully joined '10.37.130.67' (created at /var/lib/ostor/configuration)
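With more than two joining nodes it can help to script the command generation. A minimal sketch (the IPs are the example addresses from this article; each printed command still has to be run on the corresponding joining node itself):

```shell
# Sketch: print the join command for each remaining node. BACKEND_IP is
# the node where cfgd already runs (aci1 in the example above); the loop
# covers the joining nodes' OSTOR-private addresses.
BACKEND_IP=10.37.130.67
JOINING_IPS="10.37.130.175 10.37.130.19"
CMDS=$(for ip in $JOINING_IPS; do
  echo "ostor-ctl join --root /var/lib/ostor/configuration -a $ip --name $BACKEND_IP"
done)
echo "$CMDS"
```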

The password can be obtained as follows:

[root@aci1 ~]# vinfra cluster password show
Authentication user 'admin' on https://backend-api.svc.vstoragedomain:8888:
Password:
+----------+----------------+
| Field    | Value          |
+----------+----------------+
| id       | 1              |
| name     | cluster        |
| password | *****          |
+----------+----------------+

3. Add the appropriate entries to DNS so that cfgd becomes highly available:

# su - vstoradmin
# psql coredns

coredns=> insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '10.37.130.67', 5, null, null, false);
INSERT 0 1
coredns=> insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '10.37.130.175', 5, null, null, false);
INSERT 0 1
coredns=> insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '10.37.130.19', 5, null, null, false);
INSERT 0 1

where 10.37.130.67, 10.37.130.175, and 10.37.130.19 are the IP addresses of these 3 nodes in the network with the OSTOR private role assigned.
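If you prefer not to type the three INSERT statements by hand, they can be generated from a node list. A minimal sketch (the IPs are the example addresses from this article; substitute your own OSTOR-private addresses):

```shell
# Generate one INSERT per node; the resulting statements can then be
# piped into "psql coredns" as the vstoradmin user. The IPs below are
# the example addresses from this article.
NODES="10.37.130.67 10.37.130.175 10.37.130.19"
SQL=$(for ip in $NODES; do
  echo "insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '$ip', 5, null, null, false);"
done)
echo "$SQL"
# To apply (as vstoradmin): echo "$SQL" | psql coredns
```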

4. Verify the configuration:

[root@aci1 ~]# ostor-ctl get-config
CFGD_ID                  ADDR       IS MASTER
3           10.37.130.19:2532             no
2           10.37.130.175:2532            no
1           10.37.130.67:2532             yes

[root@aci1 ~]# nslookup ostor-private.svc.vstoragedomain
Server:         127.0.0.1
Address:        127.0.0.1#53

Name:   ostor-private.svc.vstoragedomain
Address: 10.37.130.67
Name:   ostor-private.svc.vstoragedomain
Address: 10.37.130.175
Name:   ostor-private.svc.vstoragedomain
Address: 10.37.130.19
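The get-config state can also be checked programmatically. A sketch, using the sample output from above as input so the pipeline is self-contained; in production, replace the echoed sample with the actual `ostor-ctl get-config` output:

```shell
# Count cfgd instances and masters in get-config output. After a
# successful join there should be 3 instances and exactly 1 master.
# The sample below is the example output from this article; in
# production use: ostor-ctl get-config | grep -c ':2532'  etc.
sample='CFGD_ID                  ADDR       IS MASTER
3           10.37.130.19:2532             no
2           10.37.130.175:2532            no
1           10.37.130.67:2532             yes'
instances=$(echo "$sample" | grep -c ':2532')
masters=$(echo "$sample" | grep -c 'yes$')
echo "instances=$instances masters=$masters"   # prints: instances=3 masters=1
```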