Scenario
Since version 2.5, Acronis Cyber Infrastructure supports a clustered object storage configuration service: high availability for the service is managed automatically.
If you have been using Acronis Cyber Infrastructure (formerly Acronis Storage) since before version 2.5, you need to configure high availability for the S3 service manually.
Solution
By default (before version 2.5), the configuration service was deployed on a single node only, together with the backend. For example:
[root@aci1 ~]# ostor-ctl get-config
CFGD_ID ADDR IS MASTER
1 10.37.130.67:2532 yes
To make the S3 service highly available (so that it keeps working even if the backend node is down), follow the steps below:
1. Create the HA configuration if it was not created before. In the admin panel, go to SETTINGS -> Management node and select the 3 nodes that you want to add to HA.
2. Note the ostor name set in the configuration file:
# cat /var/lib/ostor/configuration/name
Example output:
ostor-private.svc.vstoragedomain
3. Check that the name resolves to the ostor nodes:
# nslookup ostor-private.svc.vstoragedomain
Example output:
Server: 10.37.130.67
Address: 10.37.130.67#53
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.175
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.19
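The resolution check above can also be scripted. The snippet below is a minimal sketch that extracts the resolved addresses from `nslookup` output; the sample text is the output shown in this article, and on a live node you would pipe the real `nslookup` call instead.

```shell
# Sample nslookup output copied from this article; on a live node, replace
# this variable with the output of:
#   nslookup "$(cat /var/lib/ostor/configuration/name)"
sample='Server: 10.37.130.67
Address: 10.37.130.67#53
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.175
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.19'

# Print only the resolved A-record addresses (skip the "#53" DNS server line).
printf '%s\n' "$sample" | awk '/^Address/ && $2 !~ /#/ {print $2}'
```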
4. On the other two nodes, join the configuration service:
# ostor-ctl join --root /var/lib/ostor/configuration -a <OSTOR_PRIVATE_IP_OF_JOINING_NODE> --name <OSTOR_NAME_NOTED_IN_STEP_2>
Example:
[root@aci2 ~]# ostor-ctl join --root /var/lib/ostor/configuration -a 10.37.130.175 --name ostor-private.svc.vstoragedomain
Please enter password for 'ostor-private.svc.vstoragedomain':
2019-07-04 16:41:11.118 Configuration service successfully joined 'ostor-private.svc.vstoragedomain' (created at /var/lib/ostor/configuration)
[root@aci3 ~]# ostor-ctl join --root /var/lib/ostor/configuration -a 10.37.130.19 --name ostor-private.svc.vstoragedomain
Please enter password for 'ostor-private.svc.vstoragedomain':
2019-07-04 16:41:11.118 Configuration service successfully joined 'ostor-private.svc.vstoragedomain' (created at /var/lib/ostor/configuration)
The password can be obtained as follows:
[root@aci1 ~]# vinfra cluster password show
Authentication user 'admin' on https://backend-api.svc.vstoragedomain:8888:
Password:
+----------+----------------+
| Field | Value |
+----------+----------------+
| id | 1 |
| name | cluster |
| password | ***** |
+----------+----------------+
5. Add the corresponding A records to DNS so that cfgd becomes highly available:
# su - vstoradmin
# psql coredns
coredns=> insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '10.37.130.67', 5, null, null, false);
INSERT 0 1
coredns=> insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '10.37.130.175', 5, null, null, false);
INSERT 0 1
coredns=> insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '10.37.130.19', 5, null, null, false);
INSERT 0 1
where 10.37.130.67, 10.37.130.175, and 10.37.130.19 are the IP addresses of these three nodes in the network with the OSTOR private role.
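Since the three INSERT statements follow the same template, they can be generated from a list of IPs instead of being typed by hand. A minimal sketch (the IPs are the example addresses from this article; substitute the OSTOR private IPs of your own nodes):

```shell
# Generate the INSERT statements for the ostor-private A records.
# On a live management node, the output could be piped into: psql coredns
for ip in 10.37.130.67 10.37.130.175 10.37.130.19; do
  printf "insert into records values (default, (select id from domains where name = 'svc.vstoragedomain.'), 'ostor-private.svc.vstoragedomain.', 'A', '%s', 5, null, null, false);\n" "$ip"
done
```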
6. On both newly joined nodes, start and enable the configuration service:
# systemctl start ostor-cfgd
# systemctl enable ostor-cfgd
7. Verify the configuration:
[root@aci1 ~]# ostor-ctl get-config
CFGD_ID ADDR IS MASTER
3 10.37.130.19:2532 no
2 10.37.130.175:2532 no
1 10.37.130.67:2532 yes
[root@aci1 ~]# nslookup ostor-private.svc.vstoragedomain
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.67
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.175
Name: ostor-private.svc.vstoragedomain
Address: 10.37.130.19
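As a final sanity check, the `ostor-ctl get-config` output can be parsed to confirm that all three nodes are registered and exactly one of them is the master. A minimal sketch; the sample text mirrors the output shown above, and on a live node you would pipe the real command instead:

```shell
# Sample `ostor-ctl get-config` output from this article; on a live node,
# replace this variable with the output of: ostor-ctl get-config
sample='CFGD_ID ADDR IS MASTER
3 10.37.130.19:2532 no
2 10.37.130.175:2532 no
1 10.37.130.67:2532 yes'

# Count registered configuration nodes (skip the header line) and masters.
nodes=$(printf '%s\n' "$sample" | tail -n +2 | wc -l)
masters=$(printf '%s\n' "$sample" | tail -n +2 | awk '$3 == "yes"' | wc -l)
echo "nodes=$nodes masters=$masters"
```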