67228: [MAC] Using the archive_io_ctl CLI tool to test upload and download speed to/from a cloud storage location


Last update: 21-02-2023

Applies to both Acronis-hosted and partner-hosted cloud locations.

Operating System - macOS

For Windows and Linux speed tests, use the Connection Verification tool with the speedtest function: see KB47678.

 

Preparation

1. Download archive_io_ctl:

2. Unpack and place it on a machine in your environment.

3. Find out the cloud storage address from the Management Portal web GUI:

4. Obtain the cloud certificate from a currently registered and active Agent (one that has recently completed successful backups):

  1. See https://kb.acronis.com/content/60082 for certificate locations.
  2. Rename the file to cert.crt.
  3. Put the cert.crt file in the same directory as archive_io_ctl (see the example below).
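
For example, the copy and rename can be done in one step; the path below is a placeholder, use the actual certificate location from KB 60082:

# <path_to_certificate> is a placeholder for the certificate location from KB 60082
cp "<path_to_certificate>" ./cert.crt
# Verify that the certificate is now in the tool's directory
ls -l cert.crt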

Testing upload and download speed on a macOS machine

To test the upload and download speed, use an existing cloud storage. This measures the effective throughput of the network between the test machine and the cloud location plus the sequential write performance of the cloud storage; it does not take into account source-side disk-reading overhead.

Preparation

  1. Make sure your shell's current working directory is the one where you unpacked archive_io_ctl (use the cd command to enter the directory, and the pwd and ls commands to verify that you are in it).
  2. Prepare the files:
    chmod +x archive_io_ctl.exe
    export LD_LIBRARY_PATH=$(pwd)
  3. Use a test file of approximately 512 MB - 2 GB in size. An ISO file is a good fit for this test, or create a test file:
    dd if=/dev/urandom of=testfile.bin bs=1M count=1024 status=progress
    (use /dev/urandom rather than /dev/random, because /dev/random may block; see https://en.wikipedia.org/wiki//dev/random)
     
    Example:
    root@:~# dd if=/dev/urandom of=testfile.bin bs=1M count=1024 status=progress
    1021313024 bytes (1.0 GB, 974 MiB) copied, 15 s, 68.0 MB/s
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.7858 s, 68.0 MB/s
    root@:~# ls -lh testfile.bin
    -rw-r--r-- 1 root root 1.0G Jul 23 11:59 testfile.bin
    root@:~#
  4. Place the file in the archive_io_ctl folder (an example listing is shown below).
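
After these preparation steps, the working directory should contain the tool, the certificate, and the test file. An illustrative check (file names as used in this article; sizes and dates will differ):

ls -l
# expected entries (illustrative):
#   archive_io_ctl.exe
#   cert.crt
#   testfile.bin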

Testing the upload speed

time ./archive_io_ctl.exe --copy <filename> --astor <storage_address> --cert cert.crt --dst /1/<filename> --continue
where <filename> is the name of your test file
<storage_address> is the address you noted in Preparation, step 3

For example:
time ./archive_io_ctl.exe --copy testfile.bin --astor baas-fes-eu4.acronis.com --cert cert.crt --dst /1/testfile.bin --continue

The elapsed wall-clock time will be printed like this:
real:   1m26.230s

Divide the file size by the wall-clock time spent; the result is the measured average transfer speed. For example, 1 GiB = 1024 MiB divided by 1m26s = 86 seconds gives a speed of about 11.9 MiB/s. NOTE: mind the units: the file size is measured in megabytes (MB), while your internet connection speed is usually quoted in megabits (Mbit); 1 MB/s = 8 Mbit/s.
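
A minimal sketch of this calculation in the shell, assuming the test file is testfile.bin and the 'real' time was 86 seconds (stat -f%z is the macOS syntax for printing a file size in bytes):

# File size in bytes (macOS/BSD stat syntax)
BYTES=$(stat -f%z testfile.bin)
# Elapsed wall-clock time in seconds, taken from the 'real' value printed by time
ELAPSED=86
# Average speed in MiB/s
echo "scale=1; $BYTES / $ELAPSED / 1048576" | bc
# The same speed in Mbit/s
echo "scale=1; $BYTES * 8 / $ELAPSED / 1000000" | bc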

Testing the download speed

time ./archive_io_ctl.exe --astor <storage_address> --cert cert.crt --copy /1/<filename> --dst <newfilename> --continue
where <filename> is the name of your test file
<newfilename> is a new file name for the downloaded copy (to avoid a name conflict, since the file is downloaded into the same folder)
<storage_address> is the address you noted in Preparation, step 3

For example:
time ./archive_io_ctl.exe --astor baas-fes-eu4.acronis.com --cert cert.crt --copy /1/testfile.bin --dst newtestfile.bin --continue

The elapsed wall-clock time will be printed like this:
real:   1m26.230s

Divide the file size by the wall-clock time spent; the result is the measured average transfer speed. For example, 1 GiB = 1024 MiB divided by 1m26s = 86 seconds gives a speed of about 11.9 MiB/s. NOTE: mind the units: the file size is measured in megabytes (MB), while your internet connection speed is usually quoted in megabits (Mbit); 1 MB/s = 8 Mbit/s.
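
Optionally, as a sanity check that is not required for the speed measurement itself, you can verify that the downloaded copy is byte-for-byte identical to the original:

# cmp prints nothing and exits with status 0 when the files are identical
cmp testfile.bin newtestfile.bin && echo "downloaded copy matches the original"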

Deleting the test files

Delete the uploaded test file so that it doesn't consume your cloud storage quota:

./archive_io_ctl.exe --astor <storage_address> --cert cert.crt --rm /1/<filename>
where <filename> is the name of your test file
<storage_address> is the address you noted in Preparation, step 3

For example:
./archive_io_ctl.exe --astor baas-fes-eu4.acronis.com --cert cert.crt --rm /1/testfile.bin

Delete the test files (testfile.bin and newtestfile.bin) from the local directory:
rm testfile.bin newtestfile.bin

More information

It is generally a good idea to repeat these tests a few times, at different times of day and on different days of the week, to get a better idea of what results to expect: the load on the local network, on the network path(s) between the local machine and the cloud storage, and the I/O load on the cloud storage itself all vary with time of day and day of week.
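
A minimal sketch of running the upload test several times in a row, using only the commands shown above (the uploaded file is removed after each run so it does not accumulate in cloud storage):

for i in 1 2 3; do
    # Upload the test file and print the elapsed time
    time ./archive_io_ctl.exe --copy testfile.bin --astor <storage_address> --cert cert.crt --dst /1/testfile.bin --continue
    # Remove the uploaded copy before the next run
    ./archive_io_ctl.exe --astor <storage_address> --cert cert.crt --rm /1/testfile.bin
done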

NOTE: The transfer done by archive_io_ctl is, for the time being, over a single TCP stream, which matches how most agents work with cloud storage of most tenants at the moment. The currently used storage access protocol, ABGW 1.x, does not use multiple streams in parallel. A next-gen ABGW2 protocol is undergoing testing and should gradually be rolled out next year.

NOTE: The obtained upload and download speeds should be used only as rough estimates of the maximum transfer speeds to/from cloud; the processing of real backups is usually somewhat slower (e.g. 20-30% slower is normal). If your actual backup transfer speeds are much slower than that, check for I/O and CPU bottlenecks (saturation) of the local machine on which you're doing the tests.
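
For example, to watch CPU and disk activity on a macOS machine while a test or a real backup is running (illustrative; any monitoring tool will do):

# One-shot snapshot of overall CPU usage and the busiest processes
top -l 1 | head -n 12
# Disk throughput statistics, refreshed every 5 seconds (Ctrl+C to stop)
iostat -w 5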
