67228: Using archive_io_ctl CLI tool to test upload and download speed to/from a cloud storage location

Last update: 01-09-2021

Applies to both Acronis-hosted and partner-hosted cloud locations.

Preparation

1. Download archive_io_ctl:

2. Unpack and place it on a machine in your environment.

3. Find out the cloud storage address from the Management Portal web GUI:

4. Obtain the cloud certificate from a currently registered and active agent (one that has recently completed successful backups):

  1. See https://kb.acronis.com/content/60082 for certificate locations.
  2. Rename the file to cert.crt.
  3. Put the cert.crt file in the same directory where archive_io_ctl is.

Testing upload and download speed on a Windows machine with PowerShell

To test the upload and download speed, use an existing cloud storage. This measures the effective throughput of the network between the test machine and the cloud location, plus the sequential write performance of the cloud storage; it does not take into account source-side disk-read overhead.

  1. Download the script and place it in the archive_io_ctl folder.
  2. Open an elevated command prompt (in the search box on the taskbar, type cmd, right-click Command prompt and select Run as administrator).
  3. Use the cd command to navigate to the directory where archive_io_ctl has been unpacked. For example, if you have unpacked it to C:\archive_io_ctl, execute:
    cd C:\archive_io_ctl

  4. Execute the script:
    run_test.bat <cloud_storage_address>
    where <cloud_storage_address> is the address you noted in Preparation, step 3
    For example:
    run_test.bat baas-fes-eu4.acronis.com

The script creates a test file, uploads it to your storage, downloads a copy back to your machine and then removes the test file from the storage.

"finished with code 0" means that the script ran successfully. You can see the upload and the download time:

Testing upload and download speed on a Windows machine without PowerShell

To test the upload and download speed, use an existing cloud storage. This measures the effective throughput of the network between the test machine and the cloud location, plus the sequential write performance of the cloud storage; it does not take into account source-side disk-read overhead.

  1. Download the script2 and place it in the archive_io_ctl folder.
  2. Use a test file of approximately 512 MB-2 GB in size. An ISO file is a good fit for this test. Place the file in the archive_io_ctl folder.
  3. Open an elevated command prompt (in the search box on the taskbar, type cmd, right-click Command prompt and select Run as administrator).
  4. Use the cd command to navigate to the directory where archive_io_ctl has been unpacked. For example, if you have unpacked it to C:\archive_io_ctl, execute:
    cd C:\archive_io_ctl
  5. Execute the script:
    run_test2.bat <cloud_storage_address> <filename>
    where <cloud_storage_address> is the address you noted in Preparation, step 3
    and <filename> is the name of your test file
    For example:
    run_test2.bat baas-fes-eu4.acronis.com MyISO.iso

The script uploads the test file to your storage and then downloads a copy back to your machine.

"finished with code 0" means that the script ran successfully. You can see the upload and the download time:

Delete the uploaded test file so that it doesn't consume your cloud storage quota:

archive_io_ctl --astor <storage_address> --cert cert.crt --rm /1/<filename>
where <filename> is the name of your test file
For example:
archive_io_ctl --astor baas-fes-eu4.acronis.com --cert cert.crt --rm /1/MyISO.iso

Testing upload and download speed on a Linux or a macOS machine

To test the upload and download speed, use an existing cloud storage. This measures the effective throughput of the network between the test machine and the cloud location, plus the sequential write performance of the cloud storage; it does not take into account source-side disk-read overhead.

Preparation

  1. Make sure your shell's current working directory is the one where you unpacked archive_io_ctl (use the cd command to enter the directory, and the pwd and ls commands to verify you are indeed in it).
  2. Prepare the files:
    chmod +x archive_io_ctl.exe
    export LD_LIBRARY_PATH=$(pwd)
  3. Use a test file of approximately 512 MB-2 GB in size. An ISO file is a good fit for this test. Alternatively, create a test file:
    dd if=/dev/urandom of=dummyfile.bin bs=1M count=1024 status=progress
    (typically /dev/urandom, and not /dev/random, should be used, because /dev/random may block -- see https://en.wikipedia.org/wiki//dev/random)
     
    Example:
    root@:~# dd if=/dev/urandom of=testfile.bin bs=1M count=1024 status=progress
    1021313024 bytes (1.0 GB, 974 MiB) copied, 15 s, 68.0 MB/s
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.7858 s, 68.0 MB/s
    root@:~# ls -lh testfile.bin
    -rw-r--r-- 1 root root 1.0G Jul 23 11:59 testfile.bin
    root@:~#
  4. Place the file in the archive_io_ctl folder.
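Before running the timed transfers, it can help to verify that the pieces from the Preparation steps are in place. The following is only an illustrative sketch; the file names match the steps above, and the warning text is this sketch's own wording:

```shell
#!/bin/sh
# Sanity check: verify the pieces from the Preparation steps are in place
# before starting the timed transfers. File names follow the steps above.
ok=1

[ -e archive_io_ctl.exe ] || { echo "archive_io_ctl.exe not found in $(pwd)"; ok=0; }
[ -x archive_io_ctl.exe ] || { echo "archive_io_ctl.exe is not executable (run chmod +x)"; ok=0; }
[ -f cert.crt ]           || { echo "cert.crt not found (see Preparation above)"; ok=0; }
[ -f testfile.bin ]       || { echo "test file testfile.bin not found"; ok=0; }
[ "$LD_LIBRARY_PATH" = "$(pwd)" ] || echo "warning: LD_LIBRARY_PATH is not set to $(pwd)"

[ "$ok" -eq 1 ] && echo "all checks passed"
```

If any check fails, repeat the corresponding Preparation step before continuing.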

Testing the upload speed

time ./archive_io_ctl.exe --copy <filename> --astor <storage_address> --cert cert.crt --dst /1/<filename> --continue
where <filename> is the name of your test file
<storage_address> is the address you noted in Preparation, step 3

For example:
time ./archive_io_ctl.exe --copy testfile.bin --astor baas-fes-eu4.acronis.com --cert cert.crt --dst /1/testfile.bin --continue

The elapsed wall clock time will be printed like this:
real:   1m26.230s

Divide the file size by the wall-clock time spent; the result is the measured average transfer speed. E.g., 1 GiB = 1024 MiB divided by 1m26s = 86 s gives about 11.9 MiB/s. NOTE: Please mind the units -- the file size is in megaBYTES, while your internet connection is usually measured in megaBITS. 1 MByte/s = 8 Mbit/s.
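The same arithmetic can be scripted so that the unit conversions come out consistently. A small sketch; the 1 GiB file size and 86 s elapsed time are the example values from this article, substitute your own measurements:

```shell
#!/bin/sh
# Convert a measured transfer into MiB/s and Mbit/s.
# Example inputs: a 1 GiB test file transferred in 86 s
# (the "real" figure from the time(1) output, in seconds).
size_bytes=1073741824
elapsed_s=86

awk -v b="$size_bytes" -v t="$elapsed_s" 'BEGIN {
    mib_s  = b / t / 1048576         # mebibytes per second
    mbit_s = b * 8 / t / 1000000     # decimal megabits per second
    printf "%.1f MiB/s  (%.1f Mbit/s)\n", mib_s, mbit_s
}'
# prints: 11.9 MiB/s  (99.9 Mbit/s)
```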

Testing the download speed

time ./archive_io_ctl.exe --astor <storage_address> --cert cert.crt --copy /1/<filename> --dst <newfilename> --continue
where <filename> is the name of your test file
<newfilename> is a new filename for the downloaded file (to avoid a filename conflict since the file will be downloaded into the same folder)
<storage_address> is the address you noted in Preparation, step 3

For example:
time ./archive_io_ctl.exe --astor baas-fes-eu4.acronis.com --cert cert.crt --copy /1/testfile.bin --dst newtestfile.bin --continue

The elapsed wallclock time will be printed like this:
real:   1m26.230s

Divide the file size by the wall-clock time spent; the result is the measured average transfer speed. E.g., 1 GiB = 1024 MiB divided by 1m26s = 86 s gives about 11.9 MiB/s. NOTE: Please mind the units -- the file size is in megaBYTES, while your internet connection is usually measured in megaBITS. 1 MByte/s = 8 Mbit/s.

Deleting the test files

Delete the uploaded test file so that it doesn't consume your cloud storage quota:

./archive_io_ctl.exe --astor <storage_address> --cert cert.crt --rm /1/<filename>
where <filename> is the name of your test file
<storage_address> is the address you noted in Preparation, step 3

For example:
./archive_io_ctl.exe --astor baas-fes-eu4.acronis.com --cert cert.crt --rm /1/testfile.bin

Delete the test files (testfile.bin and newtestfile.bin) from the local directory:
rm testfile.bin newtestfile.bin

More information

It is generally a good idea to repeat these tests a few times, at different times of day and on different days of the week, to get a better idea of what results to expect: the load on the local network, the load on the network path(s) between the local machine and the cloud storage, and the I/O load on the cloud storage all vary over time.
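The repetition can be automated with a loop like the one below. This is only a sketch: the storage address and file name are the example values used in this article, the number of runs and the log format are arbitrary choices, and each iteration deletes the uploaded copy so repeated runs do not consume cloud storage quota:

```shell
#!/bin/sh
# Sketch: run the upload test several times and log the elapsed seconds,
# so results taken at different times can be compared later.
# STORAGE and TESTFILE are example values; substitute your own.
STORAGE=baas-fes-eu4.acronis.com
TESTFILE=testfile.bin
LOG=speed_log.csv

for run in 1 2 3; do
    start=$(date +%s)
    ./archive_io_ctl.exe --copy "$TESTFILE" --astor "$STORAGE" \
        --cert cert.crt --dst "/1/$TESTFILE" --continue
    end=$(date +%s)
    # timestamp,elapsed-seconds -- one line per run
    echo "$(date -u +%Y-%m-%dT%H:%M:%SZ),$((end - start))" >> "$LOG"
    # remove the uploaded copy so it does not count against the quota
    ./archive_io_ctl.exe --astor "$STORAGE" --cert cert.crt --rm "/1/$TESTFILE"
done
```

Scheduling this via cron at different hours produces a log that can be compared across times of day.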

NOTE: The transfer done by archive_io_ctl is, for the time being, over a single TCP stream, which matches how most agents work with cloud storage of most tenants at the moment. The currently used storage access protocol, ABGW 1.x, does not use multiple streams in parallel. A next-gen ABGW2 protocol is undergoing testing and should gradually be rolled out next year.

NOTE: The speedtest tool described in KB https://kb.acronis.com/content/59690 uses multiple TCP streams (currently, typically 4) in parallel. This is less realistic and in many cases not representative of how backups are transferred to/from the cloud: if the latency between the testing machine/agent and the cloud storage exceeds several tens of milliseconds, the speedtest application is likely to SUBSTANTIALLY OVERESTIMATE THE TRANSFER SPEED. Please also note that the speedtest web application cannot be used with partner-hosted storage (storage that is not in an Acronis datacenter). The method described here, using archive_io_ctl, is however UNIVERSALLY APPLICABLE.

NOTE: The obtained upload and download speeds should be used only as rough estimates of the maximum transfer speeds to/from the cloud; the processing of real backups is usually somewhat slower (e.g. 20-30% slower is normal). If your actual backup transfer speeds are much slower than that, check for I/O and CPU bottlenecks (saturation) on the local machine where you are running the tests.
