64759: Acronis Cyber Infrastructure: How to import VZ7 Virtual Machine into Acronis Cyber Infrastructure


Applies to Acronis Cyber Infrastructure 3.0 or later


To import a Virtuozzo 7 (VZ7) virtual machine, copy its virtual disk to Acronis Cyber Infrastructure storage and create a new VM from that disk.

Follow the steps below:

On Virtuozzo 7 node:

1. Stop the source VM and find its virtual disk:

# prlctl stop <VM_name>
# prlctl list -i <VM_name> | grep hdd

Example of command execution:

[root@VZ7 ~]# prlctl stop Migrate-VM
[root@VZ7 ~]# prlctl list -i Migrate-VM | grep hdd
 hdd0 (+) scsi:0 image='/vz/vmprivate/15f8a047-a00e-4cb8-a8db-2fdb7322cd49/harddisk.hdd' type='expanded' 65536Mb subtype=virtio-scsi
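The two values needed later, the disk path and its size in GB, can be pulled from that output with standard text tools. The sketch below is hypothetical and parses the sample line from this article; on a real node you would pipe the output of `prlctl list -i <VM_name> | grep hdd` instead.

```shell
# Sample hdd line as shown above (assumption: captured from prlctl output).
HDD_LINE="hdd0 (+) scsi:0 image='/vz/vmprivate/15f8a047-a00e-4cb8-a8db-2fdb7322cd49/harddisk.hdd' type='expanded' 65536Mb subtype=virtio-scsi"

# Extract the disk path between the quotes after image=
DISK_PATH=$(echo "$HDD_LINE" | sed "s/.*image='\([^']*\)'.*/\1/")

# Extract the size in MB and convert it to GB for the volume-create step
SIZE_MB=$(echo "$HDD_LINE" | grep -o '[0-9]*Mb' | tr -d 'Mb')
SIZE_GB=$((SIZE_MB / 1024))

echo "$DISK_PATH"   # /vz/vmprivate/15f8a047-a00e-4cb8-a8db-2fdb7322cd49/harddisk.hdd
echo "$SIZE_GB"     # 64
```

The resulting size (64 GB here) is the value to pass to `--size` in the next step.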

On any Acronis Cyber Infrastructure node:

2. Create an empty volume in Acronis Cyber Infrastructure:

# vinfra service compute volume create --size <size_in_GB> --storage-policy <storage_policy_ID_or_Name> <volume_name>

Example of command execution (volume size 64 GB, the default storage policy, and migrate-vm-volume as an arbitrary volume name):

[root@node01 ~]# vinfra service compute volume create --size 64 --storage-policy default migrate-vm-volume
+--------------------------------+-------------------------------------------+
| Field                          | Value                                     |
+--------------------------------+-------------------------------------------+
| attachments                    | []                                        |
| availability_zone              | nova                                      |
| bootable                       | False                                     |
| consistencygroup_id            |                                           |
| created_at                     | 2019-10-09T13:16:05.448810                |
| description                    |                                           |
| encrypted                      | False                                     |
| id                             | 024b6843-2de3-4e25-a6e1-2b6ea2d601cf      |
| imageRef                       |                                           |
| migration_status               |                                           |
| multiattach                    | False                                     |
| name                           | migrate-vm-volume                         |
| network_install                | False                                     |
| os-vol-host-attr:host          | node01.vstoragedomain@vstorage#vstorage   |
| os-vol-mig-status-attr:migstat |                                           |
| os-vol-mig-status-attr:name_id |                                           |
| project_id                     | b6a94a03039043e2b69ff4ccab01f256          |
| replication_status             |                                           |
| size                           | 64                                        |
| snapshot_id                    |                                           |
| source_volid                   |                                           |
| status                         | creating                                  |
| storage_policy_name            | default                                   |
| updated_at                     | 2019-10-09T13:16:05.598860                |
| user_id                        | e31124cd06c74240ae9f9ad8d5e06e32          |
| volume_image_metadata          |                                           |
+--------------------------------+-------------------------------------------+

Note that the id in the output above determines the destination path for rsync in the next step. migrate-vm-volume is an arbitrary volume name, and 64 is the disk size in GB as found on the source VZ7 node.
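The destination path follows a fixed pattern derived from the volume id. The sketch below builds it for the id shown above; the datastore root is the one used elsewhere in this article, so verify it on your cluster before copying.

```shell
# Volume id as reported by 'vinfra service compute volume create'
VOLUME_ID="024b6843-2de3-4e25-a6e1-2b6ea2d601cf"

# Cinder datastore root on Acronis Cyber Infrastructure storage
# (assumption: matches the default layout shown in this article)
DATASTORE="/mnt/vstorage/vols/datastores/cinder"

# The volume file lives in a directory named after the volume,
# and the file itself carries the same volume-<id> name
DEST="$DATASTORE/volume-$VOLUME_ID/volume-$VOLUME_ID"

echo "$DEST"
```

This produces the same path used as the rsync target in the next step.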

3. Copy the source image to the volume created in the previous step.

# rsync -av root@<source node IP>:<path to the source VM virtual disk> <path and ID of the volume created at step 2>

Here is a sample command:

[root@node01 ~]# rsync -av root@192.168.1.18:/vz/vmprivate/15f8a047-a00e-4cb8-a8db-2fdb7322cd49/harddisk.hdd /mnt/vstorage/vols/datastores/cinder/volume-024b6843-2de3-4e25-a6e1-2b6ea2d601cf/volume-024b6843-2de3-4e25-a6e1-2b6ea2d601cf

In this example, 192.168.1.18 is the source node IP; /vz/vmprivate/15f8a047... is the path to the source VM virtual disk; and /mnt/vstorage/vols/datastores/cinder/volume-024b6843.. is the destination path, which includes the ID of the volume created in the previous step.
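Before building the VM, it can be worth confirming the transfer completed intact by comparing checksums of the source and destination files. The sketch below is hypothetical and operates on placeholder temp files; on real nodes you would run sha256sum against the source disk (over ssh) and the destination volume file.

```shell
# Placeholder files standing in for the source disk and the copied volume
SRC=$(mktemp) ; DST=$(mktemp)
printf 'disk-data' > "$SRC"
cp "$SRC" "$DST"          # stands in for the rsync transfer

# Compare checksums of both sides; a mismatch means the copy is incomplete
SRC_SUM=$(sha256sum "$SRC" | awk '{print $1}')
DST_SUM=$(sha256sum "$DST" | awk '{print $1}')
[ "$SRC_SUM" = "$DST_SUM" ] && echo "checksums match"

rm -f "$SRC" "$DST"
```

Alternatively, re-running the same rsync command with the -c (checksum) option re-verifies the transfer without a manual comparison.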

4. Create a VM with the following command:

# vinfra service compute server create <VM_name> --network id=<network> --volume source=volume,id=<volume_ID_or_name_from_step_2>,size=<size> --flavor <flavor_ID_or_name>

Example of command execution:

[root@node01 ~]# vinfra service compute server create migrate-vm --network id=public --volume source=volume,id=migrate-vm-volume,size=64 --flavor medium
+--------------+--------------------------------------+
| Field        | Value                                |
+--------------+--------------------------------------+
| config_drive |                                      |
| created      | 2019-10-09T13:39:38Z                 |
| description  |                                      |
| fault        |                                      |
| flavor       | disk: 0                              |
|              | ephemeral: 0                         |
|              | extra_specs: {}                      |
|              | original_name: medium                |
|              | ram: 4096                            |
|              | swap: 0                              |
|              | vcpus: 2                             |
| ha_enabled   | True                                 |
| host         |                                      |
| id           | 7b18bacb-8c3c-4c56-a531-94515a2ed510 |
| key_name     |                                      |
| metadata     | {}                                   |
| name         | migrate-vm                           |
| networks     | []                                   |
| power_state  | NOSTATE                              |
| project_id   | b6a94a03039043e2b69ff4ccab01f256     |
| status       | BUILD                                |
| task_state   | scheduling                           |
| updated      | 2019-10-09T13:39:39Z                 |
| user_data    |                                      |
| volumes      | []                                   |
+--------------+--------------------------------------+

The resulting VM will be connected to the public network and will run with the medium flavor in the default admin project (so it can be managed only by the platform administrator).
