How to perform in-place volume restores with Portworx via CLI

In-place volume restores with Portworx are useful when you want to restore data directly into the volume your application already uses, rather than creating a separate restored volume alongside the original.

A Kubernetes-objects method is described in the Portworx documentation at https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/on-demand/snaps-cloud/#restore-cloud-snapshots (refer to the section “Restore a cloud snapshot to the original PVC”).

You can also do this operation via the CLI. There is no single command that performs it directly, but you can achieve the same result with the steps below.

  • Let’s say we have a volume test1vol with repl 3 on pool IDs p1, p2, and p3. (You can check the pool IDs with pxctl volume inspect, under the “Replica sets on nodes” section.)
[root@worker1-node test1]# pxctl volume inspect test1vol
Volume  :  877353100090228711
        Name                     :  test1vol
        Size                     :  2.0 GiB
        Format                   :  ext4
        HA                       :  3
        IO Priority              :  LOW
        Creation time            :  Sep 7 11:50:34 UTC 2020
        Shared                   :  no
        Status                   :  up
        State                    :  Attached: ada3bd33-a6cf-4f01-9d75-4bcda8b88b9e (192.168.29.31)
        Device Path              :  /dev/pxd/pxd877353100090228711
        Reads                    :  56
        Reads MS                 :  3972
        Bytes Read               :  1114112
        Writes                   :  6325
        Writes MS                :  5801656
        Bytes Written            :  4296347648
        IOs in progress          :  0
        Bytes used               :  201 MiB
        Replica sets on nodes:
                Set 0
                  Node           : 192.168.29.33 (Pool 0d7ddaf9-3742-4d95-9df0-c411ec1b44f8 )
                  Node           : 192.168.29.31 (Pool 2bd7d5c2-cc96-4218-8b1d-8ddeb7860116 )
                  Node           : 192.168.29.37 (Pool 1044ff8d-bcfd-441a-9465-845e3da61655 )
        Replication Status       :  Up
  • Get the cloud-snap-id to restore the volume from.
[root@worker1-node test1]# pxctl cs list
SOURCEVOLUME            SOURCEVOLUMEID                  CLOUD-SNAP-ID                                                                           CREATED-TIME                            TYPE            STATUS
test1vol                877353100090228711              b4fe6cde-80df-46fe-9eb7-c2691f4ed543/877353100090228711-839669093149144139              Wed, 30 Sep 2020 07:56:52 UTC           Manual          Done
test1vol                877353100090228711              b4fe6cde-80df-46fe-9eb7-c2691f4ed543/877353100090228711-362071880145893630-incr         Wed, 30 Sep 2020 07:58:28 UTC           Manual          Done
test1vol                877353100090228711              b4fe6cde-80df-46fe-9eb7-c2691f4ed543/877353100090228711-356482201843908268-incr         Wed, 30 Sep 2020 07:59:28 UTC           Manual          Done
  • Restore your cloud snapshot to a new temporary volume tempVol on one of the pools used by test1vol (it can be p1, p2, or p3; here we select p1).
pxctl cloudsnap restore -s b4fe6cde-80df-46fe-9eb7-c2691f4ed543/877353100090228711-362071880145893630-incr  -v tempVol -n 0d7ddaf9-3742-4d95-9df0-c411ec1b44f8
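The restore runs in the background; as a quick check, you can track its progress and wait for it to show Done:
pxctl cloudsnap status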
  • By default, tempVol will have repl 1. We need to bring tempVol up to the same replication level as test1vol, on the same set of pool IDs, so increase its replica count using ha-update (p2 and p3 in our example).

pxctl v ha-update tempVol --repl 2 -n 2bd7d5c2-cc96-4218-8b1d-8ddeb7860116
pxctl v ha-update tempVol --repl 3 -n 1044ff8d-bcfd-441a-9465-845e3da61655
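
Once both ha-update operations finish resyncing, you can confirm that tempVol now has replicas on the same set of pools as test1vol:

pxctl volume inspect tempVol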

  • Scale down your deployment so that volume test1vol goes into the detached state.
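
For example, assuming the application is a Deployment named test1-app (a hypothetical name) in the default namespace, you could scale it down and then confirm the volume is detached:

kubectl scale deployment test1-app --replicas=0
pxctl volume inspect test1vol   # State should now show Detached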

  • Perform in-place restore of volume test1vol from tempVol

pxctl volume restore test1vol -s tempVol

  • Scale your deployment back up.
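
Continuing the hypothetical example, scale the Deployment back to its original replica count:

kubectl scale deployment test1-app --replicas=1   # use your original replica count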

  • Finally, delete the temporary volume tempVol that was created for the restore.

pxctl volume delete tempVol

  • Your original volume is now restored to the desired point in time, without a separate restored volume being left behind for your application to switch to.