Portworx volume does not detach successfully when using multipath

Issue:

  • Pods are unable to mount a Portworx volume after the Pod is rescheduled to another node; the following event is reported (see the sketch after the error output for a way to surface it):
     MountVolume.SetUp failed for volume "pvc-ba85504a-c6bf-4352-9be5-4d8b995f4bd8" 
         : rpc error: code = AlreadyExists desc = Volume is attached on another node
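
    The event can be seen on the rescheduled Pod with standard kubectl commands. The
    sketch below is illustrative only; the Pod name and namespace are placeholders:

    # Show the mount failure event on the rescheduled Pod
    kubectl describe pod <pod-name> -n <namespace> | grep -i -A3 "MountVolume.SetUp failed"

    # Or list recent events in the namespace and filter for mount failures
    kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp | grep -i failedmount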
    
    

Resolution:

Blacklist all Portworx volumes in the multipath configuration file so that multipathd does not hold the PX volume devices open. This must be done on every node in the cluster and requires restarting the multipathd service. The example below shows how to blacklist all Portworx devices in multipathd.

# vi /etc/multipath.conf
...
blacklist {
  devnode "^pxd*"
}
...
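
After updating /etc/multipath.conf, restart multipathd on every node so the blacklist takes effect. The commands below are a minimal sketch assuming a systemd-based host and the standard multipath-tools CLI; the map name is a placeholder:

# Restart multipathd so the updated blacklist is applied
systemctl restart multipathd

# Verify that no multipath maps remain on top of Portworx (pxd) devices
multipath -ll | grep -i pxd

# If a stale map is still listed, flush it by name
multipath -f <map-name>

Repeat this on every node in the cluster.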

Root cause:

On the node where the Pod was previously running, multipathd holds the Portworx block device open. This prevents the volume from being detached from that node, and therefore it cannot be attached on the node where the Pod was rescheduled.
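
To confirm this on the node that still holds the volume, check whether device-mapper has claimed the Portworx block device. This is an illustrative sketch; the pxd device name pattern is an assumption and may differ in your environment:

# A dm-* entry under holders/ means device-mapper (for example multipath) has claimed the device
ls /sys/block/pxd*/holders/

# Look for a multipath map built on top of a pxd device
multipath -ll | grep -i pxd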