Increase VM Disk Size
- Power off the VM.
- Increase the primary virtual disk and Ceph virtual disk sizes from the hypervisor. Ceph uses the second disk attached to the virtual machine. To increase the disk size, perform the steps corresponding to your hypervisor.

Note: If your VM has a snapshot, the option to increase the disk size is grayed out and unavailable. To increase the hard disk size for a VM with a snapshot, you must first delete the snapshot in the VMware ESXi 8.0 server.
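As a hedged illustration only, the hypervisor-side resize can look like the following on Proxmox (qm resize) or KVM/libvirt (virsh blockresize). The VM ID, disk names (virtio0/virtio1), domain name, and image path below are placeholder assumptions, not values from this deployment, so the commands are left commented out for you to adapt first:

```shell
#!/bin/sh
# Sketch only: VMID, disk names, domain name, and image path are
# hypothetical placeholders -- adjust for your environment.
VMID=104
GROW_BY=+100G

# Proxmox: grow the primary disk and the second (Ceph) disk:
#   qm resize "$VMID" virtio0 "$GROW_BY"
#   qm resize "$VMID" virtio1 "$GROW_BY"

# KVM/libvirt: grow a running domain's disk to an absolute size:
#   virsh blockresize vm4 /var/lib/libvirt/images/vm4-ceph.qcow2 100G

echo "would run: qm resize $VMID virtio0 $GROW_BY"
```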
- Power on the VM.
- Log in to the Linux root shell of the VM.
- Verify that the disk size has increased. For example, on Proxmox and KVM hosted VMs:

root@vm4:~# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sr0      11:0    1     4M  0 rom
nbd0     43:0    0     0B  0 disk
nbd1     43:32   0     0B  0 disk
nbd2     43:64   0     0B  0 disk
nbd3     43:96   0     0B  0 disk
nbd4     43:128  0     0B  0 disk
nbd5     43:160  0     0B  0 disk
nbd6     43:192  0     0B  0 disk
nbd7     43:224  0     0B  0 disk
vda     252:0    0   400G  0 disk
├─vda1  252:1    0 399.9G  0 part /var/lib/kubelet/pods/82d639b3-56ad-4aa3-8ae5-853542060a90/volume-subpaths/config/network/0
│                                 /var/lib/kubelet/pods/badeffa6-bd3b-4d3c-8d4e-852c4f539ac4/volume-subpaths/features/papi-ws/2
│                                 /var/lib/kubelet/pods/badeffa6-bd3b-4d3c-8d4e-852c4f539ac4/volume-subpaths/config/papi-ws/0
│                                 /var/lib/kubelet/pods/e502baaa-9cc8-4770-9cfd-b5643d021c13/volume-subpaths/cfssl-data/cfssl/2
│                                 /var/lib/kubelet/pods/e502baaa-9cc8-4770-9cfd-b5643d021c13/volume-subpaths/cfssl-data/cfssl/1
<output snipped>
│                                 /export/local-volumes/pv4
│                                 /export/local-volumes/pv3
│                                 /export/local-volumes/pv2
│                                 /export/local-volumes/pv1
│                                 /var/lib/kubelet/pods/29e73626-169c-4ff0-a3b1-b5ab96b0d57f/volume-subpaths/tigera-ca-bundle/calico-node/6
│                                 /
├─vda14 252:14   0     4M  0 part
└─vda15 252:15   0   106M  0 part /boot/efi
vdb     252:16   0   100G  0 disk

Here, vda (the primary disk) has increased to 400 GB and vdb (the Ceph disk) has increased to 100 GB.

If the vda disk size has not increased as configured, increase it manually by executing the following commands:
root@vm4:~# growpart /dev/vda 1
root@vm4:~# resize2fs /dev/vda1

Rerun the lsblk command to ensure that the disk size has increased.

Note: On ESXi hosted VMs, the disk and partition might be named sda and sda1, respectively.
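To check whether the partition still needs to grow, you can compare the byte sizes that lsblk reports for the disk and the partition. This sketch uses hard-coded sample byte counts (assumptions); on a live VM you would obtain them with lsblk -bno SIZE /dev/vda and lsblk -bno SIZE /dev/vda1:

```shell
#!/bin/sh
# Sample values (assumptions): a 400 GiB disk whose partition is still
# only ~50 GiB, i.e. the partition has not yet been grown.
disk_bytes=429496729600    # e.g. from: lsblk -bno SIZE /dev/vda
part_bytes=53685353984     # e.g. from: lsblk -bno SIZE /dev/vda1

if [ "$part_bytes" -lt "$disk_bytes" ]; then
  echo "partition smaller than disk: run growpart /dev/vda 1 and resize2fs /dev/vda1"
else
  echo "partition already spans the disk"
fi
```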
- Restart the Ceph OSD to detect the size change. For example:

Verify the existing OSD size:

Launch the Rook tools pod.

root@vm1:~# kubectl exec -ti -n rook-ceph $(kubectl get po -n rook-ceph -l app=rook-ceph-tools -o jsonpath={..metadata.name}) -- bash

Retrieve the current OSD size.

bash-4.4$ ceph osd status
ID  HOST  USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  vm2   2846M  47.2G       0     2633       0        0  exists,up
 1  vm3   3132M  46.9G       1    28.6k       0    1135k  exists,up
 2  vm4   3065M  47.0G       4    23.9k       3     201k  exists,up
 3  vm1   2897M  47.1G       1     2698       1     979k  exists,up

In this example, you are modifying the OSD on vm4. It still shows the original size, which is approximately 50 GB (USED + AVAIL).
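The ~50 GB figure is simply USED plus AVAIL from the vm4 row of ceph osd status. A quick sketch of that arithmetic (sample values copied from the output above, with the megabyte-to-gigabyte conversion done in awk):

```shell
#!/bin/sh
# Approximate one OSD's capacity from `ceph osd status` columns.
used_mib=2846     # USED column for vm4: "2846M"
avail_gib=47.2    # AVAIL column for vm4: "47.2G"
total_gib=$(awk -v u="$used_mib" -v a="$avail_gib" 'BEGIN { printf "%.1f", u/1024 + a }')
echo "approx OSD size: ${total_gib} GiB"   # ~50 GiB
```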
Go back to the Linux root shell and determine the OSD pod that runs on the node whose disk size you increased (vm4).
root@vm1:~# kubectl get pod -A -o wide | grep osd | grep vm4
...
rook-ceph   rook-ceph-osd-2-787df64c87-bkjt8   2/2   Running   2 (4d3h ago)   13d   10.1.2.8   vm4   <none>   <none>
...
Restart the pod by restarting its deployment. Note that kubectl rollout restart takes the deployment name (rook-ceph-osd-2), not the pod name.

root@vm1:~# kubectl rollout restart deploy -n rook-ceph rook-ceph-osd-2
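The deployment name is the pod name with the trailing ReplicaSet hash and pod suffix removed. A sketch of that derivation, using the pod name from the example output above:

```shell
#!/bin/sh
# Derive the Deployment name from a pod name by stripping the last two
# dash-separated fields (ReplicaSet hash + pod suffix).
POD=rook-ceph-osd-2-787df64c87-bkjt8
DEPLOY=${POD%-*-*}    # shortest suffix matching "-*-*" is removed
echo "$DEPLOY"        # rook-ceph-osd-2
```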
Verify the new size.
Launch the Rook tools pod and execute ceph osd status again.

bash-4.4$ ceph osd status
ID  HOST  USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  vm2   2924M  47.1G       0     4403       0        0  exists,up
 1  vm3   3202M  46.8G       0     426k       0    1140k  exists,up
 2  vm4   1474M  98.5G       0      477       1     186k  exists,up
 3  vm1   2977M  47.0G       3    21.4k       3    1008k  exists,up

Here, the OSD on vm4 has increased to approximately 100 GB.
Verify that the total size has also increased.
bash-4.4$ ceph status
...
  data:
    ...
    usage:   56 GiB used, 194 GiB / 250 GiB avail

Here, the total size has increased from 200 GB to 250 GB.

Note: In this example, you increased the size of the Ceph storage on one node VM. You must increase the size on all the node VMs to keep the storage value consistent across all the nodes. Repeat these steps for the remaining three VMs.
- After you increase the total size of Ceph storage, you must update the allocation quota between object storage and PVC. Log in to the Deployment Shell of the VM to increase the allocation quota.

root@vm1> request deployment deploy cluster input "-t rook-quota"
Process running with PID: 1830232
To track progress, run 'monitor start /epic/config/log'