Increase VM Disk Size
- Power off the VM.
- Increase the primary virtual disk and Ceph virtual disk sizes from the hypervisor. Ceph uses the second disk attached to the virtual machine. To increase the disk size, perform the steps corresponding to your hypervisor. An illustrative example follows the note below.
Note: If your VM has a snapshot, the option to increase the disk size is grayed out and unavailable. To increase the hard disk size for a VM that has a snapshot, you must first delete the snapshot on the VMware ESXi 8.0 server.
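The exact procedure depends on your hypervisor. As an illustration only, if the VMs run on a KVM/QEMU host, the backing disk images could be grown from the host while the VM is powered off. The image paths below are hypothetical examples and must be replaced with the paths used in your deployment:

# On the KVM host: grow the primary disk to 400 GB and the Ceph disk to 100 GB.
# The image paths are hypothetical; use your own.
qemu-img resize /var/lib/libvirt/images/vm4-disk1.qcow2 400G
qemu-img resize /var/lib/libvirt/images/vm4-disk2.qcow2 100G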
- Power on the VM.
- Log in to the Linux root shell of the VM.
- Verify that the disk size has increased. For example:
root@vm4:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0    7:0    0   64M  1 loop /snap/core20/2379
loop1    7:1    0 63.7M  1 loop /snap/core20/2434
loop2    7:2    0   87M  1 loop /snap/lxd/29351
loop3    7:3    0 89.4M  1 loop /snap/lxd/31333
loop4    7:4    0 38.8M  1 loop /snap/snapd/21759
loop5    7:5    0 44.3M  1 loop /snap/snapd/23258
sr0     11:0    1    4M  0 rom
nbd0    43:0    0    0B  0 disk
nbd1    43:32   0    0B  0 disk
nbd2    43:64   0    0B  0 disk
nbd3    43:96   0    0B  0 disk
nbd4    43:128  0    0B  0 disk
nbd5    43:160  0    0B  0 disk
nbd6    43:192  0    0B  0 disk
nbd7    43:224  0    0B  0 disk
vda    252:0    0  400G  0 disk
├─vda1 252:1    0    1M  0 part
└─vda2 252:2    0  400G  0 part /var/lib/kubelet/pods/fde8c46d-f069-4203-bd4e-3897d5915559/volume-subpaths/config/network/0
                                ....
                                /export/local-volumes/pv1
                                /
vdb    252:16   0  100G  0 disk

Here, vda (the primary disk) has increased to 400 GB and vdb (the Ceph disk) has increased to 100 GB.
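If you prefer a shorter listing that shows only the two disks (omitting the loop, nbd, and partition entries), lsblk can be limited to those devices. The device names below assume the same virtio layout as in the example above:

root@vm4:~# lsblk -d -o NAME,SIZE /dev/vda /dev/vdb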
- Restart the Ceph OSD to detect the size change. For example:
Verify the existing OSD size.
Launch the Rook tools pod.
root@vm1:~# kubectl exec -ti -n rook-ceph $(kubectl get po -n rook-ceph -l app=rook-ceph-tools -o jsonpath={..metadata.name}) -- bash

Retrieve the current OSD size.
bash-4.4$ ceph osd status
ID  HOST   USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  vm2   2846M  47.2G      0     2633       0        0   exists,up
 1  vm3   3132M  46.9G      1    28.6k       0    1135k   exists,up
 2  vm4   3065M  47.0G      4    23.9k       3     201k   exists,up
 3  vm1   2897M  47.1G      1     2698       1     979k   exists,up
In this example, you are modifying the OSD on vm4. The output still shows the original size of approximately 50 GB (USED + AVAIL).
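As an optional cross-check, ceph osd df reports the raw SIZE of each OSD directly, which avoids adding USED and AVAIL by hand:

bash-4.4$ ceph osd df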
Go back to the Linux root shell and determine the OSD pod that runs on the node whose disk size you increased (vm4).
root@vm1:~# kubectl get pod -A -o wide | grep osd | grep vm4
...
rook-ceph   rook-ceph-osd-2-787df64c87-bkjt8   2/2   Running   2 (4d3h ago)   13d   10.1.2.8   vm4   <none>   <none>
...
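If you prefer not to grep, the same pod can be found with a label and field selector. This assumes the standard Rook label app=rook-ceph-osd on OSD pods:

root@vm1:~# kubectl get pod -n rook-ceph -l app=rook-ceph-osd -o wide --field-selector spec.nodeName=vm4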
Restart the OSD. The pod name shows that it belongs to the deployment rook-ceph-osd-2, so restart that deployment.
root@vm1:~# kubectl rollout restart deploy -n rook-ceph rook-ceph-osd-2
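Optionally, wait for the restarted OSD pod to come back up before checking the size again:

root@vm1:~# kubectl rollout status deploy -n rook-ceph rook-ceph-osd-2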
Verify the new size.
Launch the Rook tools pod and execute ceph osd status again.
bash-4.4$ ceph osd status
ID  HOST   USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  vm2   2924M  47.1G      0     4403       0        0   exists,up
 1  vm3   3202M  46.8G      0     426k       0    1140k   exists,up
 2  vm4   1474M  98.5G      0      477       1     186k   exists,up
 3  vm1   2977M  47.0G      3    21.4k       3    1008k   exists,up
Here, the OSD on vm4 has increased to approximately 100 GB.
Verify that the total size has also increased.
bash-4.4$ ceph status
...
  data:
    ...
    usage:   56 GiB used, 194 GiB / 250 GiB avail

Here, the total size has increased from 200 GB to 250 GB.
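For a per-pool breakdown of how the new capacity is being used, you can also run ceph df from the same tools pod:

bash-4.4$ ceph df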
Note: In this example, you increased the size of the Ceph storage on one node VM. You must increase the size on all the node VMs to make the storage value consistent across all the nodes. Repeat these steps for the remaining three VMs.
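After the disks on the remaining VMs have been grown and those VMs are back up, their OSD deployments can be restarted one at a time from the Linux root shell. This is a sketch that assumes the OSD-to-VM mapping shown in the ceph osd status output above (OSD 2 on vm4 has already been restarted):

# Restart the remaining OSD deployments one at a time, waiting for each to come back up.
for i in 0 1 3; do
  kubectl rollout restart deploy -n rook-ceph rook-ceph-osd-$i
  kubectl rollout status deploy -n rook-ceph rook-ceph-osd-$i
done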
- After you increase the total size of Ceph storage, you must update the allocation quota between object storage and PVC. Log in to the Paragon Shell of the VM and update the allocation quota:
root@vm1> request paragon deploy cluster input "-t rook-quota"
Process running with PID: 1830232
To track progress, run 'monitor start /epic/config/log'
Note: After the total Ceph storage allocation is increased, if you want to increase the TimescaleDB PVC size, you must increase it manually. To increase the PVC size, see Increase Timescale DB PVC.
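To review the current PVC sizes, including the TimescaleDB PVC, before deciding whether to grow them, you can list the PVCs across all namespaces from the Linux root shell:

root@vm1:~# kubectl get pvc -A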