Sometimes I want to be the one to implement simplicity. I don't really want the snaps or Juju charms, and creating default 4G LVs only creates more work for me.
I discovered a nice little oddity between Ubuntu 16.04 LTS and 18.04 LTS that needed to be addressed on my Intel NUC home lab "servers". If you run into this same issue, you might be thankful that I decided to share this little tidbit.
I try to keep my equipment as up-to-date as possible. You really want to do this as well, especially with security patches. This is one thing I don't slack off about. I was in security for years before jumping to cloud architecture, and I have a lot of respect for security professionals, not to mention the badasses out there who could own my equipment. Sometimes this means just wiping the slate clean and starting over. After all, this is a learning lab. But when I started installing Ubuntu 18.04 LTS for OpenStack (kolla-ansible), I noticed this little nugget of polished cow-dung:
```
bjozsa@galvatron:~$ df -H
Filesystem                         Size  Used Avail Use% Mounted on
udev                               136G     0  136G   0% /dev
tmpfs                               28G  2.6M   28G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  4.2G  2.6G  1.5G  64% /
tmpfs                              136G     0  136G   0% /dev/shm
tmpfs                              5.3M     0  5.3M   0% /run/lock
tmpfs                              136G     0  136G   0% /sys/fs/cgroup
/dev/sda2                          1.1G   80M  874M   9% /boot
/dev/sda1                          536M  6.4M  530M   2% /boot/efi
/dev/loop0                          96M   96M     0 100% /snap/core/6350
tmpfs                               28G     0   28G   0% /run/user/1000
bjozsa@galvatron:~$
```
The installer (by default) never really prompted me to increase the LVM. In fact, during installation it showed that the entire drive was going to be used in the LV. To be fair to Canonical, I'm sure this is all hidden under some "Advanced" option (which is typical for them). But by default, this could be a little misleading to the administrator. If you've made it to this page, you're probably thinking the same thing: "why was there no other advanced warning?!"
So I started digging in to find out exactly how Canonical was performing a default LVM installation. Turns out, they break the disk up into three partitions, with only the last one handed to LVM as a physical volume.
```
bjozsa@galvatron:~$ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 40 TiB, 44002476818432 bytes, 85942337536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C12BD4BB-54FE-4132-99E9-948F2176B52F

Device       Start         End     Sectors  Size Type
/dev/sda1     2048     1050623     1048576  512M EFI System
/dev/sda2  1050624     3147775     2097152    1G Linux filesystem
/dev/sda3  3147776 85942335487 85939187712   40T Linux filesystem

Command (m for help): ^C
bjozsa@galvatron:~$
```
So at this point, I kind of know where this is going. If you've found yourself in this position as well, don't despair. The installer has already created the LVM VG, so you just need to resize the LV. This could take a little while, as it did in my case, because I'm backing this little server with a 40T volume. It took roughly a few minutes to extend the drive from 4G to 40T.
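Before resizing anything, it's worth confirming how much free space the volume group actually has to grow into. A quick sketch, assuming the default Ubuntu 18.04 names (`ubuntu-vg` / `ubuntu-lv`); yours may differ:

```shell
# List physical volumes, volume groups, and logical volumes.
sudo pvs
sudo vgs
sudo lvs

# "Free  PE / Size" in vgdisplay shows how much unallocated
# space is left in the VG for the LV to grow into.
sudo vgdisplay ubuntu-vg
```

On a fresh 18.04 install you should see the VG sized to (nearly) the whole third partition, with the 4G LV carved out of it and the rest sitting free.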
So how do you make the proper changes? It's simple. If you want the LV to consume the remaining free space in your volume group, just run the following command, regardless of disk size. The `--resizefs` flag grows the filesystem along with the LV in one shot.
```
bjozsa@galvatron:~$ sudo lvextend -l 100%FREE --resizefs /dev/ubuntu-vg/ubuntu-lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 4.00 GiB (1024 extents) to 40.01 TiB (10489599 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 5122
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 10741349376 (4k) blocks long.
bjozsa@galvatron:~$
```
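If you'd rather do it in two steps (say, to sanity-check the LV size before touching the filesystem), the same result can be reached with `lvextend` followed by `resize2fs`. This is just a sketch, assuming the default Ubuntu VG/LV names and an ext4 root:

```shell
# Step 1: extend the LV by all remaining free extents in the VG.
# (+100%FREE means "grow by everything that's left".)
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

# Step 2: grow the ext4 filesystem to fill the resized LV.
# resize2fs handles mounted ext4 filesystems online, so no reboot needed.
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
```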
Afterwards, you will notice that you have consumed the entire volume. Now, I wouldn't normally recommend this for real environments, but for a lab this works out really well. You can see that we now are using the entire 40T of space.
```
bjozsa@galvatron:~$ df -H
Filesystem                         Size  Used Avail Use% Mounted on
udev                               136G     0  136G   0% /dev
tmpfs                               28G  2.5M   28G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   44T  2.9G   42T   1% /
tmpfs                              136G     0  136G   0% /dev/shm
tmpfs                              5.3M     0  5.3M   0% /run/lock
tmpfs                              136G     0  136G   0% /sys/fs/cgroup
/dev/sda2                          1.1G   80M  874M   9% /boot
/dev/sda1                          536M  6.4M  530M   2% /boot/efi
/dev/loop0                          96M   96M     0 100% /snap/core/6350
tmpfs                               28G     0   28G   0% /run/user/1000
bjozsa@galvatron:~$
```
Anyway, I hope that helps someone out on the inter-webs. This was a quick one for today! Take care, and always keep learning...
Disclaimer: Although I am doing this terrible thing, I don't want to even suggest that it is good practice. In fact, a single enormous root volume is actually a bad thing, potentially even risky. I'm just telling you there's an easy fix, because I know you're going to do it anyway. - Enjoy!