Hello IFFOSS Community,
I wanted to share details of my current home system storage
configuration. I’ve been refining my approach to storage flexibility
using Logical Volume Management (LVM) and thought this might be a good
setup to show off or get some feedback on. I can tell you, I will never go back to a plain `/` plus `/home` partition scheme.

As a test, I added new drives to give more space to home, and it worked brilliantly; the next drives I add will be server drives. The whole setup took me about 2.5 days, and a final error check came back with all systems OK. I hope you like it.
Instead
of one monolithic volume group, I’ve segmented my storage goals into
three distinct, dedicated Volume Groups (VGs), mixing fast NVMe storage
for the OS with larger, slower HDDs for bulk data.
Here is an overview of how it is laid out:
My Configuration Summary
I am running three separate Volume Groups:
- vg_root: The operating system brains.
- vg_data: Fast access and general storage.
- vg_home: Bulk storage for user directories.
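For anyone curious how a layout like this gets built, here is a minimal dry-run sketch of the commands involved. The device names match my `pvs` output below, but the LV sizes (like the 8G swap) are placeholders, and the `run` wrapper only prints each command rather than executing it; swap the `echo` out for the real thing (as root) when you actually mean it.

```shell
#!/bin/sh
# Dry-run sketch: prints each LVM command instead of executing it.
run() { echo "+ $*"; }

# vg_root: OS + swap on the first NVMe
run pvcreate /dev/nvme0n1p2
run vgcreate vg_root /dev/nvme0n1p2
run lvcreate -L 8G -n lv_swap vg_root          # swap size is a placeholder
run lvcreate -l 100%FREE -n lv_root vg_root

# vg_data: second NVMe pooled with a 1TB HDD
run pvcreate /dev/nvme1n1p1 /dev/sda1
run vgcreate vg_data /dev/nvme1n1p1 /dev/sda1
run lvcreate -l 100%FREE -n general_data_lv vg_data

# vg_home: three HDDs pooled for /home
run pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
run vgcreate vg_home /dev/sdb1 /dev/sdc1 /dev/sdd1
run lvcreate -l 100%FREE -n lv_home vg_home
```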
The Hardware Behind It All
Here is the output from sudo pvs showing the physical layout:
```
PV              VG       Fmt  Attr PSize    PFree
/dev/nvme0n1p2  vg_root  lvm2 a--  <465.27g    0
/dev/nvme1n1p1  vg_data  lvm2 a--  <476.94g    0
/dev/sda1       vg_data  lvm2 a--  <931.51g    0
/dev/sdb1       vg_home  lvm2 a--  <931.51g    0
/dev/sdc1       vg_home  lvm2 a--  <931.51g    0
/dev/sdd1       vg_home  lvm2 a--  <465.76g    0
```
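With this many PVs it's handy to total up PSize per group instead of eyeballing it. A small awk script over `pvs` output does the trick; here I inline my sample output via a heredoc just so the snippet is self-contained, but in practice you would pipe `sudo pvs --noheadings` straight into the awk.

```shell
#!/bin/sh
# Sum physical volume sizes per VG from pvs-style output.
# The sample data is inlined here; normally: sudo pvs | awk '...'
summary=$(awk 'NR>1 { gsub(/[<g]/, "", $5); size[$2] += $5 }
               END  { for (vg in size) printf "%s %.2fG\n", vg, size[vg] }' <<'EOF'
PV              VG       Fmt  Attr PSize    PFree
/dev/nvme0n1p2  vg_root  lvm2 a--  <465.27g    0
/dev/nvme1n1p1  vg_data  lvm2 a--  <476.94g    0
/dev/sda1      vg_data  lvm2 a--  <931.51g    0
/dev/sdb1      vg_home  lvm2 a--  <931.51g    0
/dev/sdc1      vg_home  lvm2 a--  <931.51g    0
/dev/sdd1      vg_home  lvm2 a--  <465.76g    0
EOF
)
echo "$summary"
```

This reproduces the per-VG sizes in the table below (465.27G, 1408.45G, 2328.78G) without manual arithmetic.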
Breakdown of the VGs
| VG Name | Size | Components | Purpose |
|---|---|---|---|
| vg_root | ~465 GB | Single NVMe drive (/dev/nvme0n1p2) | OS (lv_root), Swap (lv_swap) |
| vg_data | ~1.38 TB | NVMe + 1 TB HDD | General data (general_data_lv), used for projects needing speed |
| vg_home | ~2.27 TB | 3 HDDs (931G + 931G + 465G) | Large home directory storage (lv_home) |
Why I Like This Setup:
- Performance Segmentation: My OS boots and runs quickly from a dedicated NVMe drive (vg_root).
- Scalability: When vg_data or vg_home starts to fill up (which they currently are, with VFree showing 0!), I can easily add another physical drive to the specific volume group that needs the space without disrupting the others.
- Clean Separation: It's easy to understand where data lives and how different performance tiers are managed.
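That scalability point is really the whole payoff. When a VG does fill up, the expansion path is only a few commands. Here is a dry-run sketch of adding a new disk to vg_home and growing lv_home into it (the `run` wrapper just prints; `/dev/sde1` is a hypothetical device name). The `-r` flag on lvextend resizes the filesystem in the same step, so no separate resize2fs is needed.

```shell
#!/bin/sh
# Dry-run sketch: prints the commands for growing vg_home onto a new disk.
# /dev/sde1 is a hypothetical device; run these for real as root.
run() { echo "+ $*"; }

run pvcreate /dev/sde1                              # initialize the new drive as a PV
run vgextend vg_home /dev/sde1                      # add it to the volume group
run lvextend -r -l +100%FREE /dev/vg_home/lv_home  # grow LV and filesystem together
run vgs vg_home                                     # confirm the new VSize/VFree
```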
Everything is allocated right up to the maximum capacity of each VG at the moment, as sudo vgs shows:
```
VG      #PV #LV #SN Attr   VSize    VFree
vg_data   2   1   0 wz--n- <1.38t      0
vg_home   3   1   0 wz--n-  2.27t      0
vg_root   1   2   0 wz--n- <465.27g    0
```
Next steps: I am planning to add another drive to vg_home soon to expand my user directory space.
Has anyone else here organized their LVM this way? Any tips for managing this many VGs efficiently?
Cheers,
JackFrost