Proudly sharing my LVM setup: Multi-VG, NVMe + HDD Hybrid Storage

Hello IFFOSS Community,

I wanted to share details of my current home system's storage configuration. I've been refining my approach to storage flexibility using Logical Volume Management (LVM) and thought this might be a good setup to show off or get some feedback on. I can tell you, I will never go back to a plain two-partition layout:

/

/home

As a test I added new drives to give more space to /home, and it worked amazingly. The next drives will be server drives. I hope you like it. It took me 2.5 days to set up, and an error check afterwards came back with all systems OK.

Instead of one monolithic volume group, I've segmented my storage goals into three distinct, dedicated Volume Groups (VGs), mixing fast NVMe storage for the OS with larger, slower HDDs for bulk data.

Here is an overview of how it is laid out:

My Configuration Summary

I am running three separate Volume Groups:

  1. vg_root: The operating system brains.

  2. vg_data: Fast access and general storage.

  3. vg_home: Bulk storage for user directories.
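For reference, here is a minimal sketch of how a layout like this could be created from scratch; the device names match my pvs output below, and the sizes in the sketch are illustrative:

bash

# Initialize each partition as an LVM Physical Volume
sudo pvcreate /dev/nvme0n1p2 /dev/nvme1n1p1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# One Volume Group per storage tier
sudo vgcreate vg_root /dev/nvme0n1p2
sudo vgcreate vg_data /dev/nvme1n1p1 /dev/sda1
sudo vgcreate vg_home /dev/sdb1 /dev/sdc1 /dev/sdd1

# Carve out the Logical Volumes
sudo lvcreate -n lv_swap -L 16G vg_root              # swap size is illustrative
sudo lvcreate -n lv_root -l 100%FREE vg_root
sudo lvcreate -n general_data_lv -l 100%FREE vg_data
sudo lvcreate -n lv_home -l 100%FREE vg_home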

The Hardware Behind It All

Here is the output from sudo pvs showing the physical layout:

bash

PV             VG      Fmt  Attr PSize    PFree
/dev/nvme0n1p2 vg_root lvm2 a--  <465.27g    0 
/dev/nvme1n1p1 vg_data lvm2 a--  <476.94g    0 
/dev/sda1      vg_data lvm2 a--  <931.51g    0 
/dev/sdb1      vg_home lvm2 a--  <931.51g    0 
/dev/sdc1      vg_home lvm2 a--  <931.51g    0 
/dev/sdd1      vg_home lvm2 a--  <465.76g    0 


Breakdown of the VGs

| VG Name | Size | Components | Purpose |
| --- | --- | --- | --- |
| vg_root | ~465 GB | Single NVMe drive (/dev/nvme0n1p2) | OS (lv_root), swap (lv_swap) |
| vg_data | ~1.38 TB | NVMe + 1 TB HDD | General data (general_data_lv), used for projects needing speed |
| vg_home | ~2.27 TB | 3 HDDs (931G + 931G + 465G) | Large home directory storage (lv_home) |

Why I Like This Setup:

  • Performance Segmentation: My OS boots and runs quickly from a dedicated NVMe drive (vg_root).

  • Scalability: When my vg_data or vg_home starts to fill up (which they currently are, with VFree showing 0!), I can easily add another physical drive to the specific volume group that needs the space without disrupting the others.

  • Clean Separation: It’s easy to understand where data lives and how different performance tiers are managed.
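For example, a quick way to see which VG is about to need a new drive (a standard vgs report, shown here for illustration):

bash

# Name, total size and free space for every Volume Group
sudo vgs -o vg_name,vg_size,vg_free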

Everything is allocated right up to the maximum capacity of each VG at the moment:

bash

VG      #PV #LV #SN Attr   VSize    VFree
vg_data   2   1   0 wz--n-   <1.38t    0 
vg_home   3   1   0 wz--n-    2.27t    0 
vg_root   1   2   0 wz--n- <465.27g    0 


Next steps: I am planning to add another drive to vg_home soon to expand my user directory space.

Has anyone else here organized their LVM this way? Any tips for managing this many VGs efficiently?

Cheers,
JackFrost

4 Likes

What would happen with this setup if you multi-booted 2 Linuxes?
2 Linuxes cannot share a home directory.
I make a separate sharable personal data directory and leave /home in each Linux just for dot files.
I do that without LVM… I don't see any advantage in using LVM.
I am like you, I have a fast SSD for OSes and HDDs for data, and I also have pluggable SATA disks… I can manage all that without LVM?

3 Likes

Not sure that’s strictly true… Could cause problems maybe with different DEs - but not with headless machines - I’m sure there’d be a way to make it work…

One of my customers - $HOME on EVERY Linux and Solaris server is an NFS mount from a NetApp - and - it works everywhere… It’s kinda brilliant…

I have a conditional in my ~/.bash_profile (or is it ~/.bashrc - I never know which one to use) that determines :

  1. if this is Solaris - is zsh installed?
  2. if this is Linux - is zsh installed?

if zsh isn't installed - nothing -
but if it is - run it! And that then loads my .zshrc!

And - my ~/.ssh/ dir is the same everywhere… Brilliant!
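Something like this minimal sketch (the exact test is my reconstruction, not a copy of the real file):

bash

# In ~/.bash_profile: hand the session to zsh when it exists (Linux or Solaris)
if command -v zsh >/dev/null 2>&1; then
    exec zsh -l   # login zsh, which then picks up ~/.zshrc
fi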

4 Likes

Let’s study the LVM structure a little based on my output. You’ll see I’m pretty maxed out on current space allocation:

PV              VG      Fmt  Attr PSize    PFree
/dev/nvme0n1p2  vg_root lvm2 a--  <465.27g    0
/dev/nvme1n1p1  vg_data lvm2 a--  <476.94g    0
/dev/sda1       vg_data lvm2 a--  <931.51g    0
/dev/sdb1       vg_home lvm2 a--  <931.51g    0
/dev/sdc1       vg_home lvm2 a--  <931.51g    0
/dev/sdd1       vg_home lvm2 a--  <465.76g    0

The PFree column is key here: I have 0 free space (Physical Extents) left in any of my Volume Groups. Everything is used.

Here are my Logical Volumes, showing how that space is assigned:

LV              VG      Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
general_data_lv vg_data -wi-ao----   <1.38t
lv_home         vg_home -wi-ao----    2.27t

My three vg_home disks are pooled to give me that massive 2.27 TB of space for /home.

Options for Adding a New OS:

Since I have no unallocated space within my current pools, I essentially have two main options if I want to install a new OS:

  1. Steal space from a pool (Shrink an LV): I can shrink an existing Logical Volume (like general_data_lv since it’s mostly empty), reclaim that space into the free pool of vg_data, and then create a new Logical Volume for the new OS using that reclaimed space. This requires careful resizing commands.

  2. Add another physical disk: The cleaner, safer option would be to physically add a new SSD or NVMe drive, make it an LVM Physical Volume, and dedicate that new space entirely to the new OS installation.

Both paths are feasible because I’m already using LVM!
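For option 1, a minimal sketch of the shrink dance, assuming ext4 and that general_data_lv is mounted at /mnt/data (consistent with the df output later in the thread); ext4 cannot be shrunk while mounted, so it has to come offline first, and the new LV name is hypothetical:

bash

sudo umount /mnt/data                                            # ext4 must be offline to shrink
sudo e2fsck -f /dev/vg_data/general_data_lv                      # mandatory check before resizing
sudo lvresize --resizefs -L -100G /dev/vg_data/general_data_lv   # shrink filesystem and LV together
sudo mount /mnt/data
sudo lvcreate -n lv_newos -l 100%FREE vg_data                    # hypothetical LV for the new OS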

This setup is for long-term, permanent Linux users; it really is the best way to set up Linux.

2 Likes

Yes, they can! Here’s how to do it using Logical Volumes (LVs) and LVM:

  1. Create a shared Volume Group (VG):

    • On one Linux system, make a new VG called vg_shared_home on an available disk device or partition.
  2. Create a Logical Volume (LV) in the shared VG:

    • Create an LV named lv_home_shared that takes up half of vg_shared_home’s size.
  3. Prepare the shared home directory:

    • Format the LV with an ext4 filesystem (sudo mkfs.ext4 /dev/vg_shared_home/lv_home_shared).

    • Create a mount point (sudo mkdir -p /mnt/shared_home).

    • Mount the formatted LV at the created mount point (sudo mount /dev/vg_shared_home/lv_home_shared /mnt/shared_home).

  4. Set up proper permissions:

    • Change ownership of /mnt/shared_home to both users (sudo chown user1:user2 /mnt/shared_home).
  5. Configure automatic mounting:

    • Add an entry to /etc/fstab on both Linux systems to auto-mount the shared home directory LV at startup.
  6. Update user home directories:

    • Change both users’ home directories to use the shared LV (sudo usermod -d /mnt/shared_home user1 and sudo usermod -d /mnt/shared_home user2).

Now, reboot both Linux systems to ensure automatic mounting of the shared home directory at startup. Log in as each user (user1 and user2) to verify that their home directories are located within the shared LV /mnt/shared_home.

That’s it! Now you have two Linux systems sharing the same home directory using LVM.

I have done this today.

2 Likes

OK, but you have to be very careful with dot files.
The way modern Linux puts stuff in .local and .cache, for example, means that /home has become part of the system, so I prefer to keep personal files somewhere separate.
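One way to implement that separation (a sketch of the idea with hypothetical volume and directory names, not a quote from anyone's actual setup): mount the personal data on a neutral path and symlink it into each distro's local /home, so the dot files never leave their own distro.

bash

# Hypothetical LV holding shared personal data
sudo mount /dev/vg_data/lv_personal /data
ln -s /data/Documents ~/Documents   # dot files stay in each distro's own /home
ln -s /data/Pictures ~/Pictures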

3 Likes

I agree it is neat.
Some of the attraction may disappear if you were to have a single large disk.

1 Like

Yes, you can share /home. But there are consequences. Think of what will happen when 2 distros, possibly running different versions of Firefox, write to the same .mozilla directory. I would not like to try and sort out the mess.

2 Likes

Hate to be a pedant - but doesn’t “chown user1:user2” set the GID on that folder to a group “user2”? i.e. after the colon “:” you specify the Group Owner of the folder… That’s how I read that anyway…

2 Likes

On reflection I think LVM has a lot going for it in your situation, where you want to pool a number of small disks and make one or more larger virtual volumes.

1 Like

You are correct; I made changes for this.

sudo groupadd sharedusers

sudo usermod -aG sharedusers user1
sudo usermod -aG sharedusers user2

sudo chown root:sharedusers /mnt/shared_home

sudo chmod 770 /mnt/shared_home

sudo chmod g+s /mnt/shared_home
1 Like

Here is the correct procedure. This is all new for me as well, so the help is much appreciated. I can say from user experience that I like this setup; however, if a drive failed I'd be up the creek without a paddle. :kissing_face_with_smiling_eyes:

Edit: I am sure that with backups and Timeshift I can restore everything once the new drive is in.

Yes, they can! Here’s how to do it using Logical Volumes (LVs) and LVM:

The Corrected Procedure

1. Create a shared Volume Group (VG)

On one Linux system, make a new VG called vg_shared_home on an available disk device or partition.

2. Create a Logical Volume (LV) in the shared VG

Create an LV named lv_home_shared that takes up half of vg_shared_home’s size.

3. Prepare the shared home directory

Format the LV with an ext4 filesystem, create a mount point, and mount it:

  • sudo mkfs.ext4 /dev/vg_shared_home/lv_home_shared

  • sudo mkdir -p /mnt/shared_home

  • sudo mount /dev/vg_shared_home/lv_home_shared /mnt/shared_home

4. Set up proper permissions (FIXED STEP)

A folder can only have one user owner and one group owner. To share access, create a shared group that both users belong to.

  • Create a new shared group:
    sudo groupadd sharedgroup

  • Add both users to that group (Replace user1 and user2 with actual usernames):
    sudo usermod -aG sharedgroup user1
    sudo usermod -aG sharedgroup user2

  • Change the ownership of the directory (Set owner to root, group owner to sharedgroup):
    sudo chown root:sharedgroup /mnt/shared_home

  • Set permissions (User=rwx, Group=rwx, Others=none):
    sudo chmod 770 /mnt/shared_home

  • Ensure new files inherit the shared group automatically (SetGID bit):
    sudo chmod g+s /mnt/shared_home

(Note: Users must log out and log back in for new group memberships to take effect.)

5. Configure automatic mounting

Add an entry to /etc/fstab on both Linux systems to auto-mount the shared home directory LV at startup:

  • /dev/vg_shared_home/lv_home_shared /mnt/shared_home ext4 defaults 0 2

6. Update user home directories

Change both users’ home directories to use the shared LV:

  • sudo usermod -d /mnt/shared_home user1

  • sudo usermod -d /mnt/shared_home user2

Now, reboot both Linux systems to ensure automatic mounting of the shared home directory at startup. Log in as each user (user1 and user2) to verify that their home directories are located within the shared LV /mnt/shared_home.

That’s it! Now you have two Linux systems sharing the same home directory using LVM and correct permissions.
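A couple of quick checks (standard commands, not part of the procedure above) to confirm the permissions landed as intended:

bash

id user1                    # sharedgroup should appear in the groups list
ls -ld /mnt/shared_home     # expect something like: drwxrws--- root sharedgroup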

2 Likes

I get it now. You are having a separate user in each Linux distro… that avoids clashing dot files. Sorry, I can sometimes be as dense as a block of lead.

I want the same user in each distro… therefore I can't share home directories, because the dot files will clash. You can't have 2 Linuxes writing in the same dotfile space.

Do you know what happens with LVM when you run a disk backup with e.g. Clonezilla?
I suspect Clonezilla will image a physical disk, not a logical volume?
2 Likes

Yes, I do know what happens when I run out of space: when the data LV filled up, a frightening red screen warning came up. So what I did was put a new drive in and pool it into my data VG. I do not know about Clonezilla; never used it. From what I read, all disks have to be unmounted.

Option A: Expanding the Existing Root Filesystem

These commands grow the root (/) filesystem onto a newly added disk.

| Command | Instruction |
| --- | --- |
| sudo fdisk -l | List all disks to identify your new drive (e.g., /dev/sdb). |
| sudo fdisk /dev/sdb | Start partitioning the new disk (follow prompts: n, p, accept defaults, t, 8e, w). |
| sudo partprobe | Inform the operating system about the new partition table. |
| sudo pvcreate /dev/sdb1 | Initialize the new partition as a Physical Volume (PV) for LVM. |
| sudo vgs | Identify the name of your Volume Group (e.g., centos, vg_root). |
| sudo vgextend centos /dev/sdb1 | Add the new PV to the existing Volume Group (replace centos with your VG name). |
| sudo lvs | Identify the Logical Volume path for your root filesystem (e.g., /dev/centos/root). |
| sudo lvextend -l +100%FREE /dev/centos/root | Extend the root Logical Volume to use all available space. |
| sudo resize2fs /dev/centos/root | Resize the ext4 filesystem to match the new LV size (use xfs_growfs for XFS). |
| df -h / | Verify the root filesystem size has increased. |

Option B: Creating a New, Separate Logical Volume

These commands create a brand new LVM logical volume on a new disk, intended to be a separate mount point (e.g., /data).

| Command | Instruction |
| --- | --- |
| sudo wipefs -a /dev/sdb | Clear any existing data/signatures from the entire disk. |
| sudo pvcreate /dev/sdb | Initialize the entire disk as a Physical Volume (PV). |
| sudo vgextend vg_name /dev/sdb | Add the new PV to your chosen Volume Group (replace vg_name). |
| sudo lvcreate -n lv_data -l 80%VG vg_name | Create a new Logical Volume (lv_data) using 80% of the VG's size (percentages need the lowercase -l flag). |
| sudo mkfs.ext4 /dev/mapper/vg_name-lv_data | Format the new LV with the ext4 filesystem. |
| sudo mkdir -p /data | Create the directory where the new volume will be mounted. |
| echo "/dev/mapper/vg_name-lv_data /data ext4 defaults 0 0" \| sudo tee -a /etc/fstab | Append the mount entry to /etc/fstab for automatic mounting. |
| sudo mount -a | Mount all entries listed in fstab (mounts the new drive immediately). |
| df -h /data | Verify the new volume is mounted and visible. |
3 Likes

Yes, you use it from a flash drive so all disks are unmounted.
I think it disables/ignores LVM and images physical disks… so it would recover a whole disk, or partition, ignoring LVM. That would be OK… LVM is just metadata written on the member disks that tells your Linux how to 'see' the volumes.

What backup do you use? Check how it interacts with LVM.
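Whatever backup tool you settle on, it is cheap insurance to archive the LVM metadata itself. A small sketch using LVM's built-in tools (standard commands, not something mentioned above):

bash

sudo vgcfgbackup                  # archive every VG's metadata under /etc/lvm/backup/
sudo vgcfgrestore -l vg_home      # list the restorable metadata archives for vg_home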

2 Likes

Timeshift, a live drive, and the built-in backup in Mint.

3 Likes

I have never used cloud backup.
I rsync my data directory to a second copy every night… a bit like RAID but manually driven. About once a month I image all 3 internal disks with Clonezilla to a USB backup drive. That is all… I have no Timeshift equivalent… if an OS dies, I just reinstall it. I protect my personal data, but OSes are expendable.

Question:
If I was using LVM, and then I used a new Linux on a separate disk that did not have LVM configured, what would I see on all my disks… would I see conventional partitions? Would the disks controlled by LVM be usable?
And if I did that, wrote things to the disks, and then went back to a Linux with LVM, would it see the things I wrote without LVM?
I am really asking about lock-in. Is LVM a lock-in? If it is, how do you get on with a live flash drive that does not have LVM configured?

My understanding is that when you use LVM the volume info is marked on the disk, not in the OS. So it is a bit like a partition table. What is in the OS is the ability to read the LVM info from the disk and act on it… so if I use an OS that does not have LVM ability (e.g. FreeBSD), what does it see? I think it would see an unformatted disk?
3 Likes

This table captures the difference perfectly: Horse vs Tractor Theory :vulcan_salute:

| Feature | Horse and Plow (Traditional Partitions) | Tractor and Modern Equipment (LVM) |
| --- | --- | --- |
| Setup Speed | Manual, requires skill, takes time to lay straight lines. | Automated, efficient, gets the job done quickly. |
| Flexibility | Static, hard to change the field layout once started. | Dynamic, adaptable, easy to add more space/fields. |
| Compatibility | Works on any terrain/farm (universal compatibility). | Requires a modern setup (Linux-specific technology). |
| Power/Efficiency | Gets the job done, but takes significant manual effort. | More powerful tools for management and scale. |

Just like modern farming maximizes output and efficiency with machinery, LVM maximizes storage flexibility and minimizes management headaches compared to the old, static methods. Copyrighted by me :copyright:

4 Likes

That is fine. What I was trying to do was to understand how LVM works.

2 Likes

Logical Volume Management (LVM) in Linux serves a very similar purpose to Dynamic Disks in the Windows world (or features like Storage Spaces or RAID controllers), providing flexible storage management beyond basic partitions.

Key Similarities: LVM vs. Dynamic Disks

Both technologies share core capabilities that differentiate them from traditional, static partitioning:

| Feature | LVM (Linux) | Dynamic Disks (Windows) |
| --- | --- | --- |
| Combine Disks | Combine multiple Physical Volumes (PVs) into a Volume Group (VG). | Span volumes across multiple physical disks. |
| Resize Volumes | Logical Volumes (LVs) can be resized (grown or shrunk) online. | Volumes can be extended or shrunk dynamically. |
| Flexibility | Manage space abstractly within VGs; space doesn't need to be contiguous on the physical drive. | Provide volume management independent of underlying disk layouts. |

Summary

If you are familiar with Windows Dynamic Disks, you can think of the LVM components like this:

  • A Physical Volume (PV) is like a physical disk or partition used by the dynamic disk system.

  • A Volume Group (VG) is the shared pool of storage you draw from (like the total pool of space available to all dynamic disks).

  • A Logical Volume (LV) is the actual partition you format and use, which can span multiple physical disks seamlessly (like a spanned or striped dynamic volume).
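To see those three layers side by side on a live system, the matching LVM report commands are:

bash

sudo pvs   # Physical Volumes: the disks/partitions LVM owns
sudo vgs   # Volume Groups: the pooled capacity
sudo lvs   # Logical Volumes: the "partitions" you actually format and mount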

Providing your disk is /dev/sdb, I just completed this to expand my root by 2 TB in less than 2 minutes.

Here are the specific commands required to expand a Logical Volume by adding a new physical disk to the Volume Group.

This sequence of commands allows us to seamlessly increase the size of an existing filesystem without needing to unmount it or reboot the system.

Prerequisites

  • The new physical disk is installed and recognized by the OS (e.g., /dev/sdb).

  • You know the name of your existing Volume Group (e.g., vg_data).

  • You know the name of your existing Logical Volume (e.g., general_data_lv).

Step-by-Step Commands

Execute the following commands in sequence in your terminal:

1. Initialize the new physical disk as an LVM Physical Volume (PV)

This prepares the disk for LVM use. Replace /dev/sdb with the device name of your new disk.

bash

sudo pvcreate /dev/sdb

2. Add the new Physical Volume to your Volume Group (VG)

This merges the capacity of the new disk into the storage pool of your existing Volume Group. Replace vg_data with the actual name of your Volume Group.

bash

sudo vgextend vg_data /dev/sdb

3. Extend the Logical Volume (LV)

This command extends the Logical Volume to consume all the newly added free space now available in the Volume Group. Replace vg_data and general_data_lv with your actual VG and LV names.

bash

sudo lvextend -l +100%FREE /dev/vg_data/general_data_lv

4. Resize the Filesystem

The final step resizes the filesystem inside the logical volume so the operating system can use the additional space. This command can often be run while the filesystem is mounted.

  • For ext4 filesystems:

    bash

    sudo resize2fs /dev/vg_data/general_data_lv
    
  • For XFS filesystems (Note: XFS requires specifying the mount point instead of the device path):

    bash

    sudo xfs_growfs /mount/point  # e.g., sudo xfs_growfs /mnt/data
    

Verification

Once finished, verify the new, expanded size using the df command:

bash

df -hT
Filesystem                          Type      Size  Used Avail Use% Mounted on
tmpfs                               tmpfs     1.6G  2.1M  1.6G   1% /run
efivarfs                            efivarfs  128K   77K   47K  63% /sys/firmware/efi/efivars
/dev/mapper/vg_root-lv_root         ext4      2.3T   34G  2.2T   2% /
tmpfs                               tmpfs     7.8G  4.0K  7.8G   1% /dev/shm
tmpfs                               tmpfs     5.0M   16K  5.0M   1% /run/lock
/dev/nvme0n1p1                      vfat      500M  6.2M  493M   2% /boot/efi
/dev/mapper/vg_data-general_data_lv ext4      1.4T   20G  1.3T   2% /mnt/data
/dev/mapper/vg_home-lv_home         ext4      2.3T   25G  2.1T   2% /home
tmpfs                               tmpfs     1.6G  208K  1.6G   1% /run/user/1000
/dev/sdf1                           fuseblk   3.7T  1.6T  2.1T  43% /media//4tb
/dev/sdg1                           vfat       30G  2.9G   27G  10% /media//LINUX MINT

You should see the total size of your mounted filesystem reflecting the added disk capacity.

4 Likes