How to mount an ext4 filesystem in NetBSD?

This is a ‘proof of concept’ investigation. I need to test if my approach will function correctly.

Trying to access a Linux ext4 filesystem from NetBSD

What I would like to do is mount my 1.3 TB Linux (ext4) data partition in NetBSD.
I can see that to get an ext4 mount one has to have a Linux environment.
There are two ways to get a Linux environment inside NetBSD:

  • run Linux in a VM. Give the VM hardware access, so it can mount the data partition directly, then export the filesystem to NetBSD using NFS.

  • run a Linux environment (minus a kernel) in a container. This is more problematic: containers share the host’s kernel, and a Linux container will not work with a BSD kernel. One may be able to make it function by installing Linux compatibility packages in NetBSD, which provide Linux system calls, but that may not be enough.

Given that the ‘Linux in a VM’ option seems most feasible, we shall try that first.

Setting up a VM in NetBSD

There seem to be two options.

  • NetBSD has its own native hypervisor called NVMM (NetBSD Virtual Machine Monitor), which qemu can use as an accelerator. One can use qemu with or without NVMM; it will be slower without a hypervisor.
  • NetBSD also supports Xen (a Type 1 hypervisor). Xen requires a special kernel.

Given that I have some knowledge of qemu we shall try that option first.

Qemu in NetBSD

A barebones qemu virtual machine can be started as follows:

  1. Create a file that is to be the virtual machine’s virtual disk (qemu-img needs a size argument; 20G here is an example):
qemu-img create -f qcow2 antixvm.qcow2 20G
  2. Put an OS into the VM from an .iso file:
qemu-system-x86_64 -cpu qemu64,-apic -m 512 -nic user -display sdl,gl=on -boot d -hda antixvm.qcow2 -cdrom /home/nevj/Downloads/antiX-23.2_x64-core.iso

This will boot a live Antix from which one can install using the cli-install command. Use the MBR option for grub. When the install completes, do not reboot; use poweroff, as this will properly shut down the VM.
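The two steps above could be wrapped in a small script. This is a hedged sketch, not part of the original procedure: the disk size (20G) and paths are example values, and for safety the script only prints the qemu command rather than executing it.

```shell
#!/bin/sh
# Sketch: wrap the disk-creation and install-boot steps in one script.
# Paths, disk size, and memory are example values.
DISK=antixvm.qcow2
ISO=/home/nevj/Downloads/antiX-23.2_x64-core.iso
SIZE=20G
MEM=512

# Create the virtual disk only if it does not already exist
# (qemu-img create requires a size argument).
if [ ! -f "$DISK" ]; then
    echo "qemu-img create -f qcow2 $DISK $SIZE"
fi

# Boot from the install ISO (-boot d); printed here for illustration
# instead of being run.
CMD="qemu-system-x86_64 -cpu qemu64,-apic -m $MEM -nic user \
-display sdl,gl=on -boot d -hda $DISK -cdrom $ISO"
echo "$CMD"
```

To actually run the install, replace the final echo with the command itself.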

This VM can now be booted from disk as follows:

qemu-system-x86_64 -cpu qemu64,-apic -m 512 -nic user -display sdl,gl=on -boot c -hda antixvm.qcow2

The -boot c tells qemu to try to boot first from disk.
The -cpu qemu64,-apic tells qemu to use a particular CPU model. Apart from qemu64, kvm64 works, and the model that matches my machine (SandyBridge) works, but -cpu host does not work for me in NetBSD… it says it needs KVM, which is not available. It may work with NetBSD’s own hypervisor (NVMM), if I can figure out how to run NVMM.

I chose the Antix core iso because it is small and CLI-only. I originally tried Alpine, but had difficulties with its default ramdisk configuration.

The -apic flag disables the APIC feature of the emulated CPU (the equivalent of noapic). It is necessary because of a bug in the NetBSD build of qemu. I don’t understand this ‘fix’; NetBSD itself supports APIC.

With that in place we can look into making a disk or a partition available to a qemu guest.

Passing a physical disk or partition to a qemu guest

This has to be done on the qemu command line; it cannot be added dynamically to a running guest.

I have an ext4 partition called common which contains data. It is on the partition known to NetBSD as /dev/dk7. To pass that partition to a qemu guest, I modify the qemu command as follows:

qemu-system-x86_64 -cpu qemu64,-apic -m 512 -nic user -display sdl,gl=on -boot c -hda antixvm.qcow2 -drive if=none,id=drive0,format=raw,file=/dev/rdk7 -device virtio-blk-pci,drive=drive0

The above command has to be issued as root.
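Before launching, it is worth checking that the device node exists and is not already mounted on the host. A hedged sketch (the device path /dev/rdk7 is the one used in this post; adjust to suit):

```shell
#!/bin/sh
# Sketch: sanity-check the raw device before handing it to qemu.
# /dev/rdk7 is the partition from this post; adjust for your layout.
DEV=${DEV:-/dev/rdk7}
STATE=absent
# NetBSD raw devices are character specials; accept block devices too.
if [ -c "$DEV" ] || [ -b "$DEV" ]; then
    STATE=present
fi
echo "$DEV is $STATE"

# Warn if the host already has the partition mounted: writing to a
# mounted filesystem through the raw device risks corruption.
if mount | grep -q "$DEV"; then
    echo "warning: $DEV appears to be mounted on the host" >&2
fi
```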
After booting as above, the guest has a disk called /dev/vda, which is a block special device.

I can mount vda with a loop mount

mount -o loop /dev/vda /mnt

and I can go to /mnt and see its content

cd /mnt
ls
.... lots of directories

And I can read files.
I have successfully passed my /dev/dk7 partition from the NetBSD host to the Antix guest, and the guest is able to read its ext4 files, even though the host can’t read ext4 files.

What I need to do now is export this mounted filesystem back to the host.

Setting up NFS to export a filesystem from guest to host

This will require networking between host and guest.
First of all I have to make the Antix guest an NFS server

apt-get update
apt-get upgrade
apt-get install nfs-kernel-server

Then I have to export the filesystem, now mounted at /mnt/common.
Edit the file /etc/exports and add the line:

/mnt/common *(rw,sync,no_subtree_check)

[ Note: I need to set the client specification to ‘*’ in the above. Using a specific IP address will not work for a VM. ]
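Adding the line by hand works; for repeatability, something like the following sketch appends it only if it is not already present. The temp file path is an illustration only; on the VM the target would be /etc/exports, followed by exportfs -ra.

```shell
#!/bin/sh
# Sketch: add the export line idempotently.
# EXPORTS is a temp file here; on the VM it would be /etc/exports.
EXPORTS=${EXPORTS:-/tmp/exports.demo}
LINE='/mnt/common *(rw,sync,no_subtree_check)'
touch "$EXPORTS"
# Append only if the exact line is missing; run twice to show it
# does not duplicate.
grep -qxF "$LINE" "$EXPORTS" || printf '%s\n' "$LINE" >> "$EXPORTS"
grep -qxF "$LINE" "$EXPORTS" || printf '%s\n' "$LINE" >> "$EXPORTS"
echo "export lines: $(grep -cxF "$LINE" "$EXPORTS")"
```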

Then I have to set up NFS client services in NetBSD. NetBSD does not need a package for this; you simply add the following lines to /etc/rc.conf to enable the required services:

# NFS client
rpcbind=yes
nfs_client=yes
lockd=yes
statd=yes

The order is important.
Then either reboot, or start the services by hand as follows:

# service rpcbind start
Starting rpcbind.
# service nfslocking start
Starting statd.
Starting lockd.
# 

Then create a mount point

mkdir /mnt/common

and do the NFS mount:

mount -t nfs antixvm:/mnt/common /mnt/common

[ Note: This does not work until I fix networking. The NetBSD host does not know what the ‘antixvm’ hostname is, because antixvm does not have an IP address known to the host. ]

Given the network fix, I can now cd /mnt/common, see my data directories, and operate on files.

Networking a qemu guest to a NetBSD host

Now, to fix the networking.
For NFS to operate with the qemu guest as an NFS server and the NetBSD host as client, I only need a very simple network link between host and guest. Therefore I will use a tap interface, rather than user-mode or bridged networking.

  1. In the NetBSD host, create a tap interface:
ifconfig tap0 create
ifconfig tap0 descr "NetBSDVM" up
ifconfig tap0 10.0.2.1 netmask 255.255.255.0

Then check with ifconfig that it has worked: the tap0 interface should exist, be up, and have an IP address.
That is like plugging another NIC into the host and putting it on the 10.0.2.0/24 network.

  2. Start the Antix guest VM using the following qemu command:
qemu-system-x86_64 -cpu qemu64,-apic -m 512 -display sdl,gl=on -boot c -hda antixvm.qcow2 -drive if=none,id=drive0,format=raw,file=/dev/rdk7 -device virtio-blk-pci,drive=drive0 -device e1000,netdev=net0 -netdev tap,id=net0,ifname=tap0,script=no,downscript=no

There are a lot of command line options. The critical ones for networking are -netdev and -device; they are what link the VM to the tap0 interface on the host. It is analogous to running an ethernet cable from the virtual machine to the tap0 port on the host.

  3. After the VM starts, log in and check its networking:
ip a

This shows interface eth0 is UP but has no IP address. We need to give it one:

ip addr add 10.0.2.15/24 dev eth0

Now ip a shows eth0 is UP and has IP 10.0.2.15.
But:

ip r
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15

it only has the 10.0.2.0/24 network defined… no default route. That is sufficient for my purpose.

ping 10.0.2.1 works from the guest
ping 10.0.2.15 works from the host

but I can’t ping anything else, not even the host’s IP on the modem port.

We can improve that by adding a default route:

ip route add default via 10.0.2.1

i.e. the default gateway is the host.
Now I can also do

ping 192.168.0.57 from the guest (that is my host’s IP on the modem port)

I also can’t ping google.com.au… there is no DNS server accessible to the guest. That does not matter.

Final issues

Given the network setup above, the guest exports my common directory using NFS, and the host can mount it with

mount -t nfs 10.0.2.15:/mnt/common /mnt/common

and I can do

cd /mnt/common
ls

and I see my files.
I can read files as either root or nevj, but I can only write files as nevj, not as root.
I assume that is NFS root squashing: by default the Linux NFS server maps requests from root to an unprivileged user.

Where to now

I have shown that a VM NFS server will work. I now need to automate the steps.
Are there any suggestions for automating this?

Acknowledgement

I am grateful for assistance from @JoelA with qemu and tap interfaces.


Automating my NetBSD ext4 mount

This proved to be easier than I expected.

  1. Configure the Antix VM.
    I need the VM to have an NFS server installed and configured, as previously described.
Add the following to /etc/exports:
/mnt/common *(rw,sync,no_subtree_check)
and do:
exportfs -a

I also need the VM to loop-mount the physical drive passed through to qemu:

mount -o loop /dev/vda /mnt/common

I also need the VM to give eth0 a static IP. That can be done in /etc/network/interfaces

auto eth0
iface eth0 inet static
        address 10.0.2.15
        netmask 255.255.255.0
        gateway 10.0.2.1

All of the above only needs to be done once.
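The one-time guest configuration can also be scripted. This sketch just writes out the interfaces stanza shown above; the output path is a temp file for illustration, while on the VM the target would be /etc/network/interfaces.

```shell
#!/bin/sh
# Sketch: generate the static-IP stanza for the guest's eth0.
# OUT is a temp file here; on the VM it would be /etc/network/interfaces.
OUT=${OUT:-/tmp/interfaces.demo}
cat > "$OUT" <<'EOF'
auto eth0
iface eth0 inet static
        address 10.0.2.15
        netmask 255.255.255.0
        gateway 10.0.2.1
EOF
echo "wrote $(wc -l < "$OUT") lines to $OUT"
```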

  2. In NetBSD there is no pre-configuration. I can automate the whole process of setting up the tap interface and starting the Antix VM with qemu by writing a small startup script which is executed when NetBSD boots.
    To do that I put the following entry in /etc/rc.local:
# This file is (nearly) the last thing invoked by /etc/rc during a
# normal boot, via /etc/rc.d/local.
#
# It is intended to be edited locally to add site-specific boot-time
# actions, such as starting locally installed daemons.
#
# An alternative option is to create site-specific /etc/rc.d scripts.
#

echo -n 'Starting local daemons:'

# Add your local daemons here, eg:
#
#if [ -x /path/to/daemon ]; then
#	/path/to/daemon args
#fi

if [ -x /usr/local/bin/antixvm.sh ]; then
          /usr/local/bin/antixvm.sh > /dev/null
fi


echo '.'

Then I put the script antixvm.sh in /usr/local/bin.
Here is the script

#!/bin/sh

ifconfig tap0 create
ifconfig tap0 descr "AntixVM" up
ifconfig tap0 10.0.2.1 netmask 255.255.255.0

/usr/pkg/bin/qemu-system-x86_64 -cpu qemu64,-apic -m 512 -display none -boot c -hda /usr/local/qemu/antixvm.qcow2 -drive if=none,id=drive0,format=raw,file=/dev/rdk7 -device virtio-blk-pci,drive=drive0 -device e1000,netdev=net0 -netdev tap,id=net0,ifname=tap0,script=no,downscript=no &

Note that it sets -display none and uses ‘&’ to run qemu as a background process.
I also moved my antixvm.qcow2 file to /usr/local/qemu.
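Since the NFS mount will fail while the VM is still booting, a small wait loop on the host can help. This is a sketch only: the IP is the one used above, the retry count is arbitrary, and the mount command is left as a comment.

```shell
#!/bin/sh
# Sketch: poll the guest before attempting the NFS mount, so the mount
# is not tried while the VM is still booting. Counts are arbitrary.
GUEST=${GUEST:-10.0.2.15}
TRIES=${TRIES:-3}
STATE=down
i=0
while [ "$i" -lt "$TRIES" ]; do
    # One ping with a 1-second deadline; quiet on success or failure.
    if ping -c 1 -w 1 "$GUEST" >/dev/null 2>&1; then
        STATE=up
        break
    fi
    i=$((i + 1))
done
echo "guest is $STATE"
# On the real host, follow with:
#   mount -t nfs $GUEST:/mnt/common /mnt/common
```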

  3. Then reboot NetBSD, and all I see is ‘Starting local daemons:’ in the boot sequence.
    When I look at the interfaces:
trinity: {4} ifconfig
...
tap0: flags=0x8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ec_capabilities=0x5<VLAN_MTU,JUMBO_MTU>
	ec_enabled=0
	description: "AntixVM"
	address: f2:0b:a4:a9:06:ff
	status: active
	inet6 fe80::f00b:a4ff:fea9:6ff%tap0/64 flags 0 scopeid 0x5
	inet 10.0.2.1/24 broadcast 10.0.2.255 flags 0

The tap0 interface is active and has an IP address.
I can ping the VM,

trinity: {5} ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15): 56 data bytes
64 bytes from 10.0.2.15: icmp_seq=0 ttl=64 time=0.397531 ms

which means the Antix VM is running and configured.
I can do the NFS mount

# mount -t nfs 10.0.2.15:/mnt/common /mnt/common
# df
System              512-blocks         Used        Avail %Cap Mounted on
/dev/dk12                607675824     47963728    529328312   8% /
/dev/dk16                  1046488         2136      1044352   0% /mnt/EFI/boot
kernfs                           2            2            0 100% /kern
ptyfs                            2            2            0 100% /dev/pts
procfs                           8            8            0 100% /proc
tmpfs                     33540464            0     33540464   0% /var/shm
/dev/dk18                 40224680      4451608     33725176  11% /mnt/share
10.0.2.15:/mnt/common   4029495000    210612288   3614050048   5% /mnt/common

and there is my ‘common’ ext4 partition in the last line of the df output.

I chose not to automate the mount by putting an entry in /etc/fstab, but that option is available.
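For reference, the /etc/fstab route on the NetBSD host would look roughly like this (an untested sketch; the noauto option is my suggestion to avoid a boot-time hang if the VM is not up yet, so the mount would still be issued by hand or from a script):

```
10.0.2.15:/mnt/common  /mnt/common  nfs  rw,noauto  0  0
```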

Concluding Remarks

Now that I have it automated, it is really quite efficient and goes unnoticed. It does not cause a boot delay. Memory usage is 1.41 GB with the VM running and 0.75 GB without, so the Antix VM uses about 0.7 GB. I could probably prune it to use less, but it is not worth the effort for me.

One word of WARNING. The -drive option in qemu which accesses my disk partition does a raw physical open of the entire disk, then picks out my nominated partition. One would not want that disk or partition to be active when accessed as a raw device. Therefore one should not use this procedure if the desired partition is on the same disk as the NetBSD root partition, because the NetBSD root partition will be active. Fortunately, in my case, my ‘common’ data partition was on a separate disk.

It is rather clumsy, running an entire VM just to get a mount of an ext4 partition. It works because NFS has its own pseudo-filesystem and converts everything to that before exporting, so that what NetBSD gets to work with is the NFS pseudo-filesystem, and it can cope with that. I knew, before I started this, that NFS would work between two separate computers with different, incompatible filesystems… I just needed to develop it for one hard install and a VM.

If anyone knows a better method, please let us know.


I do it like this (from Linux):
su
modprobe ufs
mount -t ufs -o ro,ufstype=ufs2 /dev/sdb2 /mnt

This is read-only. I don’t know an easy way to mount ufs read-write.


And from FreeBSD (to mount a ext4):
su
mount -t ext2fs /dev/ada0p3 /mnt

And it is read-write


I would be wary of writing with that mount. You could damage your ext4 filesystem.
ext2fs will only work on ext4 filesystems which are smaller than the ext2 size limit.

In general I would not recommend writing on a filesystem with anything other than its own proper driver.


Thanks for the heads up! I usually don’t need to transfer files but will use a SD card/USB stick from now on. It’s dual boot (actually triple) machine.

Your findings on the issue are interesting, but it seems a bit overkill for some small files you could upload to the cloud or put on a memory card. Of course for bigger files it’s not so convenient.


Oh yes, it is a dreadful workaround.
I want it because I need proper access to my common data directory.
To just exchange a few files, use a USB stick, as you say, or make a small ext2 partition and mount that on both Linux and BSD. Ext2 is safe… both Linux and BSD have proper drivers for ext2.

With ext4 files on another computer, one can simply use NFS. It is only files on the same computer as BSD that are an issue.

Don’t you have Btrfs? No idea what happens with that in BSD.


I had it on my laptop but switched back to ext4. Simpler, and I make file backups with rsync anyway, so no need for system snapshots.
