The next step in the process of setting up a home NAS on FreeBSD is to add an NFS share.
We set up Samba in the previous part – now let's add NFS shares to it.
My idea is for the Samba share to be used for various media resources requiring access from phones and TVs, while NFS will be exclusively for Linux hosts – two laptops in different networks (home and office) that will perform their backups to this partition using rsync, rclone, or restic.
Previous posts in this series:
- FreeBSD: Home NAS, part 1 – setting up ZFS mirror (RAID1) (test on a virtual machine)
- FreeBSD: Home NAS, part 2 – introduction to Packet Filter (PF) firewall
- FreeBSD: Home NAS, part 3 – WireGuard VPN, Linux peer, and routing
- FreeBSD: Home NAS, part 4 – local DNS with Unbound
- FreeBSD: Home NAS, part 5 – ZFS pool, datasets, snapshots, and monitoring
- FreeBSD: Home NAS, part 6 – Samba server and client connection
- (current) FreeBSD: Home NAS, part 7 – NFSv4 and connecting to Linux
- (to be continued)
NFSv3 vs NFSv4
Currently, there are two main versions of NFS – v3 and v4.
FreeBSD supports working with both (I configured v3 during testing as well), but obviously, v4 is more relevant and offers several advantages:
- NFSv3 is stateless, while NFSv4 is stateful: v4 maintains the state of clients and sessions, simplifying locks and file access management
- NFSv3 is considered simpler: but in my opinion, NFSv4 is no more difficult to configure, and perhaps even easier due to having fewer components
- Authentication:
- NFSv3 authentication – client IP + UID/GID (who connected and on behalf of which user)
- NFSv4 – extended access model, ACL support (and a foundation for Kerberos if needed)
See NFSv3 and NFSv4: What’s the difference?
Creating ZFS datasets
To allow separate snapshot settings and ZFS quotas – we will create the datasets for NFS hierarchically:
- nas/: root dataset of the ZFS pool
- nas/nfs/: root dataset for all things NFS
- nas/nfs/backups/: dataset for backups from other machines
Inside nas/nfs/backups/, there will be separate directories named after the hosts for their backups – “setevoy-home“, “setevoy-work“, “setevoy-rtfm“, etc.
Later, if needed, a new NFS share can be added to nas/nfs/.
Add a new dataset:
root@setevoy-nas:/home/setevoy # zfs create nas/nfs
Create the second one inside it, for backups:
root@setevoy-nas:/home/setevoy # zfs create nas/nfs/backups
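The per-host backup directories mentioned above can then simply be created inside this dataset, and a ZFS quota can be put on it later if needed – a quick sketch (the 200G value is just an illustration):
root@setevoy-nas:/home/setevoy # mkdir /nas/nfs/backups/setevoy-home /nas/nfs/backups/setevoy-work
root@setevoy-nas:/home/setevoy # zfs set quota=200G nas/nfs/backups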
The NFS daemon is available in the system out of the box; we only need to enable it at startup:
root@setevoy-nas:/home/setevoy # sysrc nfs_server_enable="YES"
root@setevoy-nas:/home/setevoy # sysrc nfsv4_server_enable="YES"
root@setevoy-nas:/home/setevoy # sysrc nfsv4_server_only="YES"
root@setevoy-nas:/home/setevoy # sysrc nfsuserd_enable="YES"
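To double-check what sysrc actually wrote to /etc/rc.conf, something like this should show the four new entries:
root@setevoy-nas:/home/setevoy # grep nfs /etc/rc.conf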
NFSv4 works with names rather than “raw” UID/GID. To ensure IDs map correctly to names, add the following to /etc/sysctl.conf and activate the changes now:
root@setevoy-nas:/home/setevoy # sysctl vfs.nfs.enable_uidtostring=1
vfs.nfs.enable_uidtostring: 0 -> 1
root@setevoy-nas:/home/setevoy # sysctl vfs.nfsd.enable_stringtouid=1
vfs.nfsd.enable_stringtouid: 0 -> 1
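The sysctl command changes the values only for the running system; to keep them across reboots, the same two lines go into /etc/sysctl.conf:
vfs.nfs.enable_uidtostring=1
vfs.nfsd.enable_stringtouid=1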
NFS, ZFS, and sharenfs
ZFS supports configuring NFS for datasets via the sharenfs dataset property (and for Samba via sharesmb), though it has its limitations.
For NFSv4, it is mandatory to specify a root directory where the shares themselves will be located; see NFS Version 4 Protocol.
Add it to the /etc/exports file:
V4: /nas/nfs
However, access to this root directory won’t exist until we explicitly add it, for example:
# zfs set sharenfs="-network 192.168.0.0/24 -ro" nas/nfs
Now we can share the nas/nfs/backups dataset via sharenfs, using -network to specify allowed addresses:
root@setevoy-nas:/home/setevoy # zfs set sharenfs="-network 192.168.0.0/24" nas/nfs/backups
Check it:
root@setevoy-nas:/home/setevoy # zfs get sharenfs nas/nfs/backups
NAME             PROPERTY  VALUE                    SOURCE
nas/nfs/backups  sharenfs  -network 192.168.0.0/24  local
In fact, zfs set sharenfs simply edits the /etc/zfs/exports file, which is then read by mountd:
root@setevoy-nas:~ # cat /etc/zfs/exports
# !!! DO NOT EDIT THIS FILE MANUALLY !!!
/nas/nfs/backups	-network 192.168.0.0/24
Start nfsd:
root@setevoy-nas:/home/setevoy # service nfsd start
nfsd will automatically start mountd, which is actually responsible for NFS sharing and reads configuration from /etc/exports and /etc/zfs/exports:
root@setevoy-nas:~ # ps aux | grep mount
root  9475  0.0  0.0  13888  1076  -  Is  05:46  0:00.00  /usr/sbin/mountd -r -S -R /etc/exports /etc/zfs/exports
Add a rule to the pf firewall (if used):
...
pass in on em0 proto { tcp udp } from { 192.168.0.0/24, 192.168.100.0/24, 10.8.0.0/24 } to (em0) port 2049
...
Check the syntax and apply pf changes:
root@setevoy-nas:~ # pfctl -nvf /etc/pf.conf && service pf reload
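A quick way to confirm that the firewall actually lets NFS traffic through is to probe port 2049 from one of the clients (assuming a netcat variant with -z is installed):
[setevoy@setevoy-work ~] $ nc -zv 192.168.0.2 2049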
Mount on the client – explicitly specifying -t nfs4:
[setevoy@setevoy-work ~] $ sudo mkdir /mnt/test/
[setevoy@setevoy-work ~] $ sudo mount -t nfs4 192.168.0.2:/backups /mnt/test/
In 192.168.0.2:/backups, we specify the /backups directory from the root defined in /etc/exports: since our root is “/nas/nfs“, it will be “/” on the clients, and internal datasets are mounted from this root as /backups.
Check mounts on the client:
[setevoy@setevoy-work ~] $ findmnt /mnt/test/
TARGET     SOURCE                FSTYPE  OPTIONS
/mnt/test  192.168.0.2:/backups  nfs4    rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.4,local_lock=none,addr
Interesting details here:
- nfs4: protocol
- vers=4.2: version
- sec=sys: client authentication (important for /etc/fstab on clients, see below)
- clientaddr=192.168.0.4: the client's address
Similarly, you can check the active connection on the client with nfsstat -m:
[setevoy@setevoy-work ~] $ nfsstat -m
/mnt/test from 192.168.0.2:/backups
Flags: rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.4,local_lock=none,addr=192.168.0.2
On the server, use nfsdumpstate:
root@setevoy-nas:/home/setevoy # nfsdumpstate
Flags OpenOwner Open LockOwner Lock Deleg OldDeleg Clientaddr
      0         0    0         0    0     0        192.168.0.4
With the -l option, you can see active Opens and Locks – but it’s empty now.
Test access – let’s create a file on the server:
root@setevoy-nas:/home/setevoy # touch /nas/nfs/backups/test-server
Check it on the client:
[setevoy@setevoy-work ~] $ ll /mnt/test/
total 1
-rw-r--r-- 1 root root 0 Dec 31 13:12 test-server
(Dec 31 – when else would one set up a home NAS on FreeBSD, right? 😀 )
NFS, users, and “Permission denied”
However, we cannot write anything to the directory from the client right now:
[setevoy@setevoy-work ~] $ touch /mnt/test/test-client touch: cannot touch '/mnt/test/test-client': Permission denied
Because on the server, it was created by root:
root@setevoy-nas:~ # stat /nas/nfs/backups/test-server
4446369902026857636 4 -rw-r--r-- 1 root wheel 0 0 "Dec 31 13:12:36 2025" "Dec 31 13:12:36 2025" "Dec 31 13:12:36 2025" "Dec 31 13:12:36 2025" 131072 1 0x800 /nas/nfs/backups/test-server
Furthermore, there is no access even from the local root on the client:
[setevoy@setevoy-work ~] $ sudo touch /mnt/test/test-client touch: cannot touch '/mnt/test/test-client': Permission denied
By default, NFS performs root_squash, and all operations from clients are executed on the server by the local nobody user.
There are several solutions:
- Create a group like nfsusers on the server, grant it write permissions to the directory (775), and add the local user setevoy to this group
  - The cleanest and safest option (a sketch of this follows right after the list)
- Alternatively, you can set the -maproot=root option – then root on the client will == root on the server (UID 0 for both)
  - But this only applies to file access and only within the NFS root – /nas/nfs
  - An acceptable option for a home NAS
- A slightly safer version – specify -maproot=setevoy and change the owner of /nas/nfs/backups/ on the server – then root operations from the client will be executed on the server as the UID/GID of the setevoy user
- Or just use -mapall=root – then all users on the client will perform operations as the local root
  - Similar to -maproot=root, but also the most dangerous option
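For reference, the first (group-based) option could look roughly like this on the server – nfsusers and setevoy are the names used in this setup, and the client-side user would need a matching UID/GID for sec=sys to line up:
root@setevoy-nas:~ # pw groupadd nfsusers
root@setevoy-nas:~ # pw groupmod nfsusers -m setevoy
root@setevoy-nas:~ # chown root:nfsusers /nas/nfs/backups
root@setevoy-nas:~ # chmod 775 /nas/nfs/backups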
Since this share is for backups performed by root on the clients – we can use maproot=root:
root@setevoy-nas:/home/setevoy # zfs set sharenfs="-network 192.168.0.0/24 -maproot=root" nas/nfs/backups
root@setevoy-nas:/home/setevoy # service nfsd restart
Now, we still don’t have access from a regular user on the client:
[setevoy@setevoy-work ~] $ touch /mnt/test/test-client touch: cannot touch '/mnt/test/test-client': Permission denied
But access is available from the local root, as it gains root permissions on the server:
[setevoy@setevoy-work ~] $ sudo touch /mnt/test/test-client
ZFS sharenfs and multiple -network
I spent some time trying to get this to work 🙁
Help came from the FreeBSD forum; see NFSv4 and share for multiply networks (actually, the FreeBSD community is very welcoming and much less toxic than the Arch Linux forums).
The problem: I access the FreeBSD host from several networks (see FreeBSD: Home NAS, part 3 – WireGuard VPN, Linux peer and routing):
- 192.168.0.0/24: office network
- 192.168.100.0/24: home network
- 10.8.0.0/24: VPN
And obviously, I want to keep NFS access secured at the network level (although in my case I could have lived without it, it’s better to do it right from the start).
The issue is that ZFS ver 2.2.7, currently used in FreeBSD 14.3, does not allow specifying multiple networks in the sharenfs property.
That is, you cannot use something like:
# zfs set sharenfs="-network 192.168.0.0/24 -network 192.168.100.0/24 -network 10.8.0.0/24" nas/nfs/backups
However, in FreeBSD 15.0 and ZFS 2.4.0, the syntax was reportedly expanded, and you can pass a list separated by “;“:
# zfs set sharenfs="-network 192.168.0.0/24 -maproot=root;-network 192.168.100.0/24" nas/nfs/backups
Well, on version 2.2.7 – we just create the share via the /etc/exports file directly instead of using zfs set sharenfs.
Unmount the share on the client:
[setevoy@setevoy-work ~] $ sudo umount /mnt/test
Remove sharenfs on the server:
root@setevoy-nas:~ # zfs set sharenfs=off nas/nfs/backups
Check:
root@setevoy-nas:~ # zfs get sharenfs nas/nfs/backups
NAME             PROPERTY  VALUE  SOURCE
nas/nfs/backups  sharenfs  off    local
Ensure /etc/zfs/exports is empty now:
root@setevoy-nas:~ # cat /etc/zfs/exports
# !!! DO NOT EDIT THIS FILE MANUALLY !!!
root@setevoy-nas:~ #
Next, edit /etc/exports and define the shares here, each entry for a separate network:
V4: /nas/nfs
/nas/nfs/backups -network 192.168.0.0/24 -maproot=root
/nas/nfs/backups -network 192.168.100.0/24 -maproot=root
/nas/nfs/backups -network 10.8.0.0/24 -maproot=root
Restart nfsd and mountd (restart mountd explicitly and manually – otherwise I ran into access issues and “Input/output error” messages):
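Both have standard rc scripts, so something like this should do it:
root@setevoy-nas:~ # service nfsd restart
root@setevoy-nas:~ # service mountd restart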
Check access from the client in the office:
[setevoy@setevoy-work ~] $ sudo mount -t nfs4 192.168.0.2:/backups /mnt/test
[setevoy@setevoy-work ~] $ file /mnt/test/test-client
/mnt/test/test-client: empty
And from the client at home connected via VPN:
[setevoy@setevoy-home ~]$ sudo wg show
interface: wg0
...
peer: xLWA/FgF3LBswHD5Z1uZZMOiCbtSvDaUOOFjH4IF6W8=
  endpoint: 178.***.***.184:51830
  allowed ips: 10.8.0.1/32, 192.168.0.0/24
With address 10.8.0.3:
[setevoy@setevoy-home ~]$ ip a s wg0
44: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.8.0.3/24
...
Install nfs-utils, otherwise you’ll get an “NFS: mount program didn’t pass remote address” error:
[setevoy@setevoy-home ~]$ sudo pacman -S nfs-utils
Mount the share on this client:
[setevoy@setevoy-home ~]$ sudo mount -t nfs4 192.168.0.2:/backups /mnt/test/
Check access:
[setevoy@setevoy-home ~]$ sudo touch /mnt/test/test-vpn-client
Check the file:
[setevoy@setevoy-home ~]$ ls -l /mnt/test/test-vpn-client -rw-r--r-- 1 root root 0 Dec 31 15:16 /mnt/test/test-vpn-client
And on the server, we now see two active connections: one Clientaddr from the VPN network – 10.8.0.3, and one from the office – the work laptop with 192.168.0.4:
root@setevoy-nas:~ # nfsdumpstate
Flags OpenOwner Open LockOwner Lock Deleg OldDeleg Clientaddr
CB    1         0    0         0    0     0        10.8.0.3
      0         0    0         0    0     0        192.168.0.4
Linux, /etc/fstab and systemd-automount
Finally, let’s add automount, as I did for the Samba share earlier.
Unmount the share for now:
[setevoy@setevoy-work ~] $ sudo umount /mnt/test
Create a permanent directory:
[setevoy@setevoy-work ~] $ sudo mkdir -p /nas/nfs/backups/
Edit the /etc/fstab on the clients:
...
192.168.0.2:/backups   /nas/nfs/backups   nfs   sec=sys,_netdev,noauto,x-systemd.automount,nofail,noatime   0 0
Here we explicitly specify sec=sys, which is AUTH_SYS authentication (by UID/GID; see The AUTH_SYS authentication method).
Run sudo systemctl daemon-reload and check the new unit files:
[setevoy@setevoy-work ~] $ ll /run/systemd/generator/ | grep nfs
-rw-r--r-- 1 root root 177 Dec 31 15:29 nas-nfs-backups.automount
-rw-r--r-- 1 root root 274 Dec 31 15:29 nas-nfs-backups.mount
Activate them:
[setevoy@setevoy-work ~] $ sudo systemctl restart remote-fs.target
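Optionally, check the generated automount unit itself (the unit name matches the mount path seen above):
[setevoy@setevoy-work ~] $ systemctl status nas-nfs-backups.automount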
Check access – the share should mount automatically:
[setevoy@setevoy-work ~] $ ll /nas/nfs/backups
total 2
-rw-r--r-- 1 setevoy nfsusers 0 Dec 31 14:18 test-client
-rw-r--r-- 1 root    root     0 Dec 31 13:12 test-server
-rw-r--r-- 1 root    nfsusers 0 Dec 31 15:16 test-vpn-client
Done.
Useful links
- Network File System (NFS): Red Hat documentation; the syntax may differ slightly, but there is plenty of general information.
- Network File System (NFS): FreeBSD Handbook, the canonical FreeBSD documentation.
- mount.nfs4 access denied by server: here I found the fix for the error when sec=sys is missing in the client's /etc/fstab.
- How to Share ZFS Filesystems with NFS: some optimization tips.