Hey again! I’ve made progress on my NAS project and I’ve chosen to go for a DIY NAS. I can’t wait for the parts to arrive!

Now I’m struggling a bit to choose an OS. I’m starting with 2x10 TB HDDs + a 1 TB NVMe SSD. I plan to use one HDD for parity and to add more disks later.

I plan to use this server purely as a NAS because I will be getting a second, more powerful server some time next year. But in the meantime, this NAS is a big upgrade over my RPi 4, so I will run some containers or VMs on it.

I don’t want to go with TrueNAS because I don’t want to use ZFS (my RAM is limited and I’m not sure I can add drives of different sizes). I’ve read that Btrfs is the second-best option for a NAS, so I may use that.

Unraid seemed like the perfect fit. But the more I read about it, the more I wonder if I shouldn’t switch to Proxmox.

What I like about Unraid is the ability to add a disk without worrying about its size. I don’t care much about the applications Unraid provides, and since docker-compose is not fully supported, I’m afraid I won’t be able to do things I could have done easily with a docker-compose.yml. I also like that it’s easy to share a folder. What I don’t like about Unraid is the cache system and the mover. I understand why the system works this way, but I’m not a fan.

I’ve asked myself whether I actually need real-time parity for all my data and whether I should put everything in the array.

The thing is, for some of my data I don’t care about parity. For instance, I’m fine with only backing up my application data and having parity on the backup. For my TV shows I don’t care about parity or backups, while I want both for my photos.

After some more research, I found MergerFS and SnapRAID. They feel more flexible and seem to fix the cache/mover issue I have with Unraid, although I’m not sure whether SnapRAID can run with only 2 disks.
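If it does work with just two disks, I imagine the setup would look roughly like this (completely untested, and every mount point and path below is a placeholder I made up):

```
# Example only: one data disk + one parity disk, pooled with mergerfs.
# All mount points and paths here are placeholders.

# /etc/fstab line for the pool (more /mnt/disk* drives can be added later
# and they simply join the pool):
#   /mnt/disk*  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=20G  0 0

# /etc/snapraid.conf
cat <<'EOF' | sudo tee /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
exclude *.tmp
EOF

# Parity is computed on demand rather than in real time:
sudo snapraid sync    # run after data changes, e.g. nightly from cron
sudo snapraid scrub   # periodically verify parity
```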

If I go with Proxmox, I think I would use OpenMediaVault to set up shares.

Is anyone using something like this? What are your recommendations?

Thanks!

  • CouncilOfFriends@slrpnk.net · 11 months ago (edited)

    One note which may not apply to you: I installed Proxmox to boot from two 256 GB SSDs as a basic RAID 1 mirror and keep only the bare minimum of data in VM storage to reduce the size of backups. Backup retention on the boot drives is limited, because a cron job on the VM copies backups to the MergerFS pool for longer-term storage.
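    For anyone wanting to copy the idea, a cron entry along these lines would do it (example paths rather than my real ones, and adjust retention to taste):

    ```
    # Example /etc/cron.d entry: copy backups off the boot mirror to the
    # MergerFS pool nightly, pruning copies older than 30 days.
    0 3 * * * root rsync -a /var/backups/vm/ /mnt/pool/backups/vm/ && find /mnt/pool/backups/vm/ -type f -mtime +30 -delete
    ```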

    Moving Docker’s data directory to the ‘slow’ drives was a helpful decision; this post covers the old/wrong ways to do that and the way that worked (data-root). Docker data doesn’t take up a huge amount of space, but it saved me some work recently when I found my media server had been down for a while and couldn’t remember when it last worked, so I couldn’t identify a working backup. I spun up a fresh Debian image, ran through the steps to reinstall the stack, and pointed it at the same Docker data path. Running the same docker compose command got most services working with the old metadata; for the others I renamed/removed the service’s path and reconfigured.
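    The data-root change itself is just a small edit to /etc/docker/daemon.json; something along these lines (example path, back up /var/lib/docker first, and merge into daemon.json if one already exists):

    ```
    # Rough shape of the data-root move (example path throughout):
    sudo systemctl stop docker
    echo '{ "data-root": "/mnt/pool/docker" }' | sudo tee /etc/docker/daemon.json
    sudo rsync -aP /var/lib/docker/ /mnt/pool/docker/
    sudo systemctl start docker
    docker info --format '{{ .DockerRootDir }}'   # should print /mnt/pool/docker
    ```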

    My docker-compose file and its revisions are the extent of the backups I need for a piracy box, as my internet is quick enough to recreate my library within a couple of days if needed.