Probably not very good. I picked large spinning hard drives because I could get 2TB drives at a good price, and I wanted to set up a RAID5-like system in ZFS and btrfs (lesson learned: btrfs doesn't actually support this correctly), with at least 10TB of usable space with redundancy.
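For the capacity math: a RAID5-like layout (raidz1 in ZFS) gives up one drive's worth of space to parity, so getting 10TB usable out of 2TB drives takes six of them. A rough sketch of that arithmetic (it ignores filesystem overhead and the TB-vs-TiB difference):

```python
def raidz1_usable_tb(drive_count: int, drive_tb: float) -> float:
    """Approximate usable capacity of a raidz1 (RAID5-like) vdev:
    one drive's worth of space goes to parity."""
    if drive_count < 3:
        raise ValueError("raidz1 needs at least 3 drives")
    return (drive_count - 1) * drive_tb

# With 2TB drives, how many do we need to clear 10TB usable?
for n in range(3, 8):
    print(f"{n} x 2TB drives -> ~{raidz1_usable_tb(n, 2.0):.0f}TB usable")
# 6 x 2TB drives -> ~10TB usable (before overhead)
```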
I don't know how much power each of those SATA disks draws, but probably more than an entire Raspberry Pi does.
Likewise, it has a few case fans in it that may be pointless, but I'd rather it never have a heating issue than save a few "whurrr" sounds off in a closet somewhere that nobody cares about.
It's also powering that Nvidia 1060 that I do almost nothing with on the NAS. I don't even bother enabling Jellyfin's GPU transcoding because I prefer to keep my library encoded in H.264 for now; I haven't made the leap to a newer codec because the various smart TVs have varying support, and sometimes my daughter has a friend over with a weird Amazon tablet thing that only handles a subset of things correctly.
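If I ever do migrate codecs, the first step would be confirming what's actually in the library. A quick sketch of that check, assuming ffprobe is installed; the library path and file extension are made up for illustration:

```python
import subprocess
from pathlib import Path

LIBRARY = Path("/tank/media")  # hypothetical library path

def video_codec(path: Path) -> str:
    """Ask ffprobe for the codec of the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Flag anything that isn't h264, since those are the files
# the smart TVs (or the weird Amazon tablet) might choke on.
for f in LIBRARY.rglob("*.mkv"):
    codec = video_codec(f)
    if codec != "h264":
        print(f"{f}: {codec}")
```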
The 1060 isn't an amazing card, really, but it could do some basic Ollama inference if I wanted. I think it's the 6GB model, which is pretty low on VRAM but usable for some small LLMs.
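If I ever wanted to test that, Ollama serves an HTTP API on localhost:11434, so a sketch like this would do; the model name here is just an assumption of something small enough to fit in 6GB of VRAM:

```python
import json
import urllib.request

# Ollama's local generate endpoint; the model choice is an
# assumption, picked to fit in ~6GB of VRAM.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3.2:3b",
        "prompt": "In one sentence, what is ZFS raidz1?",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```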