Current setup is PMS running on a Synology 5-bay, and another PMS running on a Shield Pro. The NAS server is primarily used for remote streaming, while the Shield serves my home LAN (Apple TVs mainly).
I’ve been seeing stuttering on larger files, whether using the Plex app or Infuse, and I’m fairly certain the Synology is the weak link. Network performance in the house is pretty solid, though admittedly I could stand to test it more thoroughly. I’ve been looking at moving my library to a standalone system, specifically the Beelink ME Mini (which happens to be on sale!). What I don’t know is the best way to build this out.
I don’t want to have to buy all 6 SSDs (or at least 6x4TB ones!) at once, so I’d be looking at either a stock Linux (Ubuntu or Rocky) install w/ a BTRFS pool for the SSDs, I guess (I’m thinking I can use the eMMC for the OS, depending on how big the install is - that, or use the SSD in slot 4). Alternatively, I could set up TrueNAS w/ the Plex app to manage the storage.
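If I go the stock-Linux/BTRFS route, the add-a-drive-later part would look roughly like the sketch below. Device names and the mountpoint are placeholders, and I’d obviously sanity-check the balance step before running it on real data:

```python
# Rough sketch of growing a btrfs pool one SSD at a time.
# Assumes the pool was created earlier (e.g. mkfs.btrfs -L media /dev/nvme0n1)
# and is mounted; the device name and mountpoint below are placeholders. Run as root.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def grow_pool(new_device: str, mountpoint: str = "/srv/media") -> None:
    # Add the new SSD to the mounted filesystem...
    run(["btrfs", "device", "add", new_device, mountpoint])
    # ...then rebalance so existing data spreads across all members.
    run(["btrfs", "balance", "start", mountpoint])
    # Show the resulting allocation.
    run(["btrfs", "filesystem", "usage", mountpoint])

if __name__ == "__main__":
    grow_pool("/dev/nvme1n1")
```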
As for populating the media, I plan to keep the Synology as the central repo of my data. I have it replicating to another NAS at my dad’s house, with movies/music/tv replicating using Syncthing. I plan to also use Syncthing to populate the Beelink.
Anyway, please poke holes in this plan and/or suggest a better one. My main goals are to get the media I’m streaming off spinning disk w/ minimal power draw (didn’t mention that above) in a way that I can expand storage as necessary to accommodate the media library. Nothing’s purchased yet, so I’m not married to the hardware. I would ideally like to convert the library to h.265 or even AV1 if I can make it work.
ETA: For clarity: I’m not transcoding AFAIK. My Shield mounts the Synology over SMB and mostly works fine, until I try to play anything 4k - then I get stuttering. On the surface, this sounded like a network issue, but I can’t find a problem w/ the LAN. My thought was to move the PMS to a single location w/ local storage, and use the Synology just as an archive.
ETA2: FWIW, I have not expanded the memory on the Synology or installed any cache drives.
I would ideally like to convert the library to h.265 or even AV1 if I can make it work.
Unless you’ve downloaded remuxes (which I doubt), I’d seriously recommend redownloading instead of converting your existing files.
h.265 and especially AV1 take a long time to encode by CPU, and hardware encoding won’t give you any space savings unless you’re okay with losing a lot of detail.
Redownloading is most definitely faster and will result in more space savings for the quality you get. PS: Unless you’ve got data volume limits, but even then I’d recommend slowly upgrading over time. It’s quite simple with the TRaSH guides and giving h.265 a higher score.
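If you do end up converting some of it anyway, the only way to actually save space at decent quality is a slow software encode, something along these lines (filenames and CRF are placeholders - test on one file before scripting a whole library):

```python
# Minimal sketch of a software x265 re-encode with ffmpeg: keeps every stream,
# copies audio/subs, and only re-encodes the video. Filenames and CRF are placeholders.
import subprocess

src = "movie.mkv"
dst = "movie.x265.mkv"

subprocess.run([
    "ffmpeg", "-i", src,
    "-map", "0",            # keep all streams from the input
    "-c", "copy",           # copy everything by default...
    "-c:v", "libx265",      # ...except video, which is re-encoded in software
    "-crf", "20",
    "-preset", "slow",
    dst,
], check=True)
```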
Agreed! The library is populated by my parents and me. I’d have to look again, but I think it’s around 12TB.
just get a cheap n100 box, don’t overspend
I would double and triple check that you’re not transcoding. Even if the client is playing the file at its native resolution, you might still be transcoding depending on the codec. For instance, with 1080p content, anything h265 or AV1 is transcoded into h264 by the server. There are also a few other situations where Plex will force transcoding or down-convert the video whether you want it to or not.
Your NAS shouldn’t be having trouble serving the file to Plex. I’d bet it’s transcoding in the background and you just don’t realize it.
Aside from looking at the current activity on the server web page, where might I look to see if this is true?
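In the meantime, I suppose I could poll the sessions endpoint myself and look at the decision fields. Something like this, with the host and token obviously being placeholders:

```python
# Quick check of what Plex is actually doing with each active stream.
# Assumes a reachable PMS and a valid X-Plex-Token; both values below are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

PLEX = "http://192.168.1.10:32400"
TOKEN = "YOUR_PLEX_TOKEN"

with urllib.request.urlopen(f"{PLEX}/status/sessions?X-Plex-Token={TOKEN}") as resp:
    root = ET.fromstring(resp.read())

for video in root.iter("Video"):
    title = video.get("title")
    # A TranscodeSession element means the server is transcoding this stream.
    ts = video.find(".//TranscodeSession")
    if ts is None:
        print(f"{title}: direct play / direct stream")
    else:
        print(f"{title}: transcoding (videoDecision={ts.get('videoDecision')}, "
              f"audioDecision={ts.get('audioDecision')})")
```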
I was like you and had my Synology DS920+ doing everything - the full *arr suite in docker containers, Plex server (native), torrents (in container), etc. I’ve recently moved away from that to a new setup which I think will be significantly better and simpler.
- Mac Mini M4: this is running Plex Media Server and all the *arrs. I’m currently running them natively, but might move them off to docker. Don’t really see any need to though; they’re safe and I know how to configure them like the back of my hand at this point.
- Terra-Master D5-310 5 Bay DAS: running in RAID5 with 5x10TB WD Red drives. Since this is connected directly to the Mac via USB-C, it’s essentially an external HDD, so it can be backed up with Backblaze for $99/year! This is great because it’s plain RAID5, not Synology’s proprietary RAID (which is great, btw), so I’m not locked into their hardware ecosystem.
The Mac has absolutely no problem transcoding as many streams as you can throw at it, and the power draw is insanely low. It tops out at like 50W or something stupid, and idles at about 4W.
This sounds pretty great, TBH. I think I’m probably tied to Synology for the foreseeable future, with my parents’ NAS and mine being each other’s off-site backup. I know there are other ways to do it, but the investments are already made on both ends. Plus, they’re retired, so the 1522 with 18TB drives wasn’t a small expense for them!
Which part is your problem, serving the media from disk, or transcoding and serving that stream?
I’ve updated the OP to answer this. I think serving the media from the spinning disk is the heart of the problem.
Prove it with data, else you’ll just be blindly throwing money down the drain.
The network testing I’ve done (iperf and file transfers) hasn’t revealed any issues. I’m seeing consistent 1Gb speeds. I could try some wireshark monitoring, I guess.
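One more test that exercises disk + SMB + network together (instead of iperf’s memory-to-memory path) would be timing a straight sequential read of a big file off the mounted share. Rough sketch, with the path being a placeholder:

```python
# Time a sequential read of a big file over the SMB mount to measure sustained
# throughput through disk + SMB + network. The path is a placeholder; use a file
# you haven't just played, or the page cache will inflate the number.
import time

PATH = "/Volumes/media/movies/big_remux.mkv"
CHUNK = 8 * 1024 * 1024  # 8 MiB reads

total = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start
print(f"{total / 1e6:.0f} MB in {elapsed:.1f}s = {total * 8 / elapsed / 1e6:.0f} Mbit/s")
```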
I believe I have a very similar issue to yours. It only occurs on remux/high-bitrate 4k videos, and happens no matter the player, etc. I currently fix it by putting a limit on the qbit upload speed, and then everything plays fine.
I am interested in what your solution is.
limit qbit upload speed
Not sure what you’re referring to here.
It’s a qBittorrent setting. You can set upload and download limits to turn on at specific times, or whenever you want, by clicking the little speed dial.
So if you’re not using those hard drives for other things such as torrents, then disregard.
Got it: qbit == qbittorrent. I’ve thought about getting the remaining docker containers off of the Synology. I may look into that this weekend.
Yeah, I would test a large file and, as soon as it buffers, put a limit on qBittorrent (or anything else that could be reading and writing lots of data on those drives) and see if that fixes the issue. I wish there was a better way.
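The closest I’ve got to “a better way” is scripting the throttle against the qBittorrent WebUI API right before a watch session. Rough sketch - host and credentials are placeholders, and it’s worth double-checking the endpoints against your qBittorrent version:

```python
# Set qBittorrent's global upload limit via its WebUI API (v2) before watching something.
# URL, username, and password are placeholders; the WebUI has to be enabled.
import urllib.parse
import urllib.request

BASE = "http://192.168.1.10:8080"
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())

def post(path: str, fields: dict) -> None:
    data = urllib.parse.urlencode(fields).encode()
    # qBittorrent's CSRF check wants a matching Referer header.
    req = urllib.request.Request(f"{BASE}{path}", data=data, headers={"Referer": BASE})
    opener.open(req)

post("/api/v2/auth/login", {"username": "admin", "password": "adminadmin"})
# Limit is in bytes per second; 0 means unlimited.
post("/api/v2/transfer/setUploadLimit", {"limit": str(2 * 1024 * 1024)})
print("global upload limit set to 2 MiB/s")
```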
I had issues streaming directly from one device to the other without transcoding on WiFi. (I know you’re wired! Hear me out.)
I found that, although it didn’t fix the issue, it did help to switch from SMB to NFS. Something about the way the protocol works meant that SMB had enough overhead to worsen my stuttering issues beyond what the spotty WiFi connection caused on its own. It significantly sped up scrubbing/seek times as well.
It may not be the issue, but it may be a step worth checking just to see if it is a part of the issue.
For what it’s worth, 4k remuxes can have bitrate spikes well exceeding the limits of a single gbps wire. If you have a player with limited memory, or just limited cache settings, this may also be a part of the problem.
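If you want to see how spiky a particular file actually is, you can bucket the video packets per second with ffprobe. Rough sketch (assumes ffprobe is installed; the filename is a placeholder):

```python
# Estimate per-second video bitrate peaks of a file with ffprobe, to see whether
# momentary spikes get anywhere near the link or player-cache limits.
import subprocess
from collections import defaultdict

path = "big_remux.mkv"  # placeholder

out = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "packet=pts_time,size", "-of", "csv=p=0", path],
    capture_output=True, text=True, check=True,
).stdout

buckets = defaultdict(int)  # whole second -> bytes
for line in out.splitlines():
    parts = line.split(",")
    if len(parts) < 2 or parts[0] in ("", "N/A"):
        continue
    buckets[int(float(parts[0]))] += int(parts[1])

peak = max(buckets.values())
avg = sum(buckets.values()) / len(buckets)
print(f"average: {avg * 8 / 1e6:.0f} Mbit/s, worst one-second burst: {peak * 8 / 1e6:.0f} Mbit/s")
```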
I’ve looked into NFS multiple times. I work in HPC implementation, so believe me, I know about SMB/CIFS performance (or lack thereof!). I just haven’t had the time to figure out ID mapping. What NFS version do you use, and how do you handle file ownership on the shares? I suppose it’s all read-only, so that would make it a bit easier?
It’s all read only, yes, but I just use a group specifically for NAS access and put users that need it in there.
I use the NFS version from the Debian repository; not actually sure which one, and didn’t even know that it mattered.
Yeah, NFS v2, v3, or v4 can make a difference. I don’t know that many people use v2 anymore. If you’re using the current release in your distribution and didn’t specify a version, I would guess you’re on v4.
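If you want to see what actually got negotiated on a Linux client, the mount options in /proc/mounts spell it out. Quick sketch:

```python
# Report the negotiated NFS protocol version for each NFS mount on a Linux client.
# Reads /proc/mounts, so it only works on the machine doing the mounting.
with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype, options = line.split()[:4]
        if fstype.startswith("nfs"):
            vers = [o for o in options.split(",") if o.startswith("vers=")]
            print(f"{mountpoint} ({device}): {vers[0] if vers else 'vers not listed'}")
```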
TBH I would consider an M4 Mac Mini and either a NAS or DAS to go along with it. The power, efficiency, and price make it a really compelling choice.
I did this. It is a mixed bag.
What issues are you having with it? I just did this and have found it to be pretty much perfect.
- The lack of something like mergerfs. It’s okay for Plex, because you can add each drive separately, but that doesn’t work for all software. There is unionfs with macFUSE, but it’s very buggy. Maybe the new FSKit backend will improve over time, but not today. I had to modify some docker containers to install and run mergerfs inside the container.
- Docker. You can use OrbStack or Docker Desktop, but both work the same way: they create an internal Linux VM and run Docker there. That means no GPU acceleration and worse performance in general, so no GPU/AI hardware acceleration in software like Immich. I have some software running in OrbStack and some native on the host (things like Plex, for hardware transcoding), but I would prefer to run it all in containers. Maybe Apple’s new container tool will solve this, but it’s at an early stage. Far from finished.
- Because a Mac mini isn’t standard hardware, there is no way to remotely power-control it. I bought a JetKVM, but of course I can’t use the ATX adapter to control the power state. This is all fine until macOS locks up while you’re abroad. Related: you need display access to unlock FileVault.
- In macOS there is no way to tell why an HDD woke up from sleep, so you can’t find the culprit that keeps waking your HDDs. Pretty annoying.
- The SMB implementation on macOS is shit, and I know for a fact I’m not alone in this opinion.
- macOS permission prompts. Not nice when you can’t figure out why your software has no network access, because you launched it via SSH and the local-network permission prompt is sitting on the desktop.
- Hardware is expensive. I don’t mean the Mac mini itself, I mean Thunderbolt accessories.
- Launchctl sucks
But there are also some good things:
- Power usage is great
- The transcoding power is insane. 100Mbit 4k movie? No problem. The 1650 Ti I used in the past struggled with it.
I just commented saying OP should consider this haha. I just went from a DS920+ to a M4 Mac Mini and Terra-Master DAS.
I have thought of that… Most of the daily driver systems in the house are Macs, so it would probably work out pretty well.
I just did something sort of like what you are doing and after a few hiccups, it’s working great. My Synology just couldn’t handle transcoding with docker containers running in the background.
Couple of differences from your plan: I chose an N100 over the N150 because it used less power and I wasn’t loading the thing up with CPU-dependent tasks. The N150 is about 30% faster if memory serves, but draws more power. Second, do you really need a second m.2 SSD BTRFS volume? Your Synology is perfectly capable of being the file storage. I’d personally spend the money you’d save by buying a smaller N150 device on a tasty drive to expand the existing capacity rather than start a second pool from scratch.
Finally, I wouldn’t worry about converting media unless you are seriously pinched for space. Every time you do, you lose quality.
If you went from “everything works”, to “now it stutters”, then you either have a networking issue, or a resource issue on the source.
Did you update something recently?
Do you have network stats from your router?
Do some devices work fine, and others don’t?
I see no other issues from the network. The thing that “changed” is me trying to watch 4k stuff on my Plex server. Up until recently, I didn’t bother with 4k. No real reason for trying it now, TBH. I’ve never felt that 4k was necessary for home viewing on anything smaller than a 100” screen (my largest is 75”).
Okay, so what’s your network look like?
Specs on your router, is this wired or wireless to the Shield, how much other traffic are the other network clients pulling, and is this a constant, or just happens intermittently?
Everything’s wired. Router is a TP-Link BE63, with 2 APs w/ wired backhaul. The Shield is on the same switch as the Synology. STBs are throughout the house, but generally a max of 3 hops from the Shield/Synology. All Netgear blue-box 1Gb dumb switches. At some point in the near future, I plan on getting this stuff onto a central switch, so everything is one leaf switch away from it.
ETA: if I’m watching something, the network is generally pretty quiet. I have most data-intensive things (downloads, backups, off-site replications) set to happen in the wee hours.
Netgear is…kinda shit. Is that 1Gb for the whole switch, or per-port? Any traffic steering on the network?
They’re all just dumb consumer switches. Nothing managed… yet.