Marauder is a fully dockerized cloud media server solution utilising Google Drive as an unlimited disk backend, assisting with the setup of tools like Sonarr, Radarr, Sabnzbd and Transmission, along with services that enhance their function.
Depends on a proprietary service outside the user's control
Check out the "Getting Started" page here. This should be enough to get you going.
The documentation is still under construction and is subject to change! If you have any questions in the meantime, please open an issue.
There are several other attempts at this, such as Cloudbox and PGBlitz; however, I found those setups a little lacking in certain areas.
This setup has one gigantic shared folder named `shared`. It's set up with Rclone union mounts so that programs like Sonarr, Radarr, Medusa etc. all believe that the downloading and gdrive media directories are on the same filesystem (because they are!). This drastically reduces I/O compared to moving things between volumes. There's then some clever volume mapping so all the programs have matched-up directories even though they're technically pointed to different places.
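The union setup described above can be sketched with a minimal `rclone.conf` (the remote names and paths here are illustrative, not Marauder's actual configuration):

```ini
# [gdrive] is the Google Drive backend; [shared] merges a local
# download directory with it so both appear as one filesystem
[gdrive]
type = drive
scope = drive

[shared]
type = union
upstreams = /opt/marauder/local gdrive:media
```

Mounting the union (e.g. `rclone mount shared: /shared`) then gives Sonarr, Radarr and friends a single directory tree covering both upstreams, so "moves" between downloads and media are cheap renames rather than cross-volume copies.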
Q: How much RAM/CPU/Disk space do I need?
A: This is hard to say. I have 32 GB on my VM, but I also have a much larger collection than most, so Rclone uses a lot of RAM. Rclone has fairly high limits in its current setup, so it may run out of memory on machines with less than 16 GB - I may include some options to reduce that if needed.
In terms of CPU, it depends on how much you're downloading and whether you're using Usenet or torrents. I wouldn't advise using a Pi 1 for it, but it might run okay on a Pi 4 if you're only downloading a couple of films a day.
For disk space, you need room to store incomplete downloads and to cache completed ones before they're uploaded, so it depends on what you're downloading. I personally run the entire setup on two SSDs in RAID 0 for high disk speeds, since essentially they're just caches before uploading.
Q: Why are most of the containers on the host network?
A: You'd be surprised how much CPU a bandwidth-heavy container can burn going through the Docker proxy (especially something like Sabnzbd). It just makes sense to let the heavy stuff bind straight to the host, even though that comes with its own set of connectivity challenges. Also, each published port gets its own proxy process, so exposing a large port range for torrenting would suck.
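In Compose terms that looks roughly like the following (the service and image here are placeholders, not Marauder's actual definitions):

```yaml
services:
  sabnzbd:
    image: linuxserver/sabnzbd   # placeholder image
    network_mode: host           # bypasses docker-proxy entirely
    # note: with host networking, any `ports:` mappings are ignored
```

With `network_mode: host` the container shares the host's network stack directly, so there is no userland proxy process per published port.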
Q: How do I disable some of the services I don't want/need?
A: You can edit the `.env` file (after you've made a copy of `.env.template`) and disable any service by changing its flag from a `1` to a `0`. E.g. to disable LazyLibrarian, you could set `lazylibrarian_enabled=0`.
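For example, a trimmed-down `.env` might look like this (only `lazylibrarian_enabled` is taken from the answer above; the other flag names are illustrative):

```
# .env (copied from .env.template)
sonarr_enabled=1
radarr_enabled=1
lazylibrarian_enabled=0
```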
Q: There are some extra settings in the service interface that you don't mention! What do I do?!
A: That's intentional. The defaults for whatever I don't mention are usually fine. If I included every config option, the setup would take far too long to work through.
Q: Why is there a custom container for most services?
A: You can see the containers in Makeshift/Marauder-Containers.
Q: Why Plex as opposed to Emby/Jellyfin/Serviio/Whatever?
A: Jellyfin's Chromecast support is iffy at best, especially with subtitles (I watch anime on my Chromecast, deal with it). Emby has similar issues with casting and subtitles but is probably otherwise the least-worst offering. Serviio is a little bare on features. Plex, as much hacking as it requires to get to work, and as absolutely freaking terrible as its new interface is, does work once it's set up properly, and handles the abuse I throw at it fairly well.
Q: Wait, you automatically switch team drives? Won't that cause duplicates?
A: Nope, I fixed that in #4.
Pretty much all documentation has been moved to the wiki.