Table of Contents
- I HA Docker Swarm
- II Chef’s Favorites (Docker)
- 15 AutoPirate
- 16 SABnzbd
- 17 NZBGet
- 18 RTorrent / ruTorrent
- 19 Sonarr
- 20 Radarr
- 21 Mylar
- 22 LazyLibrarian
- 23 Headphones
- 24 Lidarr
- 25 NZBHydra
- 26 NZBHydra 2
- 27 Ombi
- 28 Jackett
- 29 Heimdall
- 30 Duplicity
- 31 Elkar Backup
- 32 Emby
- 33 Home Assistant
- 34 iBeacons with Home assistant
- 35 Huginn
- 36 Kanboard
- 37 Miniflux
- 38 Munin
- 39 NextCloud
- 40 OwnTracks
- 41 phpIPAM
- 42 Plex
- 43 PrivateBin
- 44 Swarmprom
- III Recipes (Docker)
- 45 Bitwarden
- 46 BookStack
- 47 Calibre-Web
- 48 Collabora Online
- 49 Ghost
- 50 GitLab
- 51 Gitlab Runner
- 52 Gollum
- 53 InstaPy
- 54 KeyCloak
- 55 Create KeyCloak Users
- 56 Authenticate KeyCloak against OpenLDAP
- 57 Add OIDC Provider to KeyCloak
- 58 OpenLDAP
- 59 Mail Server
- 60 Minio
- 61 Piwik
- 62 Portainer
- 63 Realms
- 64 Tiny Tiny RSS
- 65 Wallabag
- 66 Wekan
- 67 Wetty
- IV Reference
- Notes
1 What is this?
Funky Penguin’s “Geek Cookbook” is a collection of how-to guides for establishing your own container-based self-hosting platform, using either Docker Swarm or Kubernetes.
Running such a platform enables you to run self-hosted tools such as AutoPirate (Radarr, Sonarr, NZBGet and friends), Plex, NextCloud, and includes elements such as:
- Automatic SSL-secured access to all services (with LetsEncrypt)
- SSO / authentication layer to protect unsecured / vulnerable services
- Automated backup of configuration and data
- Monitoring and metrics collection, graphing and alerting
Recent updates and additions are posted on the CHANGELOG, and there’s a friendly community of like-minded geeks in the Discord server.
1.1 Who is this for?
You're already familiar with concepts such as virtual machines, Docker containers, LetsEncrypt SSL certificates, databases, and command-line interfaces.
You’ve probably played with self-hosting some mainstream apps yourself, like Plex, NextCloud, Wordpress or Ghost.
1.2 Why should I read this?
So if you’re familiar enough with the concepts above, and you’ve done self-hosting before, why would you read any further?
- You want to upskill. You want to work with container orchestration, Prometheus and Grafana, and Kubernetes.
- You want to play. You want a safe sandbox to test new tools, keeping the ones you want and tossing the ones you don’t.
- You want reliability. Once you go from playing with a tool to actually using it, you want it to be available when you need it. Having to “quickly ssh into the basement server and restart plex” doesn’t cut it when you finally convince your wife to sit down with you to watch sci-fi.
1.3 What have you done for me lately? (CHANGELOG)
Check out recent changes in the CHANGELOG.
1.4 What do you want from me?
I want your support, either in the financial sense, or as a member of our friendly geek community (or both!)
Get in touch
- Come and say hi to me and the friendly geeks in the Discord chat or the Discourse forums - say hi, ask a question, or suggest a new recipe!
- Tweet me up, I’m @funkypenguin!
- Contact me by a variety of channels
Sponsor / Patronize me
The best way to support this work is to become a GitHub Sponsor / Patreon patron. You get:
- warm fuzzies,
- access to the pre-mix repo,
- an anonymous plug you can pull at any time,
- and a bunch more loot based on tier
.. and I get some pocket money every month to buy wine, cheese, and cryptocurrency!
Impulsively click here (NOW quick do it!) to sponsor me via GitHub, or patronize me via Patreon!
Work with me
Need some Cloud / Microservices / DevOps / Infrastructure design work done? I’m a full-time AWS-certified consultant, this stuff is my bread and butter! :bread: :fork_and_knife: Get in touch, and let’s talk business!
By the time I had enlisted Funky Penguin’s help, I’d architected myself into a bit of a nightmare with Kubernetes. I knew what I wanted to achieve, but I’d made a mess of it. Funky Penguin (David) was able to jump right in and offer a vital second opinion on everything I’d done, pointing out where things could be simplified and streamlined, and suggesting better alternatives. He unblocked me on all the technical hurdles to launching my SaaS in GKE!
With him delivering the container/Kubernetes architecture and helm CI/CD workflow, I was freed up to focus on coding and design, which fast-tracked me to launching on time. And now I have a simple deployment process that is easy for me to execute and maintain as a solo founder.
I have no hesitation in recommending him for your project, and I’ll certainly be calling on him again in the future.
– John McDowall, Founder, kiso.io
Buy my book
I’m publishing the Geek Cookbook as a formal eBook (PDF, mobi, epub), on Leanpub (https://leanpub.com/geek-cookbook). Check it out!
2 How to read this book
2.1 Structure
- “Recipes” generally follow on from each other. I.e., if a particular recipe requires a mail server, that mail server would have been described in an earlier recipe.
- Each recipe contains enough detail in a single page to take a project from start to completion.
- When there are optional add-ons/integrations possible to a project (i.e., the addition of “smart LED bulbs” to Home Assistant), this will be reflected either as a brief “Chef’s note” after the recipe, or, if they’re substantial enough, as a sub-page of the main project.
2.2 Conventions
- When creating swarm networks, we always explicitly set the subnet in the overlay network, to avoid potential conflicts (which docker won’t prevent, but which will generate errors) (https://github.com/moby/moby/issues/26912)
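For example, here's a sketch of what this convention looks like when creating an overlay network manually (the network name and subnet here are purely illustrative; in the recipes, the same thing is done via the ipam section of each stack's compose file):

docker network create --driver=overlay --subnet=172.16.99.0/24 --attachable my_network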
3 CHANGELOG
3.1 Subscribe to updates
- Email : Sign up here (double-opt-in) to receive email updates on new and improved recipes!
- Mastodon: https://mastodon.social/@geekcookbook_changes
- RSS: https://mastodon.social/@geekcookbook_changes.rss
- The #changelog channel in our Discord server
3.2 Recent additions to work-in-progress
- Kubernetes recipes for UniFi controller, Miniflux, Kanboard and PrivateBin coming in March! (19 Mar 2019)
3.3 Recently added recipes
- Overhauled Ceph (Shared Storage) recipe for Ceph Octopus (v15) (25 May 2020)
- Added recipe for making your own DIY Kubernetes Cluster (14 December 2019)
- Added recipe for authenticating Traefik Forward Auth against KeyCloak (16 May 2019)
- Added Bitwarden, an awesome open-source password manager, with great mobile sync support (14 May 2019)
- Added Traefik Forward Auth, replacing function of multiple oauth_proxies with a single, 7MB Go application, which can authenticate against Google, KeyCloak, and other OIDC providers (10 May 2019)
3.4 Recent improvements
- Added recipe for automated snapshots of Kubernetes Persistent Volumes, instructions for using Helm, and recipe for deploying Traefik, which completes the Kubernetes cluster design! (9 Feb 2019)
- Added detailed description (and diagram) of our Kubernetes design, plus a simple load-balancer design to avoid the complexities/costs of permitting ingress access to a cluster (7 Feb 2019)
- Added an introductory/explanatory page, including a children’s story, on Kubernetes (29 Jan 2019)
- NextCloud updated to fix CalDAV/CardDAV service discovery behind Traefik reverse proxy (12 Dec 2018)
4 Welcome to Funky Penguin’s Geek Cookbook
4.1 Hello world,
I’m David.
I’m a contracting IT consultant, with a broad range of experience and skills. I’m an AWS Certified Solution Architect (Professional), a remote worker, I’ve had a book published, and I blog on topics that interest me.
4.2 Why Funky Penguin?
My first “real” job, out of high-school, was working the IT helpdesk in a typical pre-2000 organization in South Africa. I enjoyed experimenting with Linux, and cut my teeth by replacing the organization’s Exchange 5.5 mail platform with a 15-site qmail-ldap cluster, with amavis virus-scanning.
One of our suppliers asked me to quote to do the same for their organization. With nothing to lose, and half-expecting to be turned down, I quoted a generous fee, and chose a cheeky company name. The supplier immediately accepted my quote, and the name (“Funky Penguin”) stuck.
4.3 Technical Documentation
During the same “real” job above, I wanted to deploy jabberd, for internal instant messaging within the organization, and as a means to control the sprawl of ad-hoc instant-messaging among staff, using ICQ, MSN, and Yahoo Messenger.
To get management approval to deploy, I wrote a logger (with web UI) for jabber conversations (Bandersnatch), and a 75-page user manual (in Docbook XML) for a spunky Russian WinXP jabber client, JAJC.
Due to my contributions to phpList, I was approached in 2011 by Packt Publishing, to write a book about using PHPList.
4.4 Work with me
Need some Cloud / Microservices / DevOps / Infrastructure design work done? I’m a full-time [AWS-certified][aws_cert] consultant, this stuff is my bread and butter! :bread: :fork_and_knife: [Get in touch][contact], and let’s talk business!
[plex]: https://www.plex.tv/
[nextcloud]: https://nextcloud.com/
[wordpress]: https://wordpress.org/
[ghost]: https://ghost.io/
[discord]: http://chat.funkypenguin.co.nz
[patreon]: https://www.patreon.com/bePatron?u=6982506
[github_sponsor]: https://github.com/sponsors/funkypenguin
[github]: https://github.com/sponsors/funkypenguin
[discourse]: https://discourse.geek-kitchen.funkypenguin.co.nz/
[twitter]: https://twitter.com/funkypenguin
[contact]: https://www.funkypenguin.co.nz
[aws_cert]: https://www.certmetrics.com/amazon/public/badge.aspx?i=4&t=c&d=2019-02-22&ci=AWS00794574
4.5 Contact Me
Contact me by:
- Jumping into our Discord server
- Email (davidy@funkypenguin.co.nz)
- Twitter (@funkypenguin)
Or by using the form below:
<div class="panel">
<iframe width="100%" height="400" frameborder="0" scrolling="no" src="https://funkypenguin.wufoo.com/forms/z16038vt0bk5txp/"></iframe>
</div>
I HA Docker Swarm
This section introduces the HA Docker Swarm, which will be the basis for all the recipes discussed.
5 Design
In the design described below, our “private cloud” platform is:
- Highly-available (can tolerate the failure of a single component)
- Scalable (can add resource or capacity as required)
- Portable (run it on your garage server today, run it in AWS tomorrow)
- Secure (access protected with LetsEncrypt certificates and optional OIDC with 2FA)
- Automated (requires minimal care and feeding)
5.1 Design Decisions
Where possible, services will be highly available.
This means that:
- At least 3 docker swarm manager nodes are required, to provide fault-tolerance of a single failure.
- Ceph is employed for shared storage, because it too can be made tolerant of a single failure.
Where multiple solutions to a requirement exist, preference will be given to the most portable solution.
This means that:
- Services are defined using docker-compose v3 YAML syntax
- Services are portable, meaning a particular stack could be shut down and moved to a new provider with minimal effort.
5.2 Security
Under this design, the only inbound connections we’re permitting to our docker swarm in a minimal configuration (you may add custom services later, like UniFi Controller) are:
Network Flows
- HTTP (TCP 80) : Redirects to https
- HTTPS (TCP 443) : Serves individual docker containers via SSL-encrypted reverse proxy
Authentication
- Where the hosted application provides a trusted level of authentication (e.g., NextCloud), or where the application requires public exposure (e.g., PrivateBin), no additional layer of authentication will be required.
- Where the hosted application provides inadequate authentication (e.g., NZBGet) or none at all (e.g., Gollum), a further layer of authentication against an OAuth provider will be required.
5.3 High availability
Normal function
Assuming a 3-node configuration, under normal circumstances the following is illustrated:
- All 3 nodes provide shared storage via Ceph, which is provided by a docker container on each node.
- All 3 nodes participate in the Docker Swarm as managers.
- The various containers belonging to the application “stacks” deployed within Docker Swarm are automatically distributed amongst the swarm nodes.
- Persistent storage for the containers is provided via a CephFS mount.
- The traefik service (in swarm mode) receives incoming requests (on HTTP and HTTPS), and forwards them to individual containers. Traefik knows the containers' names because it's able to read the docker socket.
- All 3 nodes run keepalived, at varying priorities. Since traefik is running as a swarm service and listening on TCP 80/443, requests made to the keepalived VIP and arriving at any of the swarm nodes will be forwarded to the traefik container (no matter which node it's on), and then on to the target backend.
![Docker Swarm HA - normal function](../images/docker-swarm-ha-function.png)
Node failure
In the case of a failure (or scheduled maintenance) of one of the nodes, the following is illustrated:
- The failed node no longer participates in Ceph, but the remaining nodes provide enough fault-tolerance for the cluster to operate.
- The remaining two nodes in Docker Swarm achieve a quorum and agree that the failed node is to be removed.
- The (possibly new) leader manager node reschedules the containers known to be running on the failed node, onto other nodes.
- The traefik service is either restarted or unaffected, and as the backend containers stop/start and change IP, traefik is aware and updates accordingly.
- The keepalived VIP continues to function on the remaining nodes, and docker swarm continues to forward any traffic received on TCP 80/443 to the appropriate node.
![Docker Swarm - node failure](../images/docker-swarm-node-failure.png)
Node restore
When the failed (or upgraded) host is restored to service, the following is illustrated:
- Ceph regains full redundancy
- Docker Swarm managers become aware of the recovered node, and will use it for scheduling new containers
- Existing containers which were migrated off the node are not migrated back
- Keepalived VIP regains full redundancy
![Docker Swarm - node restore](../images/docker-swarm-node-restore.png)
Total cluster failure
A day after writing this, my environment suffered a fault whereby all 3 VMs were unexpectedly and simultaneously powered off.
Upon restore, docker failed to start on one of the VMs due to a local disk space issue. However, the other two VMs started, established the swarm, mounted their shared storage, and started up all the containers (services) which were managed by the swarm.
In summary, although I suffered an unplanned power outage to all of my infrastructure, followed by a failure of a third of my hosts… ==all my platforms are 100% available with absolutely no manual intervention==.
5.4 Chef’s Notes
6 Nodes
Let’s start building our cluster. You can use either bare-metal machines or virtual machines - the configuration would be the same. To avoid confusion, I’ll be referring to these as “nodes” from now on.
In 2017, I initially chose the “Atomic” CentOS/Fedora image for the swarm hosts, but later found its outdated version of Docker to be problematic with advanced features like GPU transcoding (in Plex), Swarmprom, etc. In the end, I went mainstream, and now simply use a modern Ubuntu installation.

6.1 Ingredients
New in this recipe:
* [ ] 3 x nodes (bare-metal or VMs), each with:
* A mainstream Linux OS (tested on either CentOS 7+ or Ubuntu 16.04+)
* At least 2GB RAM
* At least 20GB disk space (but it’ll be tight)
* [ ] Connectivity to each other within the same subnet, and on a low-latency link (i.e., no WAN links)
6.2 Preparation
Permit connectivity
Most modern Linux distributions include firewall rules which only permit minimal required incoming connections (like SSH). We’ll want to allow all traffic between our nodes. The steps to achieve this in CentOS/Ubuntu are a little different…
CentOS
Add something like this to `/etc/sysconfig/iptables`:
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
And restart iptables with systemctl restart iptables
Ubuntu
Install the (non-default) persistent iptables tools by running `apt-get install iptables-persistent`, establishing some default rules (dpkg will prompt you to save the current ruleset), and then add something like this to `/etc/iptables/rules.v4`:
# Allow all inter-node communication
-A INPUT -s 192.168.31.0/24 -j ACCEPT
And refresh your running iptables rules with iptables-restore < /etc/iptables/rules.v4
Enable hostname resolution
Depending on your hosting environment, you may have DNS automatically set up for your VMs. If not, it’s useful to set up static entries in /etc/hosts for the nodes. For example, I set up the following:
192.168.31.11 ds1 ds1.funkypenguin.co.nz
192.168.31.12 ds2 ds2.funkypenguin.co.nz
192.168.31.13 ds3 ds3.funkypenguin.co.nz
Set timezone
Set your local timezone, by running:
ln -sf /usr/share/zoneinfo/<your timezone> /etc/localtime
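For example, assuming you're in New Zealand like the chef (adjust for your own locale):

ln -sf /usr/share/zoneinfo/Pacific/Auckland /etc/localtime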
6.3 Serving
After completing the above, you should have:
Deployed in this recipe:
* [X] 3 x nodes (bare-metal or VMs), each with:
* A mainstream Linux OS (tested on either CentOS 7+ or Ubuntu 16.04+)
* At least 2GB RAM
* At least 20GB disk space (but it’ll be tight)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (i.e., no WAN links)
6.4 Chef’s Notes
7 Shared Storage (Ceph)
While Docker Swarm is great for keeping containers running (and restarting those that fail), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (hint: you do!), you need to provide shared storage to every docker node.
![Ceph](../images/ceph.png)
7.1 Ingredients
3 x Virtual Machines (configured earlier), each with:
* [X] Support for “modern” versions of Python and LVM
* [X] At least 1GB RAM
* [X] At least 20GB disk space (but it’ll be tight)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (i.e., no WAN links)
* [X] A second disk dedicated to the Ceph OSD
* [X] Each node should have the IP of every other participating node hard-coded in /etc/hosts (including its own IP)
7.2 Preparation
Earlier iterations of this recipe (based on Ceph Jewel) required significant manual effort to install Ceph in a Docker environment. In the 2+ years since Jewel was released, significant improvements have been made to the ceph “deploy-in-docker” process, including the introduction of the cephadm tool. Cephadm is the tool which now does all the heavy lifting, below, for the current version of ceph, codenamed “Octopus”.

Pick a master node
One of your nodes will become the cephadm “master” node. Although all nodes will participate in the Ceph cluster, the master node will be the node which we bootstrap ceph on. It’s also the node which will run the Ceph dashboard, and on which future upgrades will be processed. It doesn’t matter which node you pick, and the cluster itself will continue to operate in the event of a loss of the master node (although you won’t see the dashboard).
Install cephadm on master node
Run the following on the ==master== node:
MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
mkdir -p /etc/ceph
./cephadm bootstrap --mon-ip $MYIP
The process takes about 30 seconds, after which you’ll have an MVC (Minimum Viable Cluster), encompassing a single monitor and mgr instance on your chosen node. Here’s the complete output from a fresh install:
??? "Example output from a fresh cephadm bootstrap"
root@raphael:~# MYIP=`ip route get 1.1.1.1 | grep -oP 'src \K\S+'`
root@raphael:~# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
root@raphael:~# chmod +x cephadm
root@raphael:~# mkdir -p /etc/ceph
root@raphael:~# ./cephadm bootstrap --mon-ip $MYIP
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: bf3eff78-9e27-11ea-b40a-525400380101
INFO:cephadm:Verifying IP 192.168.38.101 port 3300 ...
INFO:cephadm:Verifying IP 192.168.38.101 port 6789 ...
INFO:cephadm:Mon IP 192.168.38.101 is in CIDR network 192.168.38.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Mgr epoch 5 is available
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host raphael...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Mgr epoch 13 is available
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:
URL: https://raphael:8443/
User: admin
Password: mid28k0yg5
INFO:cephadm:You can access the Ceph CLI with:
sudo ./cephadm shell --fsid bf3eff78-9e27-11ea-b40a-525400380101 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
INFO:cephadm:Bootstrap complete.
root@raphael:~#
Prepare other nodes
It’s now necessary to transfer the following files to your ==other== nodes, so that cephadm can add them to your cluster, and so that they’ll be able to mount the cephfs when we’re done:
| Path on master | Path on non-master |
|---|---|
| /etc/ceph/ceph.conf | /etc/ceph/ceph.conf |
| /etc/ceph/ceph.client.admin.keyring | /etc/ceph/ceph.client.admin.keyring |
| /etc/ceph/ceph.pub | /root/.ssh/authorized_keys (append to anything existing) |
Back on the ==master== node, run `ceph orch host add <node-name>` once for each other node you want to join to the cluster. You can validate the results by running `ceph orch host ls`.
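For example, assuming the example hostnames seen in the bootstrap output and fstab examples in this recipe (raphael as master, with donatello and leonardo still to be added), this would look something like:

ceph orch host add donatello
ceph orch host add leonardo
ceph orch host ls # confirm all 3 hosts are now listed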
Note that this gives cephadm’s SSH key root access to every node. But if an attacker could compromise a host, they’d have access to your SSH keys anyway (/root/.ssh), so worrying about cephadm seems a little barn-door-after-horses-bolted. If you take host-level security seriously, consider switching to Kubernetes :)
Add OSDs

Now, the best improvement since the days of ceph-deploy and manual disk preparation: on the ==master== node, run `ceph orch apply osd --all-available-devices`. This will identify any unloved (unpartitioned, unmounted) disks attached to each participating node, and configure these disks as OSDs.
Setup CephFS
On the ==master== node, create a cephfs volume in your cluster, by running `ceph fs volume create data`. Ceph will handle the necessary orchestration itself, creating the necessary pool, mds daemon, etc.

You can watch the progress by running `ceph fs ls` (to see that the fs is configured), and `ceph -s` to wait for `HEALTH_OK`.
Mount CephFS volume
On ==every== node, create a mountpoint for the data by running `mkdir /var/data`, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually mounted if there’s a network / boot delay getting access to the ceph volume:
mkdir /var/data
MYNODES="<node1>,<node2>,<node3>" # Add your own nodes here, comma-delimited
echo -e "
# Mount cephfs volume \n
${MYNODES}:/ /var/data ceph name=admin,noatime,_netdev 0 0" >> /etc/fstab
mount -a
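A quick sanity check that the CephFS volume actually mounted (exact output will vary by environment):

mount | grep ceph # should show a ceph mount on /var/data
df -h /var/data # should report the cluster's available space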
7.3 Serving
Sprinkle with tools
Although it’s possible to use `cephadm shell` to exec into a container with the necessary ceph tools, it’s more convenient to use the native CLI tools. To this end, on each node, run the following, which will install the appropriate apt repository, and install the latest ceph CLI tools:
curl -L https://download.ceph.com/keys/release.asc | sudo apt-key add -
cephadm add-repo --release octopus
cephadm install ceph-common
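With the native tools installed, you can sanity-check cluster health from any node:

ceph -s # look for HEALTH_OK, with all OSDs up and in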
Drool over dashboard
Ceph now includes a comprehensive dashboard, provided by the mgr daemon. The dashboard will be accessible at https://[IP of your ceph master node]:8443, but you’ll need to run `ceph dashboard ac-user-create <username> <password> administrator` first, to create an administrator account:
root@raphael:~# ceph dashboard ac-user-create batman supermansucks administrator
{"username": "batman", "password": "$2b$12$3HkjY85mav.dq3HHAZiWP.KkMiuoV2TURZFH.6WFf\
o/BPZCT/0gr.", "roles": ["administrator"], "name": null, "email": null, "lastUpdate"\
: 1590372281, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false\
}
root@raphael:~#
7.4 Summary
What have we achieved?
Created:
* [X] Persistent storage available to every node
* [X] Resiliency in the event of the failure of a single node
* [X] Beautiful dashboard
7.5 The easy, 5-minute install
I share (with [sponsors][github_sponsor] and [patrons][patreon]) a private “premix” GitHub repository, which includes an ansible playbook for deploying the entire Geek’s Cookbook stack, automatically. This means that members can create the entire environment with just a `git pull` and an `ansible-playbook deploy.yml`.
Here’s a screencast of the playbook in action. I sped up the boring parts; it actually takes ==5 min== (you can tell by the timestamps on the prompt).
[patreon]: https://www.patreon.com/bePatron?u=6982506
[github_sponsor]: https://github.com/sponsors/funkypenguin
7.6 Chef’s Notes
8 Shared Storage (GlusterFS)
While Docker Swarm is great for keeping containers running (and restarting those that fail), it does nothing for persistent storage. This means if you actually want your containers to keep any data persistent across restarts (hint: you do!), you need to provide shared storage to every docker node.
This recipe is deprecated. It didn’t work well in 2017, and it’s not likely to work any better now. It remains here as a reference. I now recommend the use of Ceph for shared storage instead. - 2019 Chef

8.1 Design
Why GlusterFS?
This GlusterFS recipe was my original design for shared storage, but I found it to be flawed, and I replaced it with a design which employs Ceph instead. This recipe is an alternate to the Ceph design, if you happen to prefer GlusterFS.
8.2 Ingredients
3 x Virtual Machines (configured earlier), each with:
* [X] CentOS/Fedora Atomic
* [X] At least 1GB RAM
* [X] At least 20GB disk space (but it’ll be tight)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (i.e., no WAN links)
* [ ] A second disk, or adequate space on the primary disk for a dedicated data partition
8.3 Preparation
Create Gluster “bricks”
To build our Gluster volume, we need 2 of the 3 VMs to each provide one “brick”. The bricks will be used to create the replicated volume. Assuming a replica count of 2 (i.e., 2 copies of the data are kept in gluster), our total number of bricks must be divisible by our replica count. (I.e., you can’t have 3 bricks if you want 2 replicas. You can have 4 though - we have to have a minimum of 3 swarm manager nodes for fault-tolerance, but only 2 of those nodes need to run as gluster servers.)
On each host, run a variation of the following to create your bricks, adjusted for the path to your disk:
(
echo o # Create a new empty DOS partition table
echo n # Add a new partition
echo p # Primary partition
echo 1 # Partition number
echo # First sector (Accept default: 1)
echo # Last sector (Accept default: varies)
echo w # Write changes
) | sudo fdisk /dev/vdb
mkfs.xfs -i size=512 /dev/vdb1
mkdir -p /var/no-direct-write-here/brick1
echo '' >> /etc/fstab
echo '# Mount /dev/vdb1 so that it can be used as a glusterfs volume' >> /etc/fstab
echo '/dev/vdb1 /var/no-direct-write-here/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
Create glusterfs container
Atomic doesn’t include the Gluster server components. This means we’ll have to run glusterd from within a container, with privileged access to the host. Although convoluted, I’ve come to prefer this design since it once again makes the OS “disposable”, moving all the config into containers and code.
Run the following on each host:
docker run \
-h glusterfs-server \
-v /etc/glusterfs:/etc/glusterfs:z \
-v /var/lib/glusterd:/var/lib/glusterd:z \
-v /var/log/glusterfs:/var/log/glusterfs:z \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v /var/no-direct-write-here/brick1:/var/no-direct-write-here/brick1 \
-d --privileged=true --net=host \
--restart=always \
--name="glusterfs-server" \
gluster/gluster-centos
Create trusted pool
On a single node (doesn’t matter which), run `docker exec -it glusterfs-server bash` to launch a shell inside the container.

From within the container, run `gluster peer probe <other host>`.
Example output:
[root@glusterfs-server /]# gluster peer probe ds1
peer probe: success.
[root@glusterfs-server /]#
Run gluster peer status
on both nodes to confirm that they’re properly connected to each other:
Example output:
[root@glusterfs-server /]# gluster peer status
Number of Peers: 1
Hostname: ds3
Uuid: 3e115ba9-6a4f-48dd-87d7-e843170ff499
State: Peer in Cluster (Connected)
[root@glusterfs-server /]#
Create gluster volume
Now we create a replicated volume out of our individual “bricks”.
Create the gluster volume by running:
gluster volume create gv0 replica 2 \
server1:/var/no-direct-write-here/brick1 \
server2:/var/no-direct-write-here/brick1
Example output:
[root@glusterfs-server /]# gluster volume create gv0 replica 2 ds1:/var/no-direct-write-here/brick1/gv0 ds3:/var/no-direct-write-here/brick1/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-server /]#
Start the volume by running `gluster volume start gv0`:
[root@glusterfs-server /]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-server /]#
The volume is only present on the host you’re shelled into, though. To add the other hosts to the volume, run `gluster peer probe <servername>`. Don’t probe a host from itself.

From one other host, run `docker exec -it glusterfs-server bash` to shell into the gluster-server container, and run `gluster peer probe <original server name>` to update the name of the host which started the volume.
Mount gluster volume
On the host (i.e., outside of the container - type `exit` if you’re still shelled in), create a mountpoint for the data by running `mkdir /var/data`, add an entry to fstab to ensure the volume is auto-mounted on boot, and ensure the volume is actually mounted if there’s a network / boot delay getting access to the gluster volume:
mkdir /var/data
MYHOST=`hostname -s`
echo '' >> /etc/fstab
echo '# Mount glusterfs volume' >> /etc/fstab
echo "$MYHOST:/gv0 /var/data glusterfs defaults,_netdev,context=\"system_u:object_r:svirt_sandbox_file_t:s0\" 0 0" >> /etc/fstab
mount -a
For some reason, my nodes won’t auto-mount this volume on boot. I even tried the trickery below, but they stubbornly refuse to automount:
echo -e "\n\n# Give GlusterFS 10s to start before mounting\nsleep 10s && mount -a" >> /etc/rc.local
systemctl enable rc-local.service
For non-gluster nodes, you’ll need to replace $MYHOST above with the name of one of the gluster hosts (I haven’t worked out how to make this fully HA yet)
8.4 Serving
After completing the above, you should have:
- [X] Persistent storage available to every node
- [X] Resiliency in the event of the failure of a single (gluster) node
8.5 Chef’s Notes
Future enhancements to this recipe include:
- Migration of shared storage from GlusterFS to Ceph (#2)
- Correct the fact that volumes don’t automount on boot (#3)
9 Keepalived
While having a self-healing, scalable docker swarm is great for availability and scalability, none of that is any good if nobody can connect to your cluster.
In order to provide seamless external access to clustered resources, regardless of which node they’re on and tolerant of node failure, you need to present a single IP to the world for external access.
Normally this is done using an HA load balancer, but since Docker Swarm already provides the load-balancing capabilities (routing mesh), all we need for seamless HA is a virtual IP which can be provided by more than one docker node.
This is accomplished with the use of keepalived on at least two nodes.
9.1 Ingredients
Already deployed:
* [X] At least 2 x swarm nodes
* [X] low-latency link (i.e., no WAN links)
New:
* [ ] At least 3 x IPv4 addresses (one for each node and one for the virtual IP)
9.2 Preparation
Enable IPVS module
On all nodes which will participate in keepalived, we need the “ip_vs” kernel module, in order to permit services to bind to non-local interface addresses.
Set this up on both the primary and secondary nodes, by running:
echo "modprobe ip_vs" >> /etc/rc.local
modprobe ip_vs
Setup nodes
Assuming your IPs are as follows:
- 192.168.4.1 : Primary
- 192.168.4.2 : Secondary
- 192.168.4.3 : Virtual
Run the following on the primary:
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --net=host \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
-e KEEPALIVED_VIRTUAL_IPS=192.168.4.3 \
-e KEEPALIVED_PRIORITY=200 \
osixia/keepalived:1.3.5
And on the secondary:
docker run -d --name keepalived --restart=always \
--cap-add=NET_ADMIN --net=host \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.4.1', '192.168.4.2']" \
-e KEEPALIVED_VIRTUAL_IPS=192.168.4.3 \
-e KEEPALIVED_PRIORITY=100 \
osixia/keepalived:1.3.5
9.3 Serving
That’s it. Each node will talk to the other via unicast (no need to un-firewall multicast addresses), and the node with the highest priority gets to be the master. When ingress traffic arrives on the master node via the VIP, docker’s routing mesh will deliver it to the appropriate docker node.
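To satisfy yourself that the VIP has actually been claimed, you might run something like the following on each node (using the illustrative addresses above) - only the current master should show the VIP bound to an interface:

ip addr | grep 192.168.4.3
docker logs keepalived # shows the master election happening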
9.4 Chef’s notes
- Some hosting platforms (OpenStack, for one) won’t allow you to simply “claim” a virtual IP. Each node is only able to receive traffic targeted to its unique IP, unless certain security controls are disabled by the cloud administrator. In this case, keepalived is not the right solution, and a platform-specific load-balancing solution should be used. In OpenStack, this is Neutron’s “Load Balancer As A Service” (LBAAS) component. AWS, GCP and Azure would likely include similar protections.
- More than 2 nodes can participate in keepalived. Simply ensure that each node has the appropriate priority set, and the node with the highest priority will become the master.
10 Docker Swarm Mode
For truly highly-available services with Docker containers, we need an orchestration system. Docker Swarm (as defined in Docker 1.13+) is the simplest way to achieve redundancy, such that a single docker host could be turned off, and none of our services will be interrupted.
10.1 Ingredients
Existing:
* [X] 3 x nodes (bare-metal or VMs), each with:
* A mainstream Linux OS (tested on either CentOS 7+ or Ubuntu 16.04+)
* At least 2GB RAM
* At least 20GB disk space (but it’ll be tight)
* [X] Connectivity to each other within the same subnet, and on a low-latency link (i.e., no WAN links)
10.2 Preparation
Bash auto-completion
Add some handy bash auto-completion for docker. Without this, you’ll get annoyed that you can’t autocomplete `docker stack deploy <blah> -c <blah.yml>` commands.
cd /etc/bash_completion.d/
curl -O https://raw.githubusercontent.com/docker/cli/b75596e1e4d5295ac69b9934d1bd8aff691a0de8/contrib/completion/bash/docker
Install some useful bash aliases on each host
cd ~
curl -O https://raw.githubusercontent.com/funkypenguin/geek-cookbook/master/examples/scripts/gcb-aliases.sh
echo 'source ~/gcb-aliases.sh' >> ~/.bash_profile
10.3 Serving
Release the swarm!
Now, to launch a swarm. Pick a target node, and run `docker swarm init`.
Yeah, that was it. Seriously. Now we have a 1-node swarm.
[root@ds1 ~]# docker swarm init
Swarm initialized: current node (b54vls3wf8xztwfz79nlkivt8) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-bsud7xnvhv4cicwi7l6c9s6l0 \
202.170.164.47:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@ds1 ~]#
Run `docker node ls` to confirm that you have a 1-node swarm:
[root@ds1 ~]# docker node ls
ID                          HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 * ds1.funkypenguin.co.nz Ready Active Leader
[root@ds1 ~]#
Note that when you run `docker swarm init` above, the CLI output gives you a command to run to join further nodes to your swarm. This command would join the nodes as workers (as opposed to managers). Workers can easily be promoted to managers (and demoted again), but since we know that we want our other two nodes to be managers too, it’s simpler just to add them to the swarm as managers immediately.
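For reference, should you ever need to change a node's role later, promotion and demotion are one-liners (the hostname here is illustrative):

docker node promote ds2.funkypenguin.co.nz # worker becomes manager
docker node demote ds2.funkypenguin.co.nz # manager becomes worker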
On the first swarm node, generate the necessary token to join another manager by running `docker swarm join-token manager`:
[root@ds1 ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-2orjbzjzjvm1bbo736xxmxzwaf4rffxwi0tu3zopal4xk4mja0-cfm24bq2zvfkcwujwlp5zqxta \
202.170.164.47:2377
[root@ds1 ~]#
Run the command provided on your other nodes to join them to the swarm as managers. After adding each node, the output of `docker node ls` (on either host) should reflect all the nodes:
[root@ds2 davidy]# docker node ls
ID                          HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS
b54vls3wf8xztwfz79nlkivt8 ds1.funkypenguin.co.nz Ready Active Leader
xmw49jt5a1j87a6ihul76gbgy * ds2.funkypenguin.co.nz Ready Active Reachable
[root@ds2 davidy]#
Setup automated cleanup
Docker swarm doesn’t do any cleanup of old images, so as you experiment with various stacks, and as updated containers are released upstream, you’ll soon find yourself losing gigabytes of disk space to old, unused images.
To address this, we’ll run the “meltwater/docker-cleanup” container on all of our nodes. The container will clean up unused images after 30 minutes.
First, create docker-cleanup.env (mine is under /var/data/config/docker-cleanup), and use it to exclude the container images we know we want to keep:
KEEP_IMAGES=traefik,keepalived,docker-mailserver
DEBUG=1
Then create a docker-compose.yml as follows:
version: "3"
services:
docker-cleanup:
image: meltwater/docker-cleanup:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker:/var/lib/docker
networks:
- internal
deploy:
mode: global
env_file: /var/data/config/docker-cleanup/docker-cleanup.env
networks:
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.0.0/24
Launch the cleanup stack by running `docker stack deploy docker-cleanup -c <path-to-docker-compose.yml>`.
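For example, assuming you saved the compose file alongside the env file under /var/data/config/docker-cleanup (as I do), that would be:

docker stack deploy docker-cleanup -c /var/data/config/docker-cleanup/docker-compose.yml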
Setup automatic updates
If your swarm runs for a long time, you might find yourself running older container images, after newer versions have been released. If you’re the sort of geek who wants to live on the edge, configure shepherd to auto-update your container images regularly.
Create /var/data/config/shepherd/shepherd.env as follows:
# Don't auto-update Plex or Emby, I might be watching a movie! (Customize this for the containers you _don't_ want to auto-update)
BLACKLIST_SERVICES="plex_plex emby_emby"
# Run every 24 hours. Note that SLEEP_TIME appears to be in seconds.
SLEEP_TIME=86400
Then create /var/data/config/shepherd/shepherd.yml as follows:
version: "3"
services:
shepherd-app:
image: mazzolino/shepherd
env_file : /var/data/config/shepherd/shepherd.env
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
labels:
- "traefik.enable=false"
deploy:
placement:
constraints: [node.role == manager]
Launch shepherd by running `docker stack deploy shepherd -c /var/data/config/shepherd/shepherd.yml`, and then just forget about it, comfortable in the knowledge that every day, Shepherd will check that your images are the latest available, and if not, will destroy and recreate the container on the latest available image.
Summary
After completing the above, you should have:
* [X] A 3-node docker swarm, with all nodes acting as managers
* [X] Automated cleanup of unused container images
* [X] Automated updates of running containers (via shepherd)
10.4 Chef’s Notes
11 Traefik
The platforms we plan to run on our cloud are generally web-based, and each listens on its own unique TCP port. When a container in a swarm exposes a port, then connecting to any swarm member on that port will result in your request being forwarded to the appropriate host running the container. (Docker calls this the swarm “routing mesh”.)
So we get a rudimentary load balancer built into swarm. We could stop there, just exposing a series of ports on our hosts, and making them HA using keepalived.
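If you'd like to see the routing mesh in action before we layer Traefik on top, a throwaway test service demonstrates it (the service name, port, and node hostname here are illustrative):

docker service create --name mesh-test --publish 8000:80 containous/whoami
curl http://ds1.funkypenguin.co.nz:8000 # any node answers, regardless of container placement
docker service rm mesh-test # clean up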
There are some gaps to this approach though:
- No consideration is given to HTTPS. Implementation would have to be done manually, per-container.
- No mechanism is provided for authentication beyond whatever the container itself provides. We may not want to expose every interface on every container to the world, especially if we are playing with tools or containers whose quality and origin are unknown.
To deal with these gaps, we need a front-end load-balancer, and in this design, that role is provided by Traefik.
![Traefik](../images/traefik.png)
11.1 Ingredients
Existing:
* [X] Docker swarm cluster with persistent shared storage
New:
* [ ] Access to update your DNS records for manual/automated LetsEncrypt DNS-01 validation, or ingress HTTP/HTTPS for HTTP-01 validation
11.2 Preparation
Prepare the host
The traefik container is aware of the other docker containers in the swarm, because it has access to the docker socket at /var/run/docker.sock. This allows traefik to dynamically configure itself based on the labels found on containers in the swarm, which is hugely useful. To make this functionality work on a SELinux-enabled CentOS7 host, we need to add custom SELinux policy.
The following is only necessary if you’re using SELinux!

Run the following to build and activate a policy permitting containers to access docker.sock:
mkdir ~/dockersock
cd ~/dockersock
curl -O https://raw.githubusercontent.com/dpw/selinux-dockersock/master/Makefile
curl -O https://raw.githubusercontent.com/dpw/selinux-dockersock/master/dockersock.te
make && semodule -i dockersock.pp
Prepare traefik.toml
While it’s possible to configure traefik via docker command arguments, I prefer to create a config file (`traefik.toml`). This allows me to change traefik’s behaviour by simply changing the file, and keeps my docker config simple.

Create `/var/data/traefik/traefik.toml` as follows:
checkNewVersion = true
defaultEntryPoints = ["http", "https"]
# This section enables LetsEncrypt automatic certificate generation / renewal
[acme]
email = "<your LetsEncrypt email address>"
storage = "acme.json" # or "traefik/acme/account" if using KV store
entryPoint = "https"
acmeLogging = true
onDemand = true
OnHostRule = true
# Request wildcard certificates per https://docs.traefik.io/configuration/acme/#wildcard-domains
[[acme.domains]]
main = "*.example.com"
sans = ["example.com"]
# Redirect all HTTP to HTTPS (why wouldn't you?)
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[web]
address = ":8080"
watch = true
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"
watch = true
swarmmode = true
Prepare the docker service config
“We’ll want an overlay network, independent of our traefik stack, so that we can attach/detach all our other stacks (including traefik) to the overlay network. This way, we can undeploy/redeploy the traefik stack without having to bring down every other stack first!” - voice of experience

Create `/var/data/config/traefik/traefik.yml` as follows:
version: "3.2"
# What is this?
#
# This stack exists solely to deploy the traefik_public overlay network, so that
# other stacks (including traefik-app) can attach to it
services:
  scratch:
    image: scratch
    deploy:
      replicas: 0
    networks:
      - public

networks:
  public:
    driver: overlay
    attachable: true
    ipam:
      config:
        - subnet: 172.16.200.0/24
Premix subscribers can fast-track this recipe with just a `git pull` and a `docker stack deploy` from the premix repo.
Create `/var/data/config/traefik/traefik-app.yml` as follows:
version: "3"
services:
traefik:
image: traefik
command: --web --docker --docker.swarmmode --docker.watch --docker.domain=exampl\
e.com --logLevel=DEBUG
# Note below that we use host mode to avoid source nat being applied to our ingr\
ess HTTP/HTTPS sessions
# Without host mode, all inbound sessions would have the source IP of the swarm \
nodes, rather than the
# original source IP, which would impact logging. If you don't care about this, \
you can expose ports the
# "minimal" way instead
ports:
- target: 80
published: 80
protocol: tcp
mode: host
- target: 443
published: 443
protocol: tcp
mode: host
- target: 8080
published: 8080
protocol: tcp
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /var/data/config/traefik:/etc/traefik
- /var/data/traefik/traefik.log:/traefik.log
- /var/data/traefik/acme.json:/acme.json
networks:
- traefik_public
# Global mode makes an instance of traefik listen on _every_ node, so that regar\
dless of which
# node the request arrives on, it'll be forwarded to the correct backend service.
deploy:
labels:
- "traefik.enable=false"
mode: global
placement:
constraints: [node.role == manager]
restart_policy:
condition: on-failure
networks:
traefik_public:
external: true
Docker won’t start a service with a bind-mount to a non-existent file, so prepare an empty acme.json (with the appropriate permissions) by running:
touch /var/data/traefik/acme.json
chmod 600 /var/data/traefik/acme.json
Note that you must set acme.json’s permissions to owner-readable-only, else the container will fail to start with an ID-10T error!

Traefik will populate acme.json itself when it runs, but it needs to exist before the container will start (Chicken, meet egg.)
11.3 Serving
Launch
First, launch the traefik stack, which will do nothing other than create an overlay network, by running `docker stack deploy traefik -c /var/data/config/traefik/traefik.yml`:
[root@kvm ~]# docker stack deploy traefik -c traefik.yml
Creating network traefik_public
Creating service traefik_scratch
[root@kvm ~]#
Now deploy the traefik application itself (which will attach to the overlay network) by running `docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml`:
[root@kvm ~]# docker stack deploy traefik-app -c traefik-app.yml
Creating service traefik-app_app
[root@kvm ~]#
Confirm traefik is running with `docker stack ps traefik-app`:
[root@kvm ~]# docker stack ps traefik-app
ID            NAME                                       IMAGE           NODE                    DESIRED STATE  CURRENT STATE           ERROR  PORTS
74uipz4sgasm  traefik-app_app.t4vcm8siwc9s1xj4c2o4orhtx  traefik:alpine  kvm.funkypenguin.co.nz  Running        Running 33 seconds ago         *:443->443/tcp,*:80->80/tcp
[root@kvm ~]#
Check Traefik Dashboard
You should now be able to access your traefik instance on http://<node IP>:8080 - It’ll look a little lonely currently (below), but we’ll populate it as we add recipes :)
![Traefik dashboard post-launch](../images/traefik-post-launch.png)
Summary
We’ve achieved:
* [X] An overlay network to permit traefik to access all future stacks we deploy
* [X] Frontend proxy which will dynamically configure itself for new backend containers
* [X] Automatic SSL support for all proxied resources
11.4 Chef’s Notes
- Did you notice how no authentication was required to view the Traefik dashboard? Eek! We’ll tackle that in the next section, regarding Traefik Forward Authentication!
12 Traefik Forward Auth
Now that we have Traefik deployed, automatically exposing SSL access to our Docker Swarm services using LetsEncrypt wildcard certificates, let’s pause to consider that we may not want some services exposed directly to the internet…
..Wait, why not? Well, Traefik doesn’t provide any form of authentication, it simply secures the transmission of the service between Docker Swarm and the end user. If you were to deploy a service with no native security (Radarr or Sonarr come to mind), then anybody would be able to use it! Even services which do have a layer of authentication might not be safe to expose publicly - open source projects are often maintained by enthusiasts who happily add extra features, but just pay lip service to security, on the basis that “it’s the user’s problem to secure it in their own network”.
To give us confidence that we can access our services, but BadGuys(tm) cannot, we’ll deploy a layer of authentication in front of Traefik, using Forward Authentication. You can use your own KeyCloak instance for authentication, but to lower the barrier to entry, this recipe will assume you’re authenticating against your own Google account.
12.1 Ingredients
Existing:
* [X] Docker swarm cluster with persistent shared storage
* [X] Traefik configured per design
New:
* [ ] Client ID and secret from an OpenID-Connect provider (Google, KeyCloak, Microsoft, etc..)
12.2 Preparation
Obtain OAuth credentials
This recipe will demonstrate using Google OAuth for traefik forward authentication, but it’s also possible to use a self-hosted KeyCloak instance - see the KeyCloak OIDC Provider recipe for more details!

Log into https://console.developers.google.com/, create a new project, then search for and select “Credentials” in the search bar.
Fill out the “OAuth Consent Screen” tab, and then click “Create Credentials” > “OAuth client ID”. Select “Web Application”, fill in the name of your app, skip “Authorized JavaScript origins” and fill “Authorized redirect URIs” with either all the domains you will allow authentication from, appended with the url-path (e.g. https://radarr.example.com/_oauth, https://sonarr.example.com/_oauth, etc), or, if you don’t like frustration, use an “auth host” URL instead, like “https://auth.example.com/_oauth” (see below for details).
Store your client ID and secret safely - you’ll need them for the next step.
Prepare environment
Create `/var/data/config/traefik/traefik-forward-auth.env` as follows:
CLIENT_ID=<your client id>
CLIENT_SECRET=<your client secret>
OIDC_ISSUER=https://accounts.google.com
SECRET=<a random string, make it up>
# uncomment this to use a single auth host instead of individual redirect_uris (recommended but advanced)
#AUTH_HOST=auth.example.com
COOKIE_DOMAINS=example.com
Prepare the docker service config
This is a small container, so you can simply add the following content to the existing `traefik-app.yml` deployed in the previous Traefik recipe:
  traefik-forward-auth:
    image: funkypenguin/traefik-forward-auth
    env_file: /var/data/config/traefik/traefik-forward-auth.env
    networks:
      - traefik_public
    # Uncomment these lines if you're using auth host mode
    #deploy:
    #  labels:
    #    - traefik.port=4181
    #    - traefik.frontend.rule=Host:auth.example.com
    #    - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
    #    - traefik.frontend.auth.forward.trustForwardHeader=true
If you’re not confident that forward authentication is working, add a simple “whoami” test container, to help debug traefik forward auth, before attempting to add it to a more complex container.
  # This simply validates that traefik forward authentication is working
  whoami:
    image: containous/whoami
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:whoami.example.com
        - traefik.port=80
        - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
        - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
        - traefik.frontend.auth.forward.trustForwardHeader=true
Premix subscribers can fast-track this recipe with just a `git pull` and a `docker stack deploy` from the premix repo.
12.3 Serving
Launch
Redeploy traefik with `docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml`, to launch the traefik-forward-auth container.
Test
Browse to https://whoami.example.com (obviously, customized for your domain and having created a DNS record), and all going according to plan, you should be redirected to a Google login. Once successfully logged in, you’ll be directed to the basic whoami page.
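You can also test from the CLI - an unauthenticated request should be bounced towards Google (the domain is illustrative, and the exact status code may vary by version):

curl -I https://whoami.example.com
# expect a 307 redirect, with a Location: header pointing at accounts.google.com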
12.4 Summary
What have we achieved? By adding an additional three simple labels to any service, we can secure any service behind our choice of OAuth provider, with minimal processing / handling overhead.
Created:
* [X] Traefik-forward-auth configured to authenticate against an OIDC provider
12.5 Chef’s Notes
- Traefik forward auth replaces the use of oauth_proxy containers found in some of the existing recipes
- @thomaseddon’s original version of traefik-forward-auth only works with Google currently, but I’ve created a fork of a fork, which implements generic OIDC providers.
- I reviewed several implementations of forward authenticators for Traefik, but found most to be rather heavy-handed, or specific to a single auth provider. @thomaseddon’s go-based docker image is 7MB in size, and with the generic OIDC patch (above), it can be extended to work with any OIDC provider.
- Can you authenticate against GitHub? Not natively, but you can federate GitHub into KeyCloak, and then use KeyCloak as the OIDC provider.
13 Using Traefik Forward Auth with KeyCloak
While the Traefik Forward Auth recipe demonstrated a quick way to protect a set of explicitly-specified URLs using OIDC credentials from a Google account, this recipe will illustrate how to use your own KeyCloak instance to secure any URLs within your DNS domain.
13.1 Ingredients
Existing:
* [X] KeyCloak recipe deployed successfully, with a local user and an OIDC client
New:
* [ ] DNS entry for your auth host (“auth.yourdomain.com” is a good choice), pointed to your keepalived IP
13.2 Preparation
What is AuthHost mode
Under normal OIDC auth, you have to tell your auth provider which URLs it may redirect an authenticated user back to, post-authentication. This is a security feature of the OIDC spec, preventing a malicious landing page from capturing your session and using it to impersonate you. When you’re securing many URLs though, explicitly listing them can be a PITA.
@thomaseddon’s traefik-forward-auth includes an ingenious mechanism to simulate an “auth host” in your OIDC authentication, so that you can protect an unlimited amount of DNS names (with a common domain suffix), without having to manually maintain a list.
How does it work?
Say you’re protecting radarr.example.com. When you first browse to https://radarr.example.com, Traefik forwards your session to traefik-forward-auth, to be authenticated. Traefik-forward-auth redirects you to your OIDC provider’s login (KeyCloak, in this case), but instructs the OIDC provider to redirect a successfully authenticated session back to https://auth.example.com/_oauth, rather than to https://radarr.example.com/_oauth.
When you successfully authenticate against the OIDC provider, you are redirected to the “redirect_uri” of https://auth.example.com. Again, your request hits Traefik, which forwards the session to traefik-forward-auth, which knows that you’ve just been authenticated (cookies have a role to play here). Traefik-forward-auth also knows the URL of your original request (thanks to the X-Forwarded-Whatever header). Traefik-forward-auth redirects you to your original destination, and everybody is happy.
This clever workaround only works under 2 conditions:
- Your “auth host” has the same domain name as the hosts you’re protecting (i.e., auth.example.com protecting radarr.example.com)
- You explicitly tell traefik-forward-auth to use a cookie authenticating your whole domain (i.e. example.com)
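Concretely, for the example.com scenario above, the two values pair up like this (illustrative values only):

AUTH_HOST=auth.example.com
COOKIE_DOMAIN=example.com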
Setup environment
Create /var/data/config/traefik/traefik-forward-auth.env as follows (change "master" if you created a different realm):
CLIENT_ID=<your keycloak client name>
CLIENT_SECRET=<your keycloak client secret>
OIDC_ISSUER=https://<your keycloak URL>/auth/realms/master
SECRET=<a random string to secure your cookie>
AUTH_HOST=<the FQDN to use for your auth host>
COOKIE_DOMAIN=<the root FQDN of your domain>
Prepare the docker service config
This is a small container, so you can simply add the following content to the existing traefik-app.yml deployed in the previous Traefik recipe:
traefik-forward-auth:
  image: funkypenguin/traefik-forward-auth
  env_file: /var/data/config/traefik/traefik-forward-auth.env
  networks:
    - traefik_public
  deploy:
    labels:
      - traefik.port=4181
      - traefik.frontend.rule=Host:auth.example.com
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.trustForwardHeader=true
If you’re not confident that forward authentication is working, add a simple “whoami” test container, to help debug traefik forward auth, before attempting to add it to a more complex container.
# This simply validates that traefik forward authentication is working
whoami:
  image: containous/whoami
  networks:
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:whoami.example.com
      - traefik.port=80
      - traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
      - traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
      - traefik.frontend.auth.forward.trustForwardHeader=true
13.3 Serving
Launch
Redeploy traefik with docker stack deploy traefik-app -c /var/data/config/traefik/traefik-app.yml, to launch the traefik-forward-auth container.
Test
Browse to https://whoami.example.com (customized for your domain, and assuming you've created the DNS record), and all going according to plan, you'll be redirected to a KeyCloak login. Once successfully logged in, you'll be directed to the basic whoami page.
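You can also sanity-check the redirect without a browser - an unauthenticated request should return an HTTP redirect pointing at your KeyCloak instance (hostname is a placeholder):

curl -I https://whoami.example.com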
Protect services
To protect any other service, ensure the service itself is exposed by Traefik (if you were previously using an oauth_proxy for this, you may have to migrate some labels from the oauth_proxy service to the service itself). Then add the following 3 labels:
- traefik.frontend.auth.forward.address=http://traefik-forward-auth:4181
- traefik.frontend.auth.forward.authResponseHeaders=X-Forwarded-User
- traefik.frontend.auth.forward.trustForwardHeader=true
And re-deploy your services :)
13.4 Summary
What have we achieved? By adding three additional simple labels to any service, we can secure any service behind our KeyCloak OIDC provider, with minimal processing / handling overhead.
Created:
- [X] Traefik-forward-auth configured to authenticate against KeyCloak
13.5 Chef’s Notes
- KeyCloak is very powerful. You can add 2FA and all other clever things outside of the scope of this simple recipe ;)
14 Create registry mirror
Although we now have shared storage for our persistent container data, our docker nodes don’t share any other docker data, such as container images. This results in an inefficiency - every node which participates in the swarm will, at some point, need the docker image for every container deployed in the swarm.
When dealing with large containers (looking at you, GitLab!), this can result in several gigabytes of wasted bandwidth per node, and long delays when restarting containers on an alternate node. (It also wastes disk space on each node, but we'll get to that in the next section.)
The solution is to run an official Docker registry container as a "pull-through" cache, or "registry mirror". By using our persistent storage for the registry cache, we can ensure we keep a single copy of all the containers we've pulled at least once. After the first pull, any subsequent pulls from our nodes will use the cached version from our registry mirror. As a result, services are available more quickly when restarting container nodes, and we can be more aggressive about cleaning up unused containers on our nodes (more on this later).
The registry mirror runs as a swarm stack, using a simple docker-compose.yml. Customize your mirror FQDN below, so that Traefik will generate the appropriate LetsEncrypt certificates for it, and make it available via HTTPS.
14.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
14.2 Preparation
Create /var/data/config/registry/registry.yml as follows:
version: "3"
services:
registry-mirror:
image: registry:2
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:<your mirror FQDN>
- traefik.docker.network=traefik_public
- traefik.port=5000
ports:
- 5000:5000
volumes:
- /var/data/registry/registry-mirror-data:/var/lib/registry
- /var/data/registry/registry-mirror-config.yml:/etc/docker/registry/config.yml
networks:
traefik_public:
external: true
Create /var/data/registry/registry-mirror-config.yml as follows:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io
14.3 Serving
Launch registry stack
Launch the registry stack by running docker stack deploy registry -c <path-to-docker-compose.yml>
Enable registry mirror and experimental features
To tell docker to use the registry mirror, and (while we're here) to enable the ability to watch the logs of any service from any manager node (an experimental feature in the current Atomic docker build), edit /etc/docker-latest/daemon.json on each node, and change it from:
{
"log-driver": "journald",
"signature-verification": false
}
To:
{
"log-driver": "journald",
"signature-verification": false,
"experimental": true,
"registry-mirrors": ["https://<your registry mirror FQDN>"]
}
Then restart docker by running:
systemctl restart docker-latest
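To confirm the mirror is actually in use, something like the following should do the trick (a sketch - substitute your mirror's FQDN):

# Docker should now list your mirror
docker info | grep -A1 "Registry Mirrors"

# After any node pulls an image, it should appear in the mirror's catalog
curl https://<your registry mirror FQDN>/v2/_catalog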
14.4 Chef's Notes
II Chef’s Favorites (Docker)
The following recipes are the chef’s current favorites - these are recipes actively in use and updated by @funkypenguin
15 AutoPirate
Once the cutting edge of the "internet" (pre-World-Wide-Web and Mosaic days), Usenet is now a murky, geeky alternative to torrents for file-sharing. However, it's cool in a geeky way, especially if you're into having a fully automated media platform.
A good starter for the usenet scene is https://www.reddit.com/r/usenet/. Because it's so damn complicated, a host of tools exist to automate the process of finding, downloading, and managing content. The tools included in this recipe are as follows:
![](../images/autopirate.png)
This recipe presents a method to combine these tools into a single swarm deployment, and make them available securely.
15.1 Menu
Tools included in the AutoPirate stack are:
- SABnzbd : downloads data from usenet servers based on .nzb definitions
- NZBGet : downloads data from usenet servers based on .nzb definitions, but written in C++ and designed with performance in mind to achieve maximum download speed by using very little system resources (this is a popular alternative to SABnzbd)
- RTorrent is a CLI-based torrent client, which when combined with ruTorrent becomes a powerful and fully browser-managed torrent client. (Yes, it's not Usenet, but Sonarr/Radarr will fulfill your watchlist using either Usenet or torrents, so it's worth including)
- NZBHydra : acts as a "meta-indexer", so that your downloading tools (radarr, sonarr, etc) only need to be set up against a single indexer. Also produces interesting stats on indexers, which helps when evaluating which indexers are performing well.
- NZBHydra2 : is a high-performance rewrite of the original NZBHydra, with extra features. While still in beta, NZBHydra2 will eventually supersede NZBHydra
- Sonarr : finds, downloads and manages TV shows
- Radarr : finds, downloads and manages movies
- Mylar : finds, downloads and manages comic books
- Headphones : finds, downloads and manages music
- Lazy Librarian : finds, downloads and manages ebooks
- Ombi : provides an interface to request additions to a Plex/Emby library using the above tools
- Jackett : Provides a local, caching, API-based interface to torrent trackers, simplifying the way your tools search for torrents.
Since this recipe is so long, and so many of the tools are optional to the final result (i.e., if you’re not interested in comics, you won’t want Mylar), I’ve described each individual tool on its own sub-recipe page (below), even though most of them are deployed very similarly.
15.2 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- Access to NZB indexers and Usenet servers
- DNS entries configured for each of the NZB tools in this recipe that you want to use
15.3 Preparation
Setup data locations
We'll need a unique directory for each tool in the stack, bind-mounted into our containers, so create them upfront, in /var/data/autopirate:
mkdir /var/data/autopirate
cd /var/data/autopirate
mkdir -p {lazylibrarian,mylar,ombi,sonarr,radarr,headphones,plexpy,nzbhydra,sabnzbd,\
nzbget,rtorrent,jackett}
Create a directory for the storage of your downloaded media, i.e., something like:
mkdir /var/data/media
Create a user to “own” the above directories, and note the uid and gid of the created user. You’ll need to specify the UID/GID in the environment variables passed to the container (in the example below, I used 4242 - twice the meaning of life).
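A minimal sketch of the above, assuming UID/GID 4242 (the username "autopirate" is arbitrary):

useradd -u 4242 -U -M -s /sbin/nologin autopirate
chown -R 4242:4242 /var/data/autopirate /var/data/media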
Secure public access
What you’ll quickly notice about this recipe is that every web interface is protected by an OAuth proxy.
Why? Because these tools are developed by a handful of volunteer developers who are focused on adding features, not necessarily implementing robust security. Most users wouldn’t expose these tools directly to the internet, so the tools have rudimentary (if any) access control.
To mitigate the risk associated with public exposure of these tools (you’re on your smartphone and you want to add a movie to your watchlist, what do you do, hotshot?), in order to gain access to each tool you’ll first need to authenticate against your given OAuth provider.
This is tedious, but you only have to do it once. Each tool (Sonarr, Radarr, etc) to be protected by an OAuth proxy requires unique configuration. I use GitHub to provide my OAuth, giving each tool a unique logo while I'm at it (make up your own random string for OAUTH2_PROXY_COOKIE_SECRET).
For each tool, create /var/data/autopirate/<tool>.env, and set the following:
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
PUID=4242
PGID=4242
At a minimum, create /var/data/autopirate/authenticated-emails.txt, containing at least the email address associated with your OAuth provider. If you want to grant other users access to a specific tool, you'd create a unique authenticated-emails-<tool>.txt for that tool, containing both your own address and any addresses to be granted tool-specific access.
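The file itself is just a list of permitted email addresses, one per line - for example (addresses are placeholders):

alice@example.com
bob@example.com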
Setup components
Stack basics
Start with a swarm config file in docker-compose syntax, like this:
version: '3'
services:
And end with a stanza like this:
networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.11.0/24
Assemble the tools..
Now work your way through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- End (launch the stack)
16 SABnzbd
16.1 Introduction
SABnzbd is the workhorse of the stack. It takes .nzb files as input (manually or from other autopirate stack tools), then connects to your chosen Usenet provider, downloads all the individual binaries referenced by the .nzb, and then tests/repairs/combines/uncompresses them all into the final result - media files.
16.2 Inclusion into AutoPirate
To include SABnzbd in your AutoPirate stack (the only reason you wouldn't use SABnzbd would be if you were using NZBGet instead), include the following in your autopirate.yml stack definition file:
sabnzbd:
  image: linuxserver/sabnzbd:latest
  env_file: /var/data/config/autopirate/sabnzbd.env
  volumes:
    - /var/data/autopirate/sabnzbd:/config
    - /var/data/media:/media
  networks:
    - internal

sabnzbd_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/sabnzbd.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:sabnzbd.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://sabnzbd:8080
    -redirect-url=https://sabnzbd.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
You'll need to edit sabnzbd.ini (only created after your first launch), and replace the value of the host_whitelist configuration (it's comma-separated) with the name of your service within the swarm definition, as well as your FQDN as accessed via traefik. For example, mine simply reads:
host_whitelist = sabnzbd.funkypenguin.co.nz, sabnzbd
16.3 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd (this page)
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
16.4 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
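As a purely hypothetical illustration: to add SABnzbd as a download client within Radarr's UI, you'd specify the swarm service name and internal port (matching the upstream value in the oauth2_proxy command above):

Host: sabnzbd
Port: 8080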
17 NZBGet
17.1 Introduction
NZBGet performs the same function as SABnzbd (downloading content from Usenet servers), but it’s lightweight and fast(er), written in C++ (as opposed to Python).
17.2 Inclusion into AutoPirate
To include NZBGet in your AutoPirate stack (the only reason you wouldn't use NZBGet would be if you were using SABnzbd instead), include the following in your autopirate.yml stack definition file:
nzbget:
  image: linuxserver/nzbget
  env_file: /var/data/config/autopirate/nzbget.env
  volumes:
    - /var/data/autopirate/nzbget:/config
    - /var/data/media:/data
  networks:
    - internal

nzbget_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/nzbget.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:nzbget.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://nzbget:6789
    -redirect-url=https://nzbget.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
Since the oauth_proxy is already handling authentication, you can disable NZBGet's own authentication by setting ControlPassword= (i.e., an empty value) in nzbget.conf.
17.3 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet (this page)
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
17.4 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
18 RTorrent / ruTorrent
RTorrent is a popular CLI-based bittorrent client, and ruTorrent is a powerful web interface for rtorrent.
18.1 Choose incoming port
When using a torrent client from behind NAT (which swarm, by nature, is), you typically need to set a static port for inbound torrent communications. In the example below, I’ve set the port to 36258. You’ll need to configure /var/data/autopirate/rtorrent/rtorrent/rtorrent.rc with the equivalent port.
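The relevant rtorrent.rc lines would look something like this (a sketch, assuming recent rtorrent syntax; older versions use port_range / port_random instead):

network.port_range.set = 36258-36258
network.port_random.set = no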
18.2 Inclusion into AutoPirate
To include ruTorrent in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
rtorrent:
  image: linuxserver/rutorrent
  env_file: /var/data/config/autopirate/rtorrent.env
  ports:
    - 36258:36258
  volumes:
    - /var/data/media/:/media
    - /var/data/autopirate/rtorrent:/config
  networks:
    - internal

rtorrent_proxy:
  image: skippy/oauth2_proxy
  env_file: /var/data/config/autopirate/rtorrent.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:rtorrent.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://rtorrent:80
    -redirect-url=https://rtorrent.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
18.3 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent (this page)
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
18.4 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
19 Sonarr
Sonarr is a tool for finding, downloading and managing your TV series.
19.1 Inclusion into AutoPirate
To include Sonarr in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
sonarr:
  image: linuxserver/sonarr:latest
  env_file: /var/data/config/autopirate/sonarr.env
  volumes:
    - /var/data/autopirate/sonarr:/config
    - /var/data/media:/media
  networks:
    - internal

sonarr_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/sonarr.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:sonarr.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://sonarr:8989
    -redirect-url=https://sonarr.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
19.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr (this page)
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
19.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
20 Radarr
Radarr is a tool for finding, downloading and managing movies. Features include:
- Adding new movies with lots of information, such as trailers, ratings, etc.
- Can watch for better-quality versions of the movies you have, and do an automatic upgrade (e.g. from DVD to Blu-Ray)
- Automatic failed download handling will try another release if one fails
- Manual search so you can pick any release or to see why a release was not downloaded automatically
- Full integration with SABnzbd and NZBGet
- Automatically searching for releases as well as RSS Sync
- Automatically importing downloaded movies
- Recognizing Special Editions, Director’s Cut, etc.
- Identifying releases with hardcoded subs
- Importing movies from various online sources, such as IMDb Watchlists (A complete list can be found here)
- Full integration with Kodi, Plex (notification, library update)
- And a beautiful UI
- Importing Metadata such as trailers or subtitles
20.1 Inclusion into AutoPirate
To include Radarr in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
radarr:
  image: linuxserver/radarr:latest
  env_file: /var/data/config/autopirate/radarr.env
  volumes:
    - /var/data/autopirate/radarr:/config
    - /var/data/media:/media
  networks:
    - internal

radarr_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/radarr.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:radarr.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://radarr:7878
    -redirect-url=https://radarr.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
20.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr (this page)
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
20.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
21 Mylar
Mylar is a tool for downloading and managing digital comic books.
21.1 Inclusion into AutoPirate
To include Mylar in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
mylar:
  image: linuxserver/mylar:latest
  env_file: /var/data/config/autopirate/mylar.env
  volumes:
    - /var/data/autopirate/mylar:/config
    - /var/data/media:/media
  networks:
    - internal

mylar_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/mylar.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:mylar.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://mylar:8090
    -redirect-url=https://mylar.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
21.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar (this page)
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
21.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
- If you intend to configure Mylar to perform its own NZB searches and push the hits to a downloader such as SABnzbd, then in addition to configuring the connection to SAB with host, port and API key, you will need to set the host_return parameter to the fully-qualified Mylar address (e.g. http://mylar:8090). This provides the downloader with the link necessary to initiate the download. This parameter is not presented in the user interface, so the config file ($MYLAR_HOME/config.ini) will need to be manually updated. The parameter can be found under the [Interface] section of the file. (Details)
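In other words, the relevant section of $MYLAR_HOME/config.ini ends up looking like this:

[Interface]
host_return = http://mylar:8090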
22 LazyLibrarian
LazyLibrarian is a tool to follow authors and grab metadata for all your digital reading needs. It uses a combination of Goodreads, LibraryThing, and optionally GoogleBooks as sources for author and book information. Features include:
- Find authors and add them to the database
- List all books of an author and mark ebooks or audiobooks as ‘wanted’.
- When processing the downloaded books it will save a cover picture (if available) and save all metadata into metadata.opf next to the bookfile (calibre compatible format)
- AutoAdd feature for book-management tools like Calibre (which must have books in a flattened directory structure), or use Calibre to import your books into an existing Calibre library
- LazyLibrarian can also be used to search for and download magazines, and monitor for new issues
22.1 Inclusion into AutoPirate
To include LazyLibrarian in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
lazylibrarian:
  image: linuxserver/lazylibrarian:latest
  env_file: /var/data/config/autopirate/lazylibrarian.env
  volumes:
    - /var/data/autopirate/lazylibrarian:/config
    - /var/data/media:/media
  networks:
    - internal

lazylibrarian_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/lazylibrarian.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:lazylibrarian.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://lazylibrarian:5299
    -redirect-url=https://lazylibrarian.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt

calibre-server:
  image: regueiro/calibre-server
  volumes:
    - /var/data/media/Ebooks/calibre/:/opt/calibre/library
  networks:
    - internal
22.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian (this page)
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
22.3 Chef’s Notes
- The calibre-server container runs alongside the LazyLibrarian (LL) container, so that LL can automatically add a book to Calibre using the calibre-server interface. The calibre library can then be properly viewed using the calibre-web recipe.
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
This is not a complete recipe - it's a component of the autopirate "uber-recipe", but has been split into its own page to reduce complexity.
23 Headphones
Headphones is an automated music downloader for NZB and Torrent, written in Python. It supports SABnzbd, NZBget, Transmission, Torrent, Deluge and Blackhole.
23.1 Inclusion into AutoPirate
To include Headphones in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
headphones:
  image: linuxserver/headphones:latest
  env_file: /var/data/config/autopirate/headphones.env
  volumes:
    - /var/data/autopirate/headphones:/config
    - /var/data/media:/media
  networks:
    - internal

headphones_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/headphones.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:headphones.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://headphones:8181
    -redirect-url=https://headphones.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
23.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones (this page)
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
23.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
This is not a complete recipe - it's a component of the autopirate "uber-recipe", but has been split into its own page to reduce complexity.
24 Lidarr
Lidarr is an automated music downloader for NZB and Torrent. It performs the same function as Headphones, but is written using the same(ish) codebase as Radarr and Sonarr. It’s blazingly fast, and includes beautiful album/artist art. Lidarr supports SABnzbd, NZBGet, Transmission, Torrent, Deluge and Blackhole (just like Sonarr / Radarr)
24.1 Inclusion into AutoPirate
To include Lidarr in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
lidarr:
  image: linuxserver/lidarr:latest
  env_file: /var/data/config/autopirate/lidarr.env
  volumes:
    - /var/data/autopirate/lidarr:/config
    - /var/data/media:/media
  networks:
    - internal

lidarr_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/lidarr.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:lidarr.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://lidarr:8686
    -redirect-url=https://lidarr.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
24.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr (this page)
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
24.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
- The addition of the Lidarr recipe was contributed by our very own @gpulido in Discord (http://chat.funkypenguin.co.nz) - Thanks Gabriel!
25 NZBHydra
NZBHydra is a meta search for NZB indexers. It provides easy access to a number of raw and newznab-based indexers. You can search all your indexers from one place, and use it as an indexer source for tools like Sonarr or CouchPotato. Features include:
- Search by IMDB, TMDB, TVDB, TVRage and TVMaze ID (including season and episode) and filter by age and size. If an ID is not supported by an indexer, NZBHydra attempts to convert it (e.g. TMDB to IMDB)
- Query generation, meaning when you search for a movie using e.g. an IMDB ID a query will be generated for raw indexers. Searching for a series season 1 episode 2 will also generate queries for raw indexers, like s01e02 and 1x02
- Grouping of results with the same title and of duplicate results, accounting for result posting time, size, group and poster. By default only one of the duplicates is shown. You can provide an indexer score to influence which one that might be
- Compatible with Sonarr, CP, NZB 360, SickBeard, Mylar and Lazy Librarian (and others)
- Statistics on indexers (average response time, share of results, access errors), searches and downloads per time of day and day of week, NZB download history and search history (both via internal GUI and API)
25.1 Inclusion into AutoPirate
To include NZBHydra in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
nzbhydra:
  image: linuxserver/hydra:latest
  env_file: /var/data/config/autopirate/nzbhydra.env
  volumes:
    - /var/data/autopirate/nzbhydra:/config
  networks:
    - internal

nzbhydra_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/nzbhydra.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:nzbhydra.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://nzbhydra:5075
    -redirect-url=https://nzbhydra.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
25.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra (this page)
- NZBHydra2
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
25.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
26 NZBHydra 2
NZBHydra 2 is a meta search for NZB indexers. It provides easy access to a number of raw and newznab based indexers. You can search all your indexers from one place and use it as an indexer source for tools like Sonarr, Radarr or CouchPotato.
NZBHydra 2 is a complete rewrite of NZBHydra (1). It's currently in beta; it works mostly fine, but some functions might not be completely done, and incompatibilities with some tools might still exist. You might want to run both in parallel for migration / testing purposes, but ultimately you'll probably want to switch over to NZBHydra 2 exclusively.
Features include:
- Searches Anizb, BinSearch, NZBIndex and any newznab compatible indexers. Merges all results, filters them by a number of configurable restrictions, recognizes duplicates and returns them all in one place
- Add results to NZBGet or SABnzbd
- Support for all relevant media IDs (IMDB, TMDB, TVDB, TVRage, TVMaze) and conversion between them
- Query generation, meaning a query will be generated if only a media ID is provided in the search and the indexer doesn’t support the ID or if no results were found
- Compatible with Sonarr, Radarr, NZBGet, SABnzbd, nzb360, CouchPotato, Mylar, Lazy Librarian, Sick Beard, Jackett/Cardigann, Watcher, etc.
- Search and download history and extensive stats. E.g. indexer response times, download shares, NZB age, etc.
- Authentication and multi-user support
- Automatic update of NZB download status by querying configured downloaders
- RSS support with configurable cache times
- Torrent support (although I prefer Jackett for this):
  * For GUI searches, allowing you to download torrents to a blackhole folder
  * A separate Torznab-compatible endpoint for API requests, allowing you to merge multiple trackers
- Extensive configurability
- Migration of database and settings from v1
26.1 Inclusion into AutoPirate
To include NZBHydra2 in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
nzbhydra2:
  image: linuxserver/hydra2:latest
  env_file: /var/data/config/autopirate/nzbhydra2.env
  volumes:
    - /var/data/autopirate/nzbhydra2:/config
  networks:
    - internal

nzbhydra2_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/nzbhydra2.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:nzbhydra2.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://nzbhydra2:5076
    -redirect-url=https://nzbhydra2.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
26.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2 (this page)
- Ombi
- Jackett
- Heimdall
- End (launch the stack)
26.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra2, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
- Note that NZBHydra2 can co-exist with NZBHydra (1), but if you want your tools (Sonarr, Radarr, etc) to use NZBHydra2, you'll need to change both the target hostname (to "nzbhydra2") and the target port (to 5076).
27 Ombi
Ombi is a useful addition to the autopirate stack. Features include:
- Lets users request Movies and TV Shows (whether it be the entire series, an entire season, or even a single episode)
- Easily manage your requests
- User management system (supports plex.tv, Emby and local accounts)
- A landing page that will give you the availability of your Plex/Emby server, and also add custom notification text to inform your users of downtime
- Allows your users to get custom notifications!
- Will show if the request is already on Plex, or even if it's already monitored
- Automatically updates the status of requests when they are available on Plex/Emby
27.1 Inclusion into AutoPirate
To include Ombi in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
ombi:
  image: linuxserver/ombi:latest
  env_file: /var/data/config/autopirate/ombi.env
  volumes:
    - /var/data/autopirate/ombi:/config
  networks:
    - internal

ombi_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/ombi.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:ombi.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://ombi:3579
    -redirect-url=https://ombi.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
27.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi (this page)
- Jackett
- Heimdall
- End (launch the stack)
27.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
28 Jackett
Jackett works as a proxy server: it translates queries from apps (Sonarr, Radarr, Mylar, etc) into tracker-site-specific http queries, parses the html response, then sends results back to the requesting software.
This allows for getting recent uploads (like RSS) and performing searches. Jackett is a single repository of maintained indexer scraping & translation logic - removing the burden from other apps.
28.1 Inclusion into AutoPirate
To include Jackett in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
jackett:
  image: linuxserver/jackett:latest
  env_file: /var/data/config/autopirate/jackett.env
  volumes:
    - /var/data/autopirate/jackett:/config
  networks:
    - internal

jackett_proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/config/autopirate/jackett.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:jackett.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://jackett:9117
    -redirect-url=https://jackett.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
28.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett (this page)
- Heimdall
- End (launch the stack)
28.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
29 Heimdall
Heimdall Application Dashboard is a dashboard for all your web applications. It doesn't need to be limited to applications, though - you can add links to anything you like.
Heimdall is an elegant solution to organise all your web applications. It's dedicated to this purpose, so you won't lose your links in a sea of bookmarks.
Heimdall provides a single URL to manage access to all of your autopirate tools, and includes “enhanced” (i.e., display stats within Heimdall without launching the app) access to NZBGet, SABnzbd, and friends.
29.1 Inclusion into AutoPirate
To include Heimdall in your AutoPirate stack, include the following in your autopirate.yml stack definition file:
heimdall:
  image: linuxserver/heimdall:latest
  env_file: /var/data/config/autopirate/heimdall.env
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/data/heimdall:/config
  networks:
    - internal

heimdall_proxy:
  image: funkypenguin/oauth2_proxy:latest
  env_file: /var/data/config/autopirate/heimdall.env
  networks:
    - internal
    - traefik_public
  deploy:
    labels:
      - traefik.frontend.rule=Host:heimdall.example.com
      - traefik.docker.network=traefik_public
      - traefik.port=4180
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://heimdall:80
    -redirect-url=https://heimdall.example.com
    -http-address=http://0.0.0.0:4180
    -email-domain=example.com
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
29.2 Assemble more tools..
Continue through the list of tools below, adding whichever tools you want to use, and finishing with the end section:
- SABnzbd
- NZBGet
- RTorrent
- Sonarr
- Radarr
- Mylar
- Lazy Librarian
- Headphones
- Lidarr
- NZBHydra
- NZBHydra2
- Ombi
- Jackett
- Heimdall (this page)
- End (launch the stack)
29.3 Chef’s Notes
- In many cases, tools will integrate with each other. I.e., Radarr needs to talk to SABnzbd and NZBHydra, Ombi needs to talk to Radarr, etc. Since each tool runs within the stack under its own name, just refer to each tool by name (i.e. “radarr”), and docker swarm will resolve the name to the appropriate container. You can identify the tool-specific port by looking at the docker-compose service definition.
- The inclusion of Heimdall was due to the efforts of @gkoerk in our Discord server. Thanks gkoerk!
Launch Autopirate stack
Launch the AutoPirate stack by running docker stack deploy autopirate -c <path-to-docker-compose.yml>
Confirm the container status by running “docker stack ps autopirate”, and wait for all containers to enter the “Running” state.
Log into each of your new tools at its respective HTTPS URL. You’ll be prompted to authenticate against your OAuth provider, and upon success, redirected to the tool’s UI.
29.4 Chef’s Notes
- This is a complex stack. Sing out in the comments if you found a flaw or need a hand :)
30 Duplicity
Intro
![](../images/duplicity.png)
Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. Because duplicity uses librsync, the incremental archives are space-efficient, and only record the parts of files that have changed since the last backup. Because duplicity uses GnuPG to encrypt and/or sign these archives, they will be safe from spying and/or modification by the server.
So what does this mean for our stack? It means we can leverage Duplicity to backup all our data-at-rest to a wide variety of cloud providers, including, but not limited to:
- acd_cli
- Amazon S3
- Backblaze B2
- DropBox
- ftp
- Google Docs
- Google Drive
- Microsoft Azure
- Microsoft Onedrive
- Rackspace Cloudfiles
- rsync
- ssh/scp
- SwiftStack
30.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Credentials for one of Duplicity's supported upload destinations
30.2 Preparation
Setup data locations
We’ll need a folder to store a docker-compose .yml file, and an associated .env file. If you’re following my filesystem layout, create /var/data/config/duplicity (for the config), and /var/data/duplicity (for the metadata) as follows:
mkdir /var/data/config/duplicity
mkdir /var/data/duplicity
cd /var/data/config/duplicity
(Optional) Create Google Cloud Storage bucket
I didn't already have an archival/backup provider, so I chose Google Cloud "coldline" storage for the low price-point - 0.7 cents per GB/month (plus you start with $300 credit when signing up for the free tier). You can use any destination supported by Duplicity's URL scheme though - just make sure you specify the necessary environment variables.
- Sign up, create an empty project, enable billing, and create a bucket. Give your bucket a unique name, e.g. "jack-and-jills-bucket" (it's unique across the entire Google Cloud)
- Under “Storage” section > “Settings” > “Interoperability” tab > click “Enable interoperable access” and then “Create a new key” button and note both Access Key and Secret.
Prepare environment
- Generate a random passphrase to use to encrypt your data. Save this somewhere safe - without it, you won't be able to restore!
- Seriously, save. it. somewhere. safe.
- Create duplicity.env, and populate it with the following variables:
SRC=/var/data/
DST=gs://jack-and-jills-bucket/yes-you-can-have-subdirectories
TMPDIR=/tmp
GS_ACCESS_KEY_ID=<YOUR GS ACCESS KEY>
GS_SECRET_ACCESS_KEY=<YOUR GS SECRET ACCESS KEY>
OPTIONS=--allow-source-mismatch --exclude /var/data/runtime --exclude /var/data/registry --exclude /var/data/duplicity --archive-dir=/archive
PASSPHRASE=<YOUR CHOSEN PASSPHRASE>
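If you need a hand generating a suitably random passphrase (per the first step above), something like this works, assuming openssl is available:

openssl rand -base64 32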
Run a test backup
Before we launch the automated daily backups, let’s run a test backup, as follows:
docker run --env-file duplicity.env -it --rm -v \
/var/data:/var/data:ro -v /var/data/duplicity/tmp:/tmp -v \
/var/data/duplicity/archive:/archive tecnativa/duplicity \
/etc/periodic/daily/jobrunner
You should see some activity, with a summary of bytes transferred at the end.
Run a test restore
Repeat after me: “If you don’t verify your backup, it’s not a backup”.
Depending on what tier of storage you chose from your provider (i.e., Google Coldline, or Amazon S3), you may be charged for downloading data. Run a variation of the following to confirm that a file you expect to be backed up, is backed up. (I used traefik.yml from the traefik recipe, since this is likely to exist for every reader.)
docker run --env-file duplicity.env -it --rm \
-v /var/data:/var/data:ro \
-v /var/data/duplicity/tmp:/tmp \
-v /var/data/duplicity/archive:/archive tecnativa/duplicity \
duplicity list-current-files \
\$DST | grep traefik.yml
Once you’ve identified a file to test-restore, use a variation of the following to restore it to /tmp (from the perspective of the container - it’s actually /var/data/duplicity/tmp)
docker run --env-file duplicity.env -it --rm \
-v /var/data:/var/data:ro \
-v /var/data/duplicity/tmp:/tmp \
-v /var/data/duplicity/archive:/archive \
tecnativa/duplicity duplicity restore \
--file-to-restore config/traefik/traefik.yml \
\$DST /tmp/traefik-restored.yml
Examine the contents of /var/data/duplicity/tmp/traefik-restored.yml to confirm it contains valid data.
Setup Docker Swarm
Now that we have confidence in our backup/restore process, let’s automate it by creating a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private "premix" git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3"
services:
backup:
image: tecnativa/duplicity
env_file: /var/data/config/duplicity/duplicity.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data:/var/data:ro
- /var/data/duplicity/tmp:/tmp
- /var/data/duplicity/archive:/archive
networks:
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.10.0/24
30.3 Serving
Launch Duplicity stack
Launch the Duplicity stack by running docker stack deploy duplicity -c <path-to-docker-compose.yml>
Nothing will happen. Very boring. But when the cron script fires (daily), duplicity will do its thing, and backup everything in /var/data to your cloud destination.
30.4 Chef’s Notes
- Automatic backup can still fail if nobody checks that it’s running successfully. I’ll be working on an upcoming recipe to monitor the elements of the stack, including the success/failure of duplicity jobs.
- The container provides the facility to specify an SMTP host and port, but not credentials, which makes it close to useless. As a result, I've left SMTP out of this recipe. To enable email notifications (if your SMTP server doesn't require auth), add SMTP_HOST, SMTP_PORT, EMAIL_FROM and EMAIL_TO variables to duplicity.env, as sketched below.
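If your SMTP server fits that bill, the additions to duplicity.env might look like this (host and addresses are placeholders):

SMTP_HOST=smtp.example.com
SMTP_PORT=25
EMAIL_FROM=duplicity@example.com
EMAIL_TO=admin@example.com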
31 Elkar Backup
Don’t be like Cameron. Backup your stuff.
<iframe width="560" height="315" src="https://www.youtube.com/embed/1UtFeMoqVHQ" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Ongoing development of this recipe is sponsored by The Common Observatory. Thanks guys!
![](../images/common_observatory.png)
ElkarBackup is a free open-source backup solution based on RSync/RSnapshot. It’s basically a web wrapper around rsync/rsnapshot, which means that your backups are just files on a filesystem, utilising hardlinks for tracking incremental changes. I find this result more reassuring than a blob of compressed, (encrypted?) data that more sophisticated backup solutions would produce for you.
![](../images/elkarbackup.png)
31.1 Details
31.2 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
31.3 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/elkarbackup:
mkdir -p /var/data/elkarbackup/{backups,uploads,sshkeys,database-dump}
mkdir -p /var/data/runtime/elkarbackup/db
mkdir -p /var/data/config/elkarbackup
Prepare environment
Create /var/data/config/elkarbackup/elkarbackup.env, and populate with the following variables
SYMFONY__DATABASE__PASSWORD=password
EB_CRON=enabled
TZ='Etc/UTC'
#SMTP - Populate these if you want email notifications
#SYMFONY__MAILER__HOST=
#SYMFONY__MAILER__USER=
#SYMFONY__MAILER__PASSWORD=
#SYMFONY__MAILER__FROM=
# For mysql
MYSQL_ROOT_PASSWORD=password
#oauth2_proxy
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
Create /var/data/config/elkarbackup/elkarbackup-db-backup.env, and populate with the following, to setup the nightly database dump.
Also, did you ever hear about the guy who said “I wish I had fewer backups”?
No, me neither :shrug:
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<same as SYMFONY__DATABASE__PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just agit pull
and a docker stack deploy
version: "3"
services:
db:
image: mariadb:10.4
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/runtime/elkarbackup/db:/var/lib/mysql
db-backup:
image: mariadb:10.4
env_file: /var/data/config/elkarbackup/elkarbackup-db-backup.env
volumes:
- /var/data/elkarbackup/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
app:
image: elkarbackup/elkarbackup
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- internal
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/:/var/data
- /var/data/elkarbackup/backups:/app/backups
- /var/data/elkarbackup/uploads:/app/uploads
- /var/data/elkarbackup/sshkeys:/app/.ssh
proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/elkarbackup/elkarbackup.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:elkarbackup.example.com
- traefik.port=4180
volumes:
- /var/data/config/traefik/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://app:80
-redirect-url=https://elkarbackup.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.36.0/24
31.4 Serving
Launch ElkarBackup stack
Launch the ElkarBackup stack by running docker stack deploy elkarbackup -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with user “root” and the default password “root”:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---elkarbackup-setup-1.png)
First thing you do, change your password, using the gear icon, and “Change Password” link:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---elkarbackup-setup-2.png)
Have a read of the Elkarbackup Docs - they introduce the concept of clients (hosts containing data to be backed up), jobs (what data gets backed up), and policies (when data is backed up, and how long it's kept).
At the very least, you want to setup a client called “localhost” with an empty path (i.e., the job path will be accessed locally, without SSH), and then add a job to this client to backup /var/data, excluding /var/data/runtime and /var/data/elkarbackup/backups (unless you like “backup-ception”).
Copying your backup data offsite
From the WebUI, you can download a script intended to be executed on a remote host, to backup your backup data to an offsite location. This is a Good Idea(tm), but needs some massaging for a Docker swarm deployment.
Here’s a variation to the standard script, which I’ve employed:
#!/bin/bash
REPOSITORY=/var/data/elkarbackup/backups
SERVER=<target host member of docker swarm>
SERVER_USER=elkarbackup
UPLOADS=/var/data/elkarbackup/uploads
TARGET=/srv/backup/elkarbackup

echo "Starting backup..."
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`

ssh "$SERVER_USER@$SERVER" "cd '$REPOSITORY'; find . -maxdepth 2 -mindepth 2" | sed s/^..// |
while read jobId
do
  echo Backing up job $jobId
  mkdir -p $TARGET/$jobId 2>/dev/null
  rsync -aH --delete "$SERVER_USER@$SERVER:$REPOSITORY/$jobId/" $TARGET/$jobId
done

echo Backing up uploads
rsync -aH --delete "$SERVER_USER@$SERVER:$UPLOADS/" $TARGET/uploads

USED=`df -h . | awk 'NR==2 { print $3 }'`
USE=`df -h . | awk 'NR==2 { print $5 }'`
AVAILABLE=`df -h . | awk 'NR==2 { print $4 }'`

echo "Backup finished successfully!"
echo "Date: " `date "+%Y-%m-%d (%H:%M)"`
echo ""
echo "**** INFO ****"
echo "Used disk space: $USED ($USE)"
echo "Available disk space: $AVAILABLE"
echo ""
You might also want to copy /var/data/elkarbackup/database-dump/ offsite, so that your nightly database dumps are captured too.
Restoring data
Repeat after me : “It’s not a backup unless you’ve tested a restore”
I had some difficulty making restores work well in the webUI. My attempts to “Restore to client” failed with an SSH error about “localhost” not found. I was able to download the backup from my web browser, so I considered it a successful restore, since I can retrieve the backed-up data either from the webUI or from the filesystem directly. To restore files from a job, click on the “Restore” button in the WebUI, while on the Jobs tab:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---elkarbackup-setup-3.png)
This takes you to a list of backup names and file paths. You can choose to download the entire contents of the backup from your browser as a .tar.gz, or to restore the backup to the client. If you click on the name of the backup, you can also drill down into the file structure, choosing to restore a single file or directory.
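Since the backups are just rsnapshot-style files on disk, you can also restore directly from the filesystem, bypassing the WebUI entirely. The job directory and snapshot names below are illustrative; check /var/data/elkarbackup/backups for your actual layout:

```bash
# Each job gets its own tree of dated snapshots
ls /var/data/elkarbackup/backups/

# Copy a file straight out of a snapshot, preserving attributes
cp -a /var/data/elkarbackup/backups/MyClient/MyJob/daily.0/var/data/somefile /tmp/restored-somefile
```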
31.5 Chef’s Notes
- If you wanted to expose the ElkarBackup UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the app service. You’d also need to add the traefik_public network to the app service.
- The original inclusion of ElkarBackup was due to the efforts of @gpulido in our Discord server. Thanks Gabriel!
32 Emby
Emby (think “M.B.” or “Media Browser”) is best described as “like Plex but different” - It’s a bit geekier and less polished than Plex, but it allows for more flexibility and customization.
![](/site_images/geek-cookbook/..----images----emby.png)
I’ve started experimenting with Emby as an alternative to Plex, because of the advanced parental controls it offers. Based on my experimentation thus far, I have a “kid-safe” profile which automatically logs in, and only displays kid-safe content, based on ratings.
32.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
32.2 Preparation
Setup data locations
We’ll need a location to store Emby’s library data, config files, logs and temporary transcoding space, so create /var/data/emby, and make sure it’s owned by the user and group who also own your media data.
mkdir /var/data/emby
Prepare environment
Create emby.env, and populate with PUID/GUID for the user who owns the /var/data/emby directory (above) and your actual media content (in this example, the media content is at /srv/data)
PUID=
GUID=
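If you're not sure which values to use, id will tell you. A quick sketch, assuming a hypothetical user "media" owns /var/data/emby and your content:

```bash
id -u media   # use this value for PUID
id -g media   # use this value for GUID
```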
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3.0"
services:
emby:
image: emby/emby-server
env_file: /var/data/config/emby/emby.env
volumes:
- /var/data/emby:/config
- /srv/data/:/data
deploy:
labels:
- traefik.frontend.rule=Host:emby.example.com
- traefik.docker.network=traefik_public
- traefik.port=8096
networks:
- traefik_public
- internal
ports:
- 8096:8096
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.17.0/24
32.3 Serving
Launch Emby stack
Launch the stack by running docker stack deploy emby -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, and complete the wizard-based setup to finish deploying your Emby.
32.4 Chef’s Notes
- I didn’t use an oauth2_proxy for this stack, because it would interfere with mobile client support.
- Got an NVIDIA GPU? See this blog post re how to use your GPU to transcode your media!
- We don’t bother exposing the HTTPS port for Emby, since Traefik is doing the SSL termination for us already.
33 Home Assistant
Home Assistant is a home automation platform written in Python, with extensive support for 3rd-party home-automation platforms including Xiaomi, Philips Hue, and a bazillion others.
![](/site_images/geek-cookbook/..----images----homeassistant.png)
This recipe combines the extensibility of Home Assistant with the flexibility of InfluxDB (for time series data store) and Grafana (for beautiful visualisation of that data).
33.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
33.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/homeassistant:
mkdir /var/data/homeassistant
cd /var/data/homeassistant
mkdir -p {homeassistant,grafana,influxdb-backup}
Now create a directory for the influxdb realtime data:
mkdir /var/data/runtime/homeassistant/influxdb
Prepare environment
Create /var/data/config/homeassistant/grafana.env, and populate with the following - this is to enable grafana to work with oauth2_proxy without requiring an additional level of authentication:
GF_AUTH_BASIC_ENABLED=false
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3"
services:
influxdb:
image: influxdb
networks:
- internal
volumes:
- /var/data/runtime/homeassistant/influxdb:/var/lib/influxdb
- /etc/localtime:/etc/localtime:ro
homeassistant:
image: homeassistant/home-assistant
dns_search: hq.example.com
volumes:
- /var/data/homeassistant/homeassistant:/config
- /etc/localtime:/etc/localtime:ro
deploy:
labels:
- traefik.frontend.rule=Host:homeassistant.example.com
- traefik.docker.network=traefik_public
- traefik.port=8123
networks:
- traefik_public
- internal
ports:
- 8123:8123
grafana-app:
image: grafana/grafana
env_file : /var/data/config/homeassistant/grafana.env
volumes:
- /var/data/homeassistant/grafana:/var/lib/grafana
- /etc/localtime:/etc/localtime:ro
networks:
- internal
grafana-proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/homeassistant/grafana.env
dns_search: hq.example.com
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:grafana.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/homeassistant/authenticated-emails.txt:/authenticated-ema\
ils.txt
command: |
-cookie-secure=false
-upstream=http://grafana-app:3000
-redirect-url=https://grafana.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.13.0/24
33.3 Serving
Launch Home Assistant stack
Launch the Home Assistant stack by running docker stack deploy homeassistant -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, using the password you created in configuration.yml (“frontend - api_key”). Then setup a bunch of sensors, and log into https://grafana.YOUR-FQDN and create some beautiful graphs :)
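Before you build graphs, it's worth confirming that InfluxDB is actually receiving data. A minimal sketch, assuming the stack was deployed as "homeassistant" and you've pointed Home Assistant's influxdb integration at the influxdb service (the database name depends on your configuration.yml):

```bash
# Open an interactive influx shell inside the running InfluxDB container
docker exec -it $(docker ps -qf name=homeassistant_influxdb) influx

# Then, at the influx prompt:
#   SHOW DATABASES
#   USE home_assistant     -- or whatever database your config creates
#   SHOW MEASUREMENTS
```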
33.4 Chef’s Notes
- I tried to protect Home Assistant using oauth2_proxy, but oauth2_proxy is incompatible with the websockets implementation used by Home Assistant. Until this can be fixed, I suggest that geeks set frontend: api_key to a long and complex string, and rely on this to prevent malevolent internet miscreants from turning their lights on at 2am!
34 iBeacons with Home assistant
This is not a complete recipe - it's an optional addition to the HomeAssistant recipe, since it only applies to a subset of users. One of the most useful features of Home Assistant is location awareness. I don't care if someone opens my office door when I'm home, but you bet I care (and want to be notified!) if it happens while I'm away!
34.1 Ingredients
1. HomeAssistant per recipe
2. iBeacon(s) - This recipe is for https://s.click.aliexpress.com/e/bzyLCnAp
34.2 Preparation
Write UUID to iBeacon
The iBeacons come with no UUID. We use the LightBlue Explorer app to pair with them (code is “123456”), and assign our own UUID.
Generate your own UUID, or get a random one at https://www.uuidgenerator.net/
Plug in your iBeacon, launch LightBlue Explorer, and find your iBeacon. The first time you attempt to interrogate it, you’ll be prompted to pair. Although it’s not recorded anywhere in the documentation (grr!), the pairing code is 123456
Having paired, you’ll be able to see the vital statistics of your iBeacon.
34.3 Chef’s Notes
35 Huginn
Huginn is a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn’s Agents create and consume events, propagating them along a directed graph. Think of it as a hackable version of IFTTT or Zapier on your own server.
<iframe src="https://player.vimeo.com/video/61976251" width="640" height="433" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
35.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
35.2 Preparation
Setup data locations
Create the locations for the bind-mounts of the database and its nightly dumps, so that they're persistent:
mkdir -p /var/data/runtime/huginn/database
mkdir -p /var/data/huginn/database-dump
Create email address
Strictly speaking, you don’t have to integrate Huginn with email. However, since we created our own mailserver stack earlier, it’s worth using it to enable emails within Huginn.
cd /var/data/docker-mailserver/
./setup.sh email add huginn@huginn.example.com my-password-here
# Setup MX and DKIM if they don't already exist:
./setup.sh config dkim
cat config/opendkim/keys/huginn.example.com/mail.txt
Prepare environment
Create /var/data/huginn/huginn.env, and populate with the following variables. Set the “INVITATION_CODE” variable if you want to require users to enter a code to sign up (protects the UI from abuse) (The full list of Huginn environment variables is available here)
# For huginn/huginn - essential
SMTP_DOMAIN=your-domain-here.com
SMTP_USER_NAME=you@gmail.com
SMTP_PASSWORD=somepassword
SMTP_SERVER=your-mailserver-here.com
SMTP_PORT=587
SMTP_AUTHENTICATION=plain
SMTP_ENABLE_STARTTLS_AUTO=true
INVITATION_CODE=<set an invitation code here>
POSTGRES_PORT_5432_TCP_ADDR=db
POSTGRES_PORT_5432_TCP_PORT=5432
DATABASE_USERNAME=huginn
DATABASE_PASSWORD=<database password>
DATABASE_ADAPTER=postgresql
# Optional extras for huginn/huginn, customize or append based on .env.example linked above
TWITTER_OAUTH_KEY=
TWITTER_OAUTH_SECRET=
# For postgres/postgres
POSTGRES_USER=huginn
POSTGRES_PASSWORD=<database password>
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'
services:
huginn:
image: huginn/huginn
env_file: /var/data/config/huginn/huginn.env
volumes:
- /etc/localtime:/etc/localtime:ro
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:huginn.example.com
- traefik.docker.network=traefik_public
- traefik.port=3000
db:
env_file: /var/data/config/huginn/huginn.env
image: postgres:latest
volumes:
- /var/data/runtime/huginn/database:/var/lib/postgresql/data
- /etc/localtime:/etc/localtime:ro
networks:
- internal
db-backup:
image: postgres:latest
env_file: /var/data/config/huginn/huginn.env
volumes:
- /var/data/huginn/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.6.0/24
35.3 Serving
Launch Huginn stack
Launch the Huginn stack by running docker stack deploy huginn -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN. You’ll need to use the “Sign Up” button, and (optionally) enter your invitation code in order to create your account.
35.4 Chef’s Notes
1. I initially considered putting an oauth proxy in front of Huginn, but since the invitation code logic prevents untrusted access, and since using a proxy would break oauth for services like Twitter integration, I left it out.
36 Kanboard
Kanboard is a Kanban tool, developed by Frédéric Guillot. (Who also happens to be the developer of my favorite RSS reader, Miniflux)
Kanboard is one of my sponsored projects - a project I financially support on a regular basis because of its utility to me. I use it both in my DayJob(tm), and to manage my overflowing, overly-optimistic personal commitments! Features include:
- Visualize your work
- Limit your work in progress to be more efficient
- Customize your boards according to your business activities
- Multiple projects with the ability to drag and drop tasks
- Reports and analytics
- Fast and simple to use
- Access from anywhere with a modern browser
- Plugins and integrations with external services
- Free, open source and self-hosted
- Super simple installation
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---kanboard.png)
36.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry pointing your Kanboard url (kanboard.example.com) to your keepalived IP
36.2 Preparation
Setup data locations
Create the location for the bind-mount of the application data, so that it’s persistent:
mkdir -p /var/data/kanboard
Setup Environment
If you intend to use an OAuth proxy to further secure public access to your instance, create a kanboard.env file to hold your environment variables, and populate with your OAuth provider's details (the cookie secret you can just make up):
# If you decide to protect kanboard with an oauth_proxy, complete these
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
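The cookie secret really can just be made up; oauth2_proxy only needs a random string (16, 24 or 32 bytes works well). For example:

```bash
# Generate a random 24-byte, base64-encoded cookie secret
openssl rand -base64 24
```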
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'
services:
kanboard:
image: kanboard/kanboard
volumes:
- /var/data/kanboard/data:/var/www/app/data
- /var/data/kanboard/plugins:/var/www/app/plugins
networks:
- internal
deploy:
labels:
- traefik.frontend.rule=Host:kanboard.example.com
- traefik.docker.network=traefik_public
- traefik.port=80
proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/kanboard/kanboard.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:kanboard.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/kanboard/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://kanboard:80
-redirect-url=https://kanboard.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.8.0/24
36.3 Serving
Launch Kanboard stack
Launch the Kanboard stack by running docker stack deploy kanboard -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN. Default credentials are admin/admin; after logging in, you can change your password (under “profile”) and add more users.
36.4 Chef’s Notes
- The default theme can be significantly improved by applying the ThemePlus plugin.
- Kanboard becomes more useful when you integrate in/outbound email with MailGun, SendGrid, or Postmark.
37 Miniflux
Miniflux is a lightweight RSS reader, developed by Frédéric Guillot. (Who also happens to be the developer of my favorite Open Source Kanban app, Kanboard)
![](/site_images/geek-cookbook/..----images----miniflux.png)
I’ve reviewed Miniflux in detail on my blog, but features (among many) that I appreciate:
- Compatible with the Fever API, so you can read your feeds through existing mobile and desktop clients (This is the killer feature for me. I hardly ever read RSS on my desktop; I typically read on my iPhone or iPad, using Fiery Feeds or my new squeeze, Unread)
- Send your bookmarks to Pinboard, Wallabag, Shaarli or Instapaper (I use this to automatically pin my bookmarks for collection on my blog)
- Feeds can be configured to download a “full” version of the content (rather than an excerpt)
- Use the Bookmarklet to subscribe to a website directly from any browser
37.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry pointing your Miniflux url (i.e. miniflux.example.com) to your keepalived IP
37.2 Preparation
Setup data locations
Create the location for the bind-mount of the application data, so that it’s persistent:
mkdir -p /var/data/miniflux/database-dump
mkdir -p /var/data/runtime/miniflux/database
Setup environment
Create /var/data/config/miniflux/miniflux.env, something like this:
DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
POSTGRES_USER=miniflux
POSTGRES_PASSWORD=secret
# This is necessary for miniflux to update the db schema, even on an empty DB
RUN_MIGRATIONS=1
# Uncomment these on first run; comment them out again once you've added your own user account
CREATE_ADMIN=1
ADMIN_USERNAME=admin
ADMIN_PASSWORD=test1234
# For backups
PGUSER=miniflux
PGPASSWORD=secret
PGHOST=db
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
The entire application is configured using environment variables, including the initial username. After your first successful deployment, comment out CREATE_ADMIN and the two lines that follow it.
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'
services:
miniflux:
image: miniflux/miniflux:2.0.7
env_file: /var/data/config/miniflux/miniflux.env
volumes:
- /etc/localtime:/etc/localtime:ro
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:miniflux.example.com
- traefik.port=8080
- traefik.docker.network=traefik_public
db:
env_file: /var/data/config/miniflux/miniflux.env
image: postgres:10.1
volumes:
- /var/data/runtime/miniflux/database:/var/lib/postgresql/data
- /etc/localtime:/etc/localtime:ro
networks:
- internal
db-backup:
image: postgres:10.1
env_file: /var/data/config/miniflux/miniflux.env
volumes:
- /var/data/miniflux/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.22.0/24
37.3 Serving
Launch Miniflux stack
Launch the Miniflux stack by running docker stack deploy miniflux -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, using the credentials you setup in the environment file. After this, change your user/password as you see fit, and comment out the CREATE_ADMIN line in the env file (if you don't, an additional admin will be created the next time you deploy).
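If you prefer a one-liner to an editor, something like this (using the env file path from this recipe) does the job; redeploy the stack afterwards:

```bash
# Comment out the one-shot admin-creation variables after first deploy
sed -i -e 's/^CREATE_ADMIN/#CREATE_ADMIN/' \
       -e 's/^ADMIN_USERNAME/#ADMIN_USERNAME/' \
       -e 's/^ADMIN_PASSWORD/#ADMIN_PASSWORD/' \
       /var/data/config/miniflux/miniflux.env

docker stack deploy miniflux -c <path-to-docker-compose.yml>
```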
37.4 Chef’s Notes
1. Find the bookmarklet under the Settings -> Integration page.
38 Munin
Munin is a networked resource monitoring tool that can help analyze resource trends and “what just happened to kill our performance?” problems. It is designed to be very plug and play. A default installation provides a lot of graphs with almost no work.
![](/site_images/geek-cookbook/..----images----munin.png)
Using Munin you can easily monitor the performance of your computers, networks, SANs, applications, weather measurements and whatever comes to mind. It makes it easy to determine “what’s different today” when a performance problem crops up. It makes it easy to see how you’re doing capacity-wise on any resources.
Munin uses the excellent RRDTool (written by Tobi Oetiker) and the framework is written in Perl, while plugins may be written in any language. Munin has a master/node architecture in which the master connects to all the nodes at regular intervals and asks them for data. It then stores the data in RRD files, and (if needed) updates the graphs. One of the main goals has been ease of creating new plugins (graphs).
38.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
38.2 Preparation
Prepare target nodes
Depending on what you want to monitor, you'll want to install munin-node. On Ubuntu/Debian, you'll use apt-get install munin-node, and on RHEL/CentOS, run yum install munin-node. Remember to edit /etc/munin/munin-node.conf, and set your node to allow the server to poll it, by adding cidr_allow x.x.x.x/x.
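For example, on a Debian/Ubuntu node (the CIDR below is illustrative; substitute the subnet your munin server polls from):

```bash
apt-get install munin-node

# Allow the munin server to poll this node
echo "cidr_allow 192.168.1.0/24" >> /etc/munin/munin-node.conf
systemctl restart munin-node
```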
On CentOS Atomic, of course, you can’t install munin-node directly, but you can run it as a containerized instance. In this case, you can’t use swarm since you need the container running in privileged mode, so launch a munin-node container on each atomic host using:
docker run -d --name munin-node --restart=always \
--privileged --net=host \
-v /:/rootfs:ro \
-v /sys:/sys:ro \
-e ALLOW="cidr_allow 0.0.0.0/0" \
-p 4949:4949 \
funkypenguin/munin-node
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/munin:
mkdir /var/data/munin
cd /var/data/munin
mkdir -p {log,lib,run,cache}
Prepare environment
Create /var/data/config/munin/munin.env, and populate with the following variables. Use the OAUTH2 variables if you plan to use an oauth2_proxy to protect munin, and set at a minimum the MUNIN_USER, MUNIN_PASSWORD, and NODES values:
# Use these if you plan to protect the webUI with an oauth_proxy
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
MUNIN_USER=odin
MUNIN_PASSWORD=lokiisadopted
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=smtp-username
SMTP_PASSWORD=smtp-password
SMTP_USE_TLS=false
SMTP_ALWAYS_SEND=false
SMTP_MESSAGE='[${var:group};${var:host}] -> ${var:graph_title} -> warnings: ${loop<,>:wfields ${var:label}=${var:value}} / criticals: ${loop<,>:cfields ${var:label}=${var:value}}'
ALERT_RECIPIENT=monitoring@example.com
ALERT_SENDER=alerts@example.com
NODES="node1:192.168.1.1 node2:192.168.1.2 node3:192.168.1.3"
SNMP_NODES="router1:10.0.0.254:9999"
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'
services:
munin:
image: funkypenguin/munin-server
env_file: /var/data/config/munin/munin.env
networks:
- internal
volumes:
- /var/data/munin/log:/var/log/munin
- /var/data/munin/lib:/var/lib/munin
- /var/data/munin/run:/var/run/munin
- /var/data/munin/cache:/var/cache/munin
proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/munin/munin.env
networks:
- traefik_public
- internal
deploy:
labels:
- traefik.frontend.rule=Host:munin.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://munin:8080
-redirect-url=https://munin.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.24.0/24
38.3 Serving
Launch Munin stack
Launch the Munin stack by running docker stack deploy munin -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with the user and password you specified in munin.env above.
38.4 Chef’s Notes
1. If you wanted to expose the Munin UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the munin container. You’d also need to add the traefik_public network to the munin container.
39 NextCloud
Ongoing development of this recipe is sponsored by The Common Observatory. Thanks guys!
![](/site_images/geek-cookbook/..----images----common_observatory.png)
NextCloud (a fork of OwnCloud, led by original developer Frank Karlitschek) is a suite of client-server software for creating and using file hosting services. It is functionally similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server.
- https://en.wikipedia.org/wiki/Nextcloud
![](/site_images/geek-cookbook/..----images----nextcloud.png)
This recipe is based on the official NextCloud docker image, but includes separate containers for the database (MariaDB), Redis (for transactional locking), Apache Solr (for full-text searching), automated database backup (you do backup the stuff you care about, right?), and a separate cron container for running NextCloud's 15-min crons.
39.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry pointing your NextCloud url (nextcloud.example.com) to your keepalived IP
39.2 Preparation
Setup data locations
We’ll need several directories for static data to bind-mount into our container, so create them in /var/data/nextcloud (so that they can be backed up)
mkdir /var/data/nextcloud
cd /var/data/nextcloud
mkdir -p {html,apps,config,data,database-dump}
Now make more directories for runtime data (so that they can be not backed-up):
mkdir /var/data/runtime/nextcloud
cd /var/data/runtime/nextcloud
mkdir -p {db,redis}
Prepare environment
Create nextcloud.env, and populate with the following variables
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=FVuojphozxMVyaYCUWomiP9b
MYSQL_HOST=db
# For mysql
MYSQL_ROOT_PASSWORD=<set to something secure>
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_PASSWORD=<set to something secure>
Now create a separate nextcloud-db-backup.env file, to capture the environment variables necessary to perform the backup. (If the same variables are shared with the mariadb container, they cause issues with database access)
# For database backup (keep 7 days daily backups)
MYSQL_PWD=<set to something secure, same as MYSQL_ROOT_PASSWORD above>
MYSQL_USER=root
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3.0"
services:
nextcloud:
image: nextcloud
env_file: /var/data/config/nextcloud/nextcloud.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:nextcloud.example.com
- traefik.docker.network=traefik_public
- traefik.port=80
volumes:
- /var/data/nextcloud/html:/var/www/html
- /var/data/nextcloud/apps:/var/www/html/custom_apps
- /var/data/nextcloud/config:/var/www/html/config
- /var/data/nextcloud/data:/var/www/html/data
db:
image: mariadb:10
env_file: /var/data/config/nextcloud/nextcloud.env
networks:
- internal
volumes:
- /var/data/runtime/nextcloud/db:/var/lib/mysql
db-backup:
image: mariadb:10
env_file: /var/data/config/nextcloud/nextcloud.env
volumes:
- /var/data/nextcloud/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
redis:
image: redis:alpine
networks:
- internal
volumes:
- /var/data/runtime/nextcloud/redis:/data
cron:
image: nextcloud
volumes:
- /var/data/nextcloud/:/var/www/html
user: www-data
networks:
- internal
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
while [ ! -f /var/www/html/config/config.php ]; do
sleep 1
done
while true; do
php -f /var/www/html/cron.php
sleep 15m
done
EOF'
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.12.0/24
39.3 Serving
Launch NextCloud stack
Launch the NextCloud stack by running docker stack deploy nextcloud -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with user “admin” and the password you specified in nextcloud.env.
Enable redis
To make NextCloud a little snappier, edit /var/data/nextcloud/config/config.php (now that it's been created on the first container launch), and add the following. (Note that, per the NextCloud admin docs, Redis is only actually used for locking if config.php also contains a 'memcache.locking' entry pointing at Redis.)
'redis' => array(
'host' => 'redis',
'port' => 6379,
),
Use service discovery
Want to use Calendar/Contacts on your iOS device? Want to avoid dictating long, rambling URL strings to your users, like https://nextcloud.batcave.org/remote.php/dav/principals/users/USERNAME/ ?
Huzzah! NextCloud supports service discovery for CalDAV/CardDAV, allowing you to simply tell your device the primary URL of your server (nextcloud.batcave.org, for example), and have the device figure out the correct WebDAV path to use.
We (and anyone else using the NextCloud Docker image) are using an SSL-terminating reverse proxy (Traefik) in front of our NextCloud container. In fact, it’s not possible to setup SSL within the NextCloud container.
When using a reverse proxy, your device requests a URL from your proxy (https://nextcloud.batcave.org/.well-known/caldav), and the reverse proxy then passes that request unencrypted to the internal URL of the NextCloud instance (i.e., http://172.16.12.123/.well-known/caldav)
The Apache webserver on the NextCloud container (knowing it was spoken to via HTTP), responds with a 301 redirect to http://nextcloud.batcave.org/remote.php/dav/. See the problem? You requested an HTTPS (encrypted) url, and in return, you received a redirect to an HTTP (unencrypted) URL. Any sensible client (iOS included) will refuse such shenanigans.
To correct this, we need to tell NextCloud to always redirect the .well-known URLs to an HTTPS location. This can only be done after deploying NextCloud, since it’s only on first launch of the container that the .htaccess file is created in the first place.
To make NextCloud service discovery work with Traefik reverse proxy, edit /var/data/nextcloud/html/.htaccess, and change this:
RewriteRule ^\.well-known/carddav /remote.php/dav/ [R=301,L]
RewriteRule ^\.well-known/caldav /remote.php/dav/ [R=301,L]
To this:
RewriteRule ^\.well-known/carddav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
RewriteRule ^\.well-known/caldav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
Then restart your container with docker service update nextcloud_nextcloud --force, to restart apache.
You can test for success by running curl -i https://nextcloud.batcave.org/.well-known/carddav. You should get a 301 redirect to your equivalent of https://nextcloud.batcave.org/remote.php/dav/, as below:
[davidy:~] % curl -i https://nextcloud.batcave.org/.well-known/carddav
HTTP/2 301
content-type: text/html; charset=iso-8859-1
date: Wed, 12 Dec 2018 08:30:11 GMT
location: https://nextcloud.batcave.org/remote.php/dav/
Note that this .htaccess can be overwritten by NextCloud, and you may have to reapply the change in future. I’ve created an issue requesting a permanent fix.
39.4 Chef’s Notes
- Since many of my other recipes use PostgreSQL, I’d have preferred to use Postgres over MariaDB, but MariaDB seems to be the preferred database type.
- I’m not the first user to stumble across the service discovery bug with reverse proxies.
40 OwnTracks
OwnTracks allows you to keep track of your own location. You can build your private location diary or share it with your family and friends. OwnTracks is open-source and uses open protocols for communication so you can be sure your data stays secure and private.
![](/site_images/geek-cookbook/..----images----owntracks.png)
Using a smartphone app, OwnTracks allows you to collect and analyse your own location data without sharing this data with a cloud provider (i.e. Apple, Google). Potential use cases are:
- Sharing family locations without relying on Apple Find-My-friends
- Performing automated actions in HomeAssistant when you arrive/leave home
40.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
40.2 Preparation
Setup data locations
We’ll need a directory to store OwnTracks’ data, so create /var/data/owntracks:
mkdir /var/data/owntracks
Prepare environment
Create owntracks.env, and populate with the following variables
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
OTR_USER=recorder
OTR_PASS=yourpassword
OTR_HOST=owntracks.example.com
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3.0"
services:
owntracks-app:
image: funkypenguin/owntracks
env_file : /var/data/config/owntracks/owntracks.env
volumes:
- /var/data/owntracks:/owntracks
networks:
- internal
ports:
- 1883:1883
- 8883:8883
- 8083:8083
owntracks-proxy:
image: a5huynh/oauth2_proxy
env_file : /var/data/config/owntracks/owntracks.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:owntracks.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/owntracks/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://owntracks-app:8083
-redirect-url=https://owntracks.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.15.0/24
40.3 Serving
Launch OwnTracks stack
Launch the OwnTracks stack by running docker stack deploy owntracks -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with the OTR_USER and OTR_PASS credentials you specified in owntracks.env.
40.4 Chef’s Notes
- If you wanted to expose the OwnTracks Web UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the owntracks-app container. You'd also need to add the traefik_public network to the owntracks-app container.
- I’m using my own image rather than owntracks/recorderd, because of a potentially swarm-breaking bug I found in the official container. If this gets resolved (or if I was mistaken) I’ll update the recipe accordingly.
- By default, you'll get a fully accessible, unprotected MQTT broker. This may not be suitable for public exposure, so you'll want to look into securing mosquitto with TLS and ACLs (a quick connectivity check is sketched below).
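A minimal connectivity check for that broker, using the mosquitto-clients tools from another host (credentials per owntracks.env above):

```bash
# Subscribe to all OwnTracks topics; device location updates arrive as JSON
mosquitto_sub -h owntracks.example.com -p 1883 \
  -u recorder -P yourpassword -t 'owntracks/#' -v
```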
41 phpIPAM
phpIPAM is an open-source web IP address management application (IPAM). Its goal is to provide light, modern and useful IP address management. It is php-based application with MySQL database backend, using jQuery libraries, ajax and HTML5/CSS3 features.
![](/site_images/geek-cookbook/..----images----phpipam.png)
phpIPAM fulfils a non-sexy, but important role - It helps you manage your IP address allocation.
41.1 Why should you care about this?
You probably have a home network, with 20-30 IP addresses for your family devices, your smart TV, etc. If you want to (a) monitor them, and (b) audit who does what, you care about what IPs they're assigned by your DHCP server.
You could simply keep track of all devices with leases in your DHCP server, but what happens if your (hypothetical?) Ubiquiti EdgeRouter X crashes and burns due to lack of disk space, and you lose track of all your leases? Well, you have to start from scratch, is what!
And that HomeAssistant config, which you so carefully compiled, refers to each device by IP/DNS name, so you’d better make sure you recreate it consistently!
Enter phpIPAM. A tool designed to help home geeks as well as large organisations keep track of their IP (and VLAN, VRF, and AS number) allocations.
41.2 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname (i.e. “phpipam.your-domain.com”) you intend to use for phpIPAM, pointed to your keepalived IP
41.3 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/phpipam:
mkdir -p /var/data/phpipam/database-dump
mkdir -p /var/data/runtime/phpipam/db
Prepare environment
Create phpipam.env, and populate with the following variables
# Setup for github, phpipam application
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
# For MariaDB/MySQL database
MYSQL_ROOT_PASSWORD=imtoosecretformyshorts
MYSQL_DATABASE=phpipam
MYSQL_USER=phpipam
MYSQL_PASSWORD=secret
# phpIPAM-specific variables
MYSQL_ENV_MYSQL_USER=phpipam
MYSQL_ENV_MYSQL_PASSWORD=secret
MYSQL_ENV_MYSQL_DB=phpipam
MYSQL_ENV_MYSQL_HOST=db
# For backup
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Additionally, create phpipam-backup.env, and populate with the following variables:
# For MariaDB/MySQL database
MYSQL_ROOT_PASSWORD=imtoosecretformyshorts
MYSQL_DATABASE=phpipam
MYSQL_USER=phpipam
MYSQL_PASSWORD=secret
# For backup
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Create nginx.conf
I usually protect my stacks using an oauth proxy container in front of the app. This protects me from either accidentally exposing a platform to the world, or having an insecure platform accessed and abused.
In the case of phpIPAM, the oauth_proxy creates an additional complexity, since it passes the “Authorization” HTTP header to the phpIPAM container. phpIPAM then examines the header, determines that the provided username (my email address associated with my oauth provider) doesn't match a local user account, and denies me access without the opportunity to retry.
The (dirty) solution I’ve come up with is to insert an Nginx instance in the path between the oauth_proxy and the phpIPAM container itself. Nginx can remove the authorization header, so that phpIPAM can prompt me to login with a web-based form.
Create /var/data/phpipam/nginx.conf as follows:
upstream app-upstream {
    server app:80;
}

server {
    listen 80;
    server_name ~.;

    # Just redirect everything to the upstream
    # Yes, it's embarrassing. We are just a mechanism to strip an AUTH header :(
    location ^~ / {
        proxy_pass http://app-upstream;
        proxy_set_header Authorization "";
    }
}
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'
services:
db:
image: mariadb:10
env_file: /var/data/config/phpipam/phpipam.env
networks:
- internal
volumes:
- /var/data/runtime/phpipam/db:/var/lib/mysql
proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/phpipam/phpipam.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:phpipam.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/phpipam/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://nginx
-redirect-url=https://phpipam.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
# Wait, what? Why do we have an oauth_proxy _and_ an nginx frontend for a simple webapp?
# Well, it's a long story. Basically, the phpipam container sees the "auth" headers passed by the
# oauth_proxy, and decides to use these exclusively to authenticate users. So no web-based login form, just "access denied"
# To work around this, we add nginx reverse proxy to the mix. A PITA, but an easy way to solve without altering the PHPIPAM code
nginx:
image: nginx:latest
networks:
- internal
volumes:
- /var/data/phpipam/nginx.conf:/etc/nginx/conf.d/default.conf:ro
app:
image: pierrecdn/phpipam
env_file: /var/data/config/phpipam/phpipam.env
networks:
- internal
db-backup:
image: mariadb:10
env_file: /var/data/config/phpipam/phpipam-backup.env
volumes:
- /var/data/phpipam/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
(ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.47.0/24
41.4 Serving
Launch phpIPAM stack
Launch the phpIPAM stack by running docker stack deploy phpipam -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, and follow the on-screen prompts to set your first user/password.
41.5 Chef’s Notes
1. If you wanted to expose the phpIPAM UI directly, you could remove the oauth2_proxy and the nginx services from the design, and move the traefik_public-related labels directly to the phpipam container. You’d also need to add the traefik_public network to the phpipam container.
42 Plex
Plex is a client-server media player system and software suite comprising two main components (a media server and client applications)
![](/site_images/geek-cookbook/..----images----plex.jpg)
42.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- A DNS entry for the hostname you intend to use, pointed to your keepalived IP
42.2 Preparation
Setup data locations
We’ll need a directory to bind-mount into our container for Plex to store its library, so create /var/data/plex:
mkdir /var/data/plex
Prepare environment
Create plex.env, and populate with the following variables. Set PUID and PGID to the UID and GID of the user who owns your media files on the local filesystem
EDGE=1
VERSION=latest
PUID=42
PGID=42
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3.0"
services:
plex:
image: linuxserver/plex
env_file: plex.env
volumes:
- /var/data/plex:/config
- /var/data/media:/media
deploy:
labels:
- traefik.frontend.rule=Host:plex.example.com
- traefik.docker.network=traefik_public
- traefik.port=32400
networks:
- traefik_public
- internal
ports:
- 32469:32469
- 32400:32400
- 32401:32401
- 3005:3005
- 8324:8324
- 1900:1900/udp
- 32410:32410/udp
- 32412:32412/udp
- 32413:32413/udp
- 32414:32414/udp
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.16.0/24
42.3 Serving
Launch Plex stack
Launch the Plex stack by running docker stack deploy plex -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN (You’ll need to setup a plex.tv login for remote access / discovery to work from certain clients)
42.4 Chef’s Notes
- Plex uses port 32400 for remote access, using your plex.tv user/password to authenticate you. The inclusion of the traefik proxy in this recipe is simply to allow you to use the web client (as opposed to a client app) by connecting directly to your instance, as opposed to browsing your media via https://plex.tv/web
- Got an NVIDIA GPU? See this blog post re how to use your GPU to transcode your media!
43 PrivateBin
PrivateBin is a minimalist, open source online pastebin where the server can have zero knowledge of pasted data. We all need to paste data / log files somewhere when it doesn't make sense to paste it inline. With PrivateBin, you can own the hosting, access, and eventual deletion of this data.
![](/site_images/geek-cookbook/..----images----privatebin.png)
43.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
43.2 Preparation
Setup data locations
We’ll need a single location to bind-mount into our container, so create /var/data/privatebin, and make it world-writable (there might be a more secure way to do this!)
mkdir /var/data/privatebin
chmod 777 /var/data/privatebin/
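If chmod 777 makes you itch, a tighter (but unverified - check before relying on it) alternative is to hand the directory to the UID the container's PHP process actually runs as:

```bash
# First, confirm which UID the app runs as
docker exec $(docker ps -qf name=privatebin_app) id

# Then restrict the data directory to that UID (65534/"nobody" is an assumption)
chown -R 65534:65534 /var/data/privatebin
chmod -R 700 /var/data/privatebin
```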
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'
services:
app:
image: privatebin/nginx-fpm-alpine
volumes:
- /var/data/privatebin:/srv/data
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:privatebin.example.com
- traefik.docker.network=traefik_public
- traefik.port=8080
networks:
traefik_public:
external: true
43.3 Serving
Launch PrivateBin stack
Launch the PrivateBin stack by running docker stack deploy privatebin -c <path-to-docker-compose.yml>
Browse to your new instance at https://YOUR-FQDN, and start pasting! (There's no login; pastes are encrypted in the browser.)
43.4 Chef’s Notes
- The PrivateBin repo explains how to tweak configuration options, or to use a database instead of file storage, if your volume justifies it :)
- The inclusion of PrivateBin was due to the efforts of @gkoerk in our Discord server. Thanks Jerry!!
44 Swarmprom
Swarmprom is a starter kit for Docker Swarm monitoring with Prometheus, Grafana, cAdvisor, Node Exporter, Alert Manager and Unsee. And it’s damn sexy. See for yourself:
![](/site_images/geek-cookbook/..----images----swarmprom.png)
So what do all these components do?
- Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
- Grafana is a tool to make data beautiful.
- cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers.
- Node Exporter is a Prometheus exporter for hardware and OS metrics
- Alert Manager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations such as email, Slack, etc.
- Unsee is an alert dashboard for Alert Manager
44.1 How does this magic work?
I’d encourage you to spend some time reading https://github.com/stefanprodan/swarmprom. Stefan has included detailed explanations about which elements perform which functions, as well as how to customize your stack. (This is only a starting point, after all)
44.2 Ingredients
- Docker swarm cluster on 17.09.0 or newer (doesn’t work with CentOS Atomic, unfortunately) with persistent shared storage
- Traefik configured per design
- DNS entry for the hostnames you intend to use, pointed to your keepalived IP
44.3 Preparation
This is basically a rehash of stefanprodan’s instructions to match the way I’ve configured other recipes.
Setup oauth provider
Grafana includes decent login protections, but from what I can see, Prometheus, AlertManager, and Unsee do no authentication. In order to expose these publicly for your own consumption (my assumption for the rest of this recipe), you’ll want to prepare to run oauth_proxy containers in front of each of the 4 web UIs in this recipe.
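In practice, that means one env file per proxied UI, each with its own set of OAuth app credentials. A hypothetical sketch for alertmanager (prometheus and unsee would follow the same pattern; grafana.env is covered below):

```bash
cat > /var/data/config/swarmprom/alertmanager.env <<EOF
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
EOF
```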
Setup metrics
Edit (or create, depending on your OS) /etc/docker/daemon.json, and add the following, to enable the experimental export of metrics to Prometheus:
{
"metrics-addr" : "0.0.0.0:9323",
"experimental" : true
}
Restart docker with systemctl restart docker
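You can confirm the engine is exporting metrics with a quick curl against the address configured above:

```bash
# Should return a page of Prometheus-format metrics
curl -s http://localhost:9323/metrics | head
```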
Setup and populate data locations
We’ll need several files to bind-mount into our containers, so create directories for them and get the latest copies:
mkdir -p /var/data/swarmprom/dockerd-exporter/
cd /var/data/swarmprom/dockerd-exporter/
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/dockerd-exporter/Caddyfile
mkdir -p /var/data/swarmprom/prometheus/rules/
cd /var/data/swarmprom/prometheus/rules/
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/prometheus/rules/swarm_task.rules.yml
wget https://raw.githubusercontent.com/stefanprodan/swarmprom/master/prometheus/rules/swarm_node.rules.yml
# Directories for holding runtime data
mkdir -p /var/data/runtime/swarmprom/grafana/
mkdir -p /var/data/runtime/swarmprom/alertmanager/
mkdir -p /var/data/runtime/prometheus
chown nobody:nogroup /var/data/runtime/prometheus
Prepare Grafana
Grafana will make all the data we collect from our swarm beautiful.
Create /var/data/config/swarmprom/grafana.env, and populate with the following variables
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
# Disable basic auth (it conflicts with oauth_proxy)
GF_AUTH_BASIC_ENABLED=false
# Set this to the real-world URL to your grafana install (else you get screwy CSS thanks to oauth_proxy)
GF_SERVER_ROOT_URL=https://grafana.example.com
GF_SERVER_DOMAIN=grafana.example.com
# Set your default admin/pass here
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=ilovemybatmanunderpants
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), based on the original swarmprom docker-compose.yml file
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3.3"

volumes:
  prometheus: {}
  grafana: {}
  alertmanager: {}

configs:
  dockerd_config:
    file: /var/data/swarmprom/dockerd-exporter/Caddyfile
  node_rules:
    file: /var/data/swarmprom/prometheus/rules/swarm_node.rules.yml
  task_rules:
    file: /var/data/swarmprom/prometheus/rules/swarm_task.rules.yml

services:
  dockerd-exporter:
    image: stefanprodan/caddy
    networks:
      - internal
    environment:
      - DOCKER_GWBRIDGE_IP=172.18.0.1
    configs:
      - source: dockerd_config
        target: /etc/caddy/Caddyfile
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

  cadvisor:
    image: google/cadvisor
    networks:
      - internal
    command: -logtostderr -docker_only
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /:/rootfs:ro
      - /var/run:/var/run
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

  grafana:
    image: stefanprodan/swarmprom-grafana:5.3.4
    networks:
      - internal
    env_file: /var/data/config/swarmprom/grafana.env
    environment:
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_SMTP_ENABLED=${GF_SMTP_ENABLED:-false}
      - GF_SMTP_FROM_ADDRESS=${GF_SMTP_FROM_ADDRESS:-grafana@test.com}
      - GF_SMTP_FROM_NAME=${GF_SMTP_FROM_NAME:-Grafana}
      - GF_SMTP_HOST=${GF_SMTP_HOST:-smtp:25}
      - GF_SMTP_USER=${GF_SMTP_USER}
      - GF_SMTP_PASSWORD=${GF_SMTP_PASSWORD}
    volumes:
      - /var/data/runtime/swarmprom/grafana:/var/lib/grafana
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

  grafana-proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/swarmprom/grafana.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:grafana.swarmprom.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://grafana:3000
      -redirect-url=https://grafana.swarmprom.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

  alertmanager:
    image: stefanprodan/swarmprom-alertmanager:v0.14.0
    networks:
      - internal
    environment:
      - SLACK_URL=${SLACK_URL:-https://hooks.slack.com/services/TOKEN}
      - SLACK_CHANNEL=${SLACK_CHANNEL:-general}
      - SLACK_USER=${SLACK_USER:-alertmanager}
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
      - '--storage.path=/alertmanager'
    volumes:
      - /var/data/runtime/swarmprom/alertmanager:/alertmanager
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

  alertmanager-proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/swarmprom/alertmanager.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:alertmanager.swarmprom.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://alertmanager:9093
      -redirect-url=https://alertmanager.swarmprom.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

  unsee:
    image: cloudflare/unsee:v0.8.0
    networks:
      - internal
    environment:
      - "ALERTMANAGER_URIS=default:http://alertmanager:9093"
    deploy:
      mode: replicated
      replicas: 1

  unsee-proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/swarmprom/unsee.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:unsee.swarmprom.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://unsee:8080
      -redirect-url=https://unsee.swarmprom.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

  node-exporter:
    image: stefanprodan/swarmprom-node-exporter:v0.16.0
    networks:
      - internal
    environment:
      - NODE_ID={{.Node.ID}}
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - /etc/hostname:/etc/nodename
    command:
      - '--path.sysfs=/host/sys'
      - '--path.procfs=/host/proc'
      - '--collector.textfile.directory=/etc/node-exporter/'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
      # no collectors are explicitly enabled here, because the defaults are just fine,
      # see https://github.com/prometheus/node_exporter
      # disable ipvs collector because it barfs the node-exporter logs full with errors on my centos 7 vm's
      - '--no-collector.ipvs'
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

  prometheus:
    image: stefanprodan/swarmprom-prometheus:v2.5.0
    networks:
      - internal
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention=24h'
    volumes:
      - /var/data/runtime/swarmprom/prometheus:/prometheus
    configs:
      - source: node_rules
        target: /etc/prometheus/swarm_node.rules.yml
      - source: task_rules
        target: /etc/prometheus/swarm_task.rules.yml
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 2048M
        reservations:
          memory: 128M

  prometheus-proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/swarmprom/prometheus.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:prometheus.swarmprom.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/swarmprom/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://prometheus:9090
      -redirect-url=https://prometheus.swarmprom.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

networks:
  net:
    driver: overlay
    attachable: true
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.29.0/24
44.4 Serving
Launch Swarmprom stack
Launch the Swarmprom stack by running docker stack deploy swarmprom -c <path-to-docker-compose.yml>
Log into your new grafana instance, and check out your beautiful graphs. Then move on to drooling over Prometheus, AlertManager, and Unsee.
44.5 Chef’s Notes
1. Pay close attention to the grafana.env config. If you encounter errors about basic auth failed, or broken CSS, it's likely due to misconfiguration of one of the grafana environment variables.
III Recipes (Docker)
The individual recipes now follow.
45 Bitwarden
Heard about the latest password breach (since lunch)? Have you been pwned yet (today)? Passwords are broken, and as the number of sites for which you need to store credentials grows exponentially, so does the risk of using a common password.
“Duh, use a password manager”, you say. Sure, but be aware that even password managers have security flaws.
OK, look smartass.. no software is perfect, and there will always be a risk of your credentials being exposed in ways you didn't intend. You can at least minimize the impact of such exposure by using a password manager to store unique credentials per-site. While 1Password is king of the commercial password managers, BitWarden is king of the open-source, self-hosted password managers.
Enter Bitwarden..
![](/site_images/geek-cookbook/..----images----bitwarden.png)
Bitwarden is a free and open source password management solution for individuals, teams, and business organizations. While Bitwarden does offer a paid / hosted version, the free version comes with the following (better than any other free password manager!):
- Access & install all Bitwarden apps
- Sync all of your devices, no limits!
- Store unlimited items in your vault
- Logins, secure notes, credit cards, & identities
- Two-step authentication (2FA)
- Secure password generator
- Self-host on your own server (optional)
45.1 Ingredients
Existing:
1. [X] Docker swarm cluster with persistent shared storage
2. [X] Traefik configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your keepalived IP
45.2 Preparation
Setup data locations
We’ll need to create a directory to bind-mount into our container, so create /var/data/bitwarden:
mkdir /var/data/bitwarden
Setup environment
Create /var/data/config/bitwarden/bitwarden.env, and leave it empty for now.
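The empty env file is enough to start with; it exists so you can customize Bitwarden's behaviour later without touching the stack definition. For example (an illustrative value only - see the dani-garcia/bitwarden_rs wiki for the full list of parameters), once you've created your accounts, you might add:
# Prevent strangers from registering accounts on your instance
SIGNUPS_ALLOWED=false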
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3"

services:
  bitwarden:
    image: bitwardenrs/server
    env_file: /var/data/config/bitwarden/bitwarden.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/bitwarden:/data/:rw
    deploy:
      labels:
        - traefik.enable=true
        - traefik.web.frontend.rule=Host:bitwarden.example.com
        - traefik.web.port=80
        - traefik.hub.frontend.rule=Host:bitwarden.example.com;Path:/notifications/hub
        - traefik.hub.port=3012
        - traefik.docker.network=traefik_public
    networks:
      - traefik_public

networks:
  traefik_public:
    external: true
45.3 Serving
Launch Bitwarden stack
Launch the Bitwarden stack by running docker stack deploy bitwarden -c <path-to-docker-compose.yml>
Browse to your new instance at https://YOUR-FQDN, and create a new user account and master password (on the login page, just click the Create Account button, rather than filling in your email address and master password)
Get the apps / extensions
Once you’ve created your account, jump over to https://bitwarden.com/#download and download the apps for your mobile and browser, and start adding your logins!
45.4 Chef’s Notes
- You’ll notice we’re not using the official container images (all 6 of them required!), but rather a more lightweight version ideal for self-hosting. All of the elements are contained within a single container, and SQLite is used for the database backend.
- As mentioned above, readers should refer to the dani-garcia/bitwarden_rs wiki for details on customizing the behaviour of Bitwarden.
- The inclusion of Bitwarden was due to the efforts of @gkoerk in our Discord server. Thanks Gerry!
46 BookStack
BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.
A friendly middle ground between heavyweights like MediaWiki or Confluence and Gollum, BookStack relies on a database backend (so searching and versioning is easy), but limits itself to a pre-defined, 3-tier structure (book, chapter, page). The result is a lightweight, approachable personal documentation stack, which includes search and Markdown editing.
![](/site_images/geek-cookbook/..----images----bookstack.png)
I like to protect my public-facing web UIs with an oauth_proxy, ensuring that if an application bug (or a user misconfiguration) exposes the app to unplanned public scrutiny, I have a second layer of defense.
46.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
46.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/bookstack:
mkdir -p /var/data/bookstack/database-dump
mkdir -p /var/data/runtime/bookstack/db
Prepare environment
Create /var/data/config/bookstack/bookstack.env (the path referenced by the swarm config below), and populate with the following variables. Set the oauth_proxy variables as provided by your OAuth provider (if applicable).
# For oauth-proxy (optional)
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
# For MariaDB/MySQL database
MYSQL_RANDOM_ROOT_PASSWORD=true
MYSQL_DATABASE=bookstack
MYSQL_USER=bookstack
MYSQL_PASSWORD=secret
# Bookstack-specific variables
DB_HOST=bookstack_db:3306
DB_DATABASE=bookstack
DB_USERNAME=bookstack
DB_PASSWORD=secret
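Note that the db-backup container in the swarm config below also reads BACKUP_NUM_KEEP and BACKUP_FREQUENCY from this same env file, so (following the convention used in the KeyCloak recipe) add something like:
# For the db-backup sidecar
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d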
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  db:
    image: mariadb:10
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal
    volumes:
      - /var/data/runtime/bookstack/db:/var/lib/mysql

  proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:bookstack.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/bookstack/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://app
      -redirect-url=https://bookstack.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

  app:
    image: solidnerd/bookstack
    env_file: /var/data/config/bookstack/bookstack.env
    networks:
      - internal

  db-backup:
    image: mariadb:10
    env_file: /var/data/config/bookstack/bookstack.env
    volumes:
      - /var/data/bookstack/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        mysqldump -h db --all-databases | gzip -c > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.sql.gz
        (ls -t /dump/dump*.sql.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.sql.gz)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.33.0/24
46.3 Serving
Launch Bookstack stack
Launch the BookStack stack by running docker stack deploy bookstack -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, authenticate with oauth_proxy, and then login with username ‘admin@admin.com’ and password ‘password’.
46.4 Chef’s Notes
1. If you wanted to expose the BookStack UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the bookstack container. You’d also need to add the traefik_public network to the bookstack container.
47 Calibre-Web
The AutoPirate recipe includes Lazy Librarian, a tool for tracking, finding, and downloading eBooks. However, after the eBooks are downloaded, Lazy Librarian is not much use for organising, tracking, and actually reading them.
Calibre-Web could be described as “Plex (or Emby) for eBooks” - it’s a web-based interface to manage your eBook library, screenshot below:
![](/site_images/geek-cookbook/..----images----calibre-web.png)
Of course, you probably already manage your eBooks using the excellent Calibre, but this is primarily a (powerful) desktop application. Calibre-Web is an alternative way to manage / view your existing Calibre database, meaning you can continue to use Calibre on your desktop if you wish.
As a long-time Kindle user, Calibre-Web brings (among others) the following features which appeal to me:
- Filter and search by titles, authors, tags, series and language
- Create custom book collections (shelves)
- Support for editing eBook metadata and deleting eBooks from Calibre library
- Support for converting eBooks from EPUB to Kindle format (mobi/azw)
- Send eBooks to Kindle devices with the click of a button
- Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz)
- Upload new books in PDF, epub, fb2 format
47.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
47.2 Preparation
Setup data locations
We’ll need a directory to store some config data for the Calibre-Web container, so create /var/data/calibre-web, and ensure the directory is owned by the same user which owns your Calibre data (below):
mkdir /var/data/calibre-web
chown calibre:calibre /var/data/calibre-web # for example
Ensure that your Calibre library is accessible to the swarm (i.e., exists on shared storage), and that the same user who owns the config directory above, also owns the actual calibre library data (including the ebooks managed by Calibre).
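For example, if your Calibre library lives at /srv/data/Archive/Ebooks/calibre (the path mounted in the swarm config below), something like this would do it:
chown -R calibre:calibre /srv/data/Archive/Ebooks/calibre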
Prepare environment
We’ll use an oauth-proxy to protect the UI from public access, so create calibre-web.env, and populate with the following variables:
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=<make this a random string>
PUID=
PGID=
Follow the instructions to setup your oauth provider. You need to setup a unique key/secret for each instance of the proxy you want to run, since in each case the callback URL will differ.
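For example, with the GitHub provider used below, the OAuth app's authorization callback URL would be something like https://calibre-web.example.com/oauth2/callback (/oauth2/callback being oauth2_proxy's default callback path).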
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  app:
    image: technosoft2000/calibre-web
    env_file: /var/data/config/calibre-web/calibre-web.env
    volumes:
      - /var/data/calibre-web:/config
      - /srv/data/Archive/Ebooks/calibre:/books
    networks:
      - internal

  proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/calibre-web/calibre-web.env
    dns_search: hq.example.com
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:calibre-web.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/calibre-web/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://app:8083
      -redirect-url=https://calibre-web.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.18.0/24
47.3 Serving
Launch Calibre-Web
Launch the Calibre-Web stack by running docker stack deploy calibre-web -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN. You’ll be directed to the initial GUI configuration. Set the first field (Location of Calibre database) to “/books/”, and when complete, login using the default username “admin” with password “admin123”.
47.4 Chef’s Notes
- Yes, Calibre does provide a server component. But it’s not as fully-featured as Calibre-Web (e.g., you can’t use it to send ebooks directly to your Kindle)
- A future enhancement might be integrating this recipe with the filestore for NextCloud, so that the desktop database (Calibre) can be kept synced with Calibre-Web.
48 Collabora Online
Development of this recipe is sponsored by The Common Observatory. Thanks guys!
![](/site_images/geek-cookbook/..----images----common_observatory.png)
Collabora Online Development Edition (or “CODE”) is the lightweight, or “home” edition of the commercially-supported Collabora Online platform.
It’s basically the LibreOffice interface in a web-browser. CODE is not a standalone app; it’s a backend intended to be accessed via “WOPI” from an existing interface (in our case, NextCloud)
![](/site_images/geek-cookbook/..----images----collabora-online.png)
48.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname (i.e. “collabora.your-domain.com”) you intend to use for Collabora, pointed to your keepalived IP
- NextCloud installed and operational
- Docker-compose installed on your node(s) - this is a special case which needs to run outside of Docker Swarm
48.2 Preparation
Explanation for complexity
Due to the clever magic that Collabora does to present a “headless” LibreOffice UI to the browser, the CODE docker container requires system capabilities which cannot be granted under Docker Swarm (specifically, MKNOD).
So we have to run Collabora itself in the next best thing to Docker swarm - a docker-compose stack. Using docker-compose will at least provide us with consistent and version-able configuration files.
This presents another problem though - Docker Swarm with Traefik is superb at making all our stacks “just work” with ingress routing and LetsEncrypt certificates. We don’t want to have to do this manually (like a cave-man), so we engage in some trickery to allow us to still use our swarmed Traefik to terminate SSL.
We run a single swarmed Nginx instance, which forwards all requests to an upstream, with the target IP of the docker0 interface, on port 9980 (the port exposed by the CODE container)
We attach the necessary labels to the Nginx container to instruct Traefik to setup a front/backend for collabora.<ourdomain>. Now incoming requests to https://collabora.<ourdomain> will hit Traefik, be forwarded to nginx (wherever in the swarm it’s running), and then to port 9980 on the same node that nginx is running on.
What if we’re running multiple nodes in our swarm, and nginx ends up on a different node to the one running Collabora via docker-compose? Well, either constrain nginx to the same node as Collabora (example below), or just launch an instance of Collabora on every node. It’s just a rendering / GUI engine after all; it doesn’t hold any persistent data.
Here’s a (highly technical) diagram to illustrate:
![](/site_images/geek-cookbook/..----images----collabora-traffic-flow.png)
Setup data locations
We’ll need a directory for holding config to bind-mount into our containers, so create /var/data/collabora, and /var/data/config/collabora for holding the docker/swarm config:
mkdir /var/data/collabora/
mkdir /var/data/config/collabora/
Prepare environment
Create /var/data/config/collabora/collabora.env, and populate with the following variables, customized for your installation.
Note the following:
1. Variables are in lower-case, unlike our standard convention. This is to align with the CODE container
2. Set domain to your NextCloud domain, and escape all the periods as per the example
3. Set your server_name to collabora.<yourdomain>. Escaping periods is unnecessary
4. Your password cannot include triangular brackets - the entrypoint script will insert this password into an XML document, and triangular brackets will make bad(tm) things happen
username=admin
password=ilovemypassword
domain=nextcloud\.batcave\.com
server_name=collabora.batcave.com
termination=true
Create docker-compose.yml
Create /var/data/config/collabora/docker-compose.yml as follows:
version: "3.0"

services:
  local-collabora:
    image: funkypenguin/collabora
    # the funkypenguin version has a patch to include "termination" behind an SSL-terminating
    # reverse proxy (traefik), see CODE PR #50.
    # Once merged, the official container can be used again.
    #image: collabora/code
    env_file: /var/data/config/collabora/collabora.env
    volumes:
      - /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
    cap_add:
      - MKNOD
    ports:
      - 9980:9980
Create nginx.conf
Create /var/data/config/collabora/nginx.conf as follows, changing the server_name value to match the environment variable you established above:
upstream collabora-upstream {
  # Run collabora under docker-compose, since it needs MKNOD cap, which can't be provided by Docker Swarm.
  # The IP here is the typical IP of docker0 - change if yours is different.
  server 172.17.0.1:9980;
}

server {
  listen 80;
  server_name collabora.batcave.com;

  # static files
  location ^~ /loleaflet {
    proxy_pass http://collabora-upstream;
    proxy_set_header Host $http_host;
  }

  # WOPI discovery URL
  location ^~ /hosting/discovery {
    proxy_pass http://collabora-upstream;
    proxy_set_header Host $http_host;
  }

  # Main websocket
  location ~ /lool/(.*)/ws$ {
    proxy_pass http://collabora-upstream;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
  }

  # Admin Console websocket
  location ^~ /lool/adminws {
    proxy_buffering off;
    proxy_pass http://collabora-upstream;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
  }

  # download, presentation and image upload
  location ~ /lool {
    proxy_pass http://collabora-upstream;
    proxy_set_header Host $http_host;
  }
}
Create loolwsd.xml
Until we understand how to pass trusted network parameters to the entrypoint script using environment variables, we have to maintain a manually edited version of loolwsd.xml, and bind-mount it into our collabora container.
The way we do this is: we mount /var/data/collabora/loolwsd.xml as /etc/loolwsd/loolwsd.xml-new, then allow the container to create its default /etc/loolwsd/loolwsd.xml, copy this default over our /var/data/collabora/loolwsd.xml (via /etc/loolwsd/loolwsd.xml-new), and then update the container to use our /var/data/collabora/loolwsd.xml as /etc/loolwsd/loolwsd.xml instead (confused yet?)
Create an empty /var/data/collabora/loolwsd.xml by running touch /var/data/collabora/loolwsd.xml. We’ll populate this in the next section…
Setup Docker Swarm
Create /var/data/config/collabora/collabora.yml as follows, changing the traefik frontend_rule as necessary:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: "3.0"

services:
  nginx:
    image: nginx:latest
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:collabora.batcave.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
        - traefik.frontend.passHostHeader=true
      # uncomment this block if you want to force nginx to always run on one node (i.e., the one running collabora)
      #placement:
      #  constraints:
      #    - node.hostname == ds1
    volumes:
      - /var/data/config/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro

networks:
  traefik_public:
    external: true
48.3 Serving
Generate loolwsd.xml
Well. This is awkward. There’s no documented way to make Collabora work with Docker Swarm, so we’re doing a bit of a hack here, until I understand how to pass these arguments via environment variables.
Launching Collabora is (for now) a 2-step process. First, we launch collabora itself, by running:
cd /var/data/config/collabora/
docker-compose up -d
Output looks something like this:
root@ds1:/var/data/config/collabora# docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use `docker stack deploy`.
Pulling local-collabora (funkypenguin/collabora:latest)...
latest: Pulling from funkypenguin/collabora
7b8b6451c85f: Pull complete
ab4d1096d9ba: Pull complete
e6797d1788ac: Pull complete
e25c5c290bde: Pull complete
4b8e1b074e06: Pull complete
f51a3d1fb75e: Pull complete
8b826e2ae5ad: Pull complete
Digest: sha256:6cd38cb5cbd170da0e3f0af85cecf07a6bc366e44555c236f81d5b433421a39d
Status: Downloaded newer image for funkypenguin/collabora:latest
Creating collabora_local-collabora_1 ...
Creating collabora_local-collabora_1 ... done
root@ds1:/var/data/config/collabora#
Now exec into the container (from another shell session), by running docker exec -it <container name> /bin/bash. Make a copy of /etc/loolwsd/loolwsd.xml, by running cp /etc/loolwsd/loolwsd.xml /etc/loolwsd/loolwsd.xml-new, and then exit the container with exit.
Delete the collabora container by hitting CTRL-C in the docker-compose shell, running docker-compose rm, and then altering this line in docker-compose.yml:
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml-new
To this:
- /var/data/collabora/loolwsd.xml:/etc/loolwsd/loolwsd.xml
Edit /var/data/collabora/loolwsd.xml, find the storage.filesystem.wopi section, and add lines like this to the existing allow rules (to allow IPv6-enabled hosts to still connect with their IPv4 addresses):
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
Find the net.post_allow section, and add lines like these:
<host desc="RFC1918 private addressing in inet6 format">::ffff:10\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.1[6789]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.2[0-9]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:172\.3[01]\.[0-9]{1,3}\.[0-9]{1,3}</host>
<host desc="RFC1918 private addressing in inet6 format">::ffff:192\.168\.[0-9]{1,3}\.[0-9]{1,3}</host>
Find these 2 lines:
<ssl desc="SSL settings">
    <enable type="bool" default="true">true</enable>
And change to:
<ssl desc="SSL settings">
    <enable type="bool" default="true">false</enable>
Now re-launch collabora (with the correct loolwsd.xml this time) under docker-compose, by running:
docker-compose up -d
Once collabora is up, we launch the swarm stack, by running:
docker stack deploy collabora -c /var/data/config/collabora/collabora.yml
Visit https://collabora.<yourdomain>/loleaflet/dist/admin/admin.html and confirm you can login with the user/password you specified in collabora.env
Integrate into NextCloud
In NextCloud, Install the Collabora Online app (https://apps.nextcloud.com/apps/richdocuments), and then under Settings -> Collabora Online, set your Collabora Online Server to https://collabora.<your domain>
![](/site_images/geek-cookbook/..----images----collabora-online-in-nextcloud.png)
Now browse your NextCloud files. Click the plus (+) sign to create a new document, and create either a new document, spreadsheet, or presentation. Name your document and then click on it. If Collabora is setup correctly, you’ll shortly enter into the rich editing interface provided by Collabora :)
Development of this recipe is sponsored by The Common Observatory. Thanks guys!
![](/site_images/geek-cookbook/..----images----common_observatory.png)
48.4 Chef’s Notes
1. Yes, this recipe is complicated. And you probably only care if you feel strongly about using Open Source rich document editing in the browser, vs using something like Google Docs. It works impressively well however, once it works. I hope to make this recipe simpler once the CODE developers have documented how to pass optional parameters as environment variables.
49 Ghost
Ghost is “a fully open source, hackable platform for building and running a modern online publication.”
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---ghost.png)
49.1 Ingredients
Existing:
1. [X] Docker swarm cluster with persistent shared storage
2. [X] Traefik configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your keepalived IP
49.2 Preparation
Setup data locations
Create the location for the bind-mount of the application data, so that it’s persistent:
mkdir -p /var/data/ghost
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  ghost:
    image: ghost:1-alpine
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/data/ghost/:/var/lib/ghost/content
    networks:
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:ghost.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=2368

networks:
  traefik_public:
    external: true
49.3 Serving
Launch Ghost stack
Launch the Ghost stack by running docker stack deploy ghost -c <path-to-docker-compose.yml>
Create your first administrative account at https://YOUR-FQDN/admin/
49.4 Chef’s Notes
- If I wasn’t committed to a static-site-generated blog, Ghost is the platform I’d use for my blog.
- A default install using the SQLite database takes 548K of space:
[root@ds1 ghost]# du -sh /var/data/ghost/
548K /var/data/ghost/
[root@ds1 ghost]#
50 GitLab
GitLab is a self-hosted alternative to GitHub. The most common use case is (a set of) developers with the desire for the rich feature-set of GitHub, but with unlimited private repositories.
GitLab does maintain an official “Omnibus” container, but for this recipe I prefer the “dockerized gitlab” project, since it allows distribution of the various GitLab components across multiple swarm nodes.
50.1 Ingredients
Existing:
1. [X] Docker swarm cluster with persistent shared storage
2. [X] Traefik configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your keepalived IP
50.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/gitlab:
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {postgresql,redis,gitlab}
Prepare environment
You’ll need to know the following:
1. Choose a password for postgresql, you’ll need it for DB_PASS in the compose file (below)
2. Generate 3 passwords using pwgen -Bsv1 64. You’ll use these for the XXX_KEY_BASE environment variables below
Create /var/data/config/gitlab/gitlab.env (the path referenced by the swarm config below), and populate with at least the following variables (the full set is available at https://github.com/sameersbn/docker-gitlab#available-configuration-parameters):
DB_USER=gitlab
DB_PASS=gitlabdbpass
DB_NAME=gitlabhq_production
DB_EXTENSION=pg_trgm
DB_ADAPTER=postgresql
DB_HOST=postgresql
TZ=Pacific/Auckland
REDIS_HOST=redis
REDIS_PORT=6379
GITLAB_TIMEZONE=Auckland
GITLAB_HTTPS=true
SSL_SELF_SIGNED=false
GITLAB_HOST=gitlab.example.com
GITLAB_PORT=443
GITLAB_SSH_PORT=2222
GITLAB_SECRETS_DB_KEY_BASE=CFf7sS3kV2nGXBtMHDsTcjkRX8PWLlKTPJMc3lRc6GCzJDdVljZ85NkkzJ8mZbM5
GITLAB_SECRETS_SECRET_KEY_BASE=h2LBVffktDgb6BxM3B97mDSjhnSNwLc5VL2Hqzq9cdrvBtVw48WSp5wKj5HZrJM5
GITLAB_SECRETS_OTP_KEY_BASE=t9LPjnLzbkJ7Nt6LZJj6hptdpgG58MPJPwnMMMDdx27KSwLWHDrz9bMWXQMjq5mp
GITLAB_ROOT_PASSWORD=changeme
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  redis:
    image: sameersbn/redis:latest
    command:
      - --loglevel warning
    volumes:
      - /var/data/gitlab/redis:/var/lib/redis:Z
    networks:
      - internal

  postgresql:
    image: sameersbn/postgresql:9.6-2
    env_file: /var/data/config/gitlab/gitlab.env
    volumes:
      - /var/data/gitlab/postgresql:/var/lib/postgresql:Z
    networks:
      - internal

  gitlab:
    image: sameersbn/gitlab:latest
    env_file: /var/data/config/gitlab/gitlab.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:gitlab.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=80
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
    ports:
      - "2222:22"
    volumes:
      - /var/data/gitlab/gitlab:/home/git/data:Z

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.2.0/24
50.3 Serving
Launch gitlab
Launch the GitLab stack by running docker stack deploy gitlab -c <path-to-docker-compose.yml>
Log into your new instance at https://[your FQDN], with user “root” and the password you specified in gitlab.env.
50.4 Chef’s Notes
A few comments on decisions taken in this design:
- I use the sameersbn/gitlab:latest image, rather than a specific version. This lets me execute updates simply by redeploying the stack (and why wouldn’t I want the latest version?)
51 Gitlab Runner
Some features of GitLab require a “runner” (in the sense of a “gopher” or a “minion”). A runner “registers” itself with a GitLab instance, and is given tasks to run. Tasks include running Continuous Integration (CI) builds, and building container images.
While a runner isn’t strictly required to use GitLab, if you want to do CI, you’ll need at least one. There are many ways to deploy a runner - this recipe focuses on the docker container model.
51.1 Ingredients
Existing:
1. [X] Docker swarm cluster with persistent shared storage
2. [X] Traefik configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your keepalived IP
4. [X] GitLab installation (see previous recipe)
51.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our runner containers, so create them in /var/data/gitlab:
cd /var/data
mkdir gitlab
cd gitlab
mkdir -p {runners/1,runners/2}
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  thing1:
    image: gitlab/gitlab-runner
    volumes:
      - /var/data/gitlab/runners/1:/etc/gitlab-runner
    networks:
      - internal

  thing2:
    image: gitlab/gitlab-runner
    volumes:
      - /var/data/gitlab/runners/2:/etc/gitlab-runner
    networks:
      - internal

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.23.0/24
Configure runners
From your GitLab UI, you can retrieve a “token” necessary to register a new runner. To register the runner, you can either create config.toml in each runner’s bind-mounted folder (example below), or just docker exec into each runner container and execute gitlab-runner register to interactively generate config.toml.
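For example, to register the first runner interactively (the name filter is an assumption based on the stack and service names used above):
docker exec -it $(docker ps -q -f name=gitlab-runner_thing1) gitlab-runner register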
Sample runner config.toml:
concurrent = 1
check_interval = 0

[[runners]]
  name = "myrunner1"
  url = "https://gitlab.example.com"
  token = "<long string here>"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
51.3 Serving
Launch runners
Launch the runner stack by running docker stack deploy gitlab-runner -c <path-to-docker-compose.yml>
Once registered (per above), your runners should appear in your GitLab admin UI, ready to run CI jobs.
51.4 Chef’s Notes
- You’ll note that I setup 2 runners. One is locked to a single project (this cookbook build), and the other is a shared runner. I wanted to ensure that one runner was always available to run CI for this project, even if I’d tied up another runner on something heavy-duty, like a container build. Customize this to your use case.
- Originally I deployed runners in the same stack as GitLab, but I found that they would frequently fail to start properly when I launched the stack. I think that this was because the runners started so quickly (and GitLab starts sooo slowly!), that they always started up reporting that the GitLab instance was invalid or unavailable. I had issues with CI builds stuck permanently in a “pending” state, which were only resolved by restarting the runner. Having the runners deployed in a separate stack to GitLab avoids this problem.
52 Gollum
Gollum is a simple wiki system built on top of Git. A Gollum Wiki is simply a git repository (either bare or regular) of a specific nature:
- A Gollum repository’s contents are human-editable, unless the repository is bare.
- Pages are unique text files which may be organized into directories any way you choose.
- Other content can also be included, for example images, PDFs and headers/footers for your pages.
Gollum pages:
- May be written in a variety of markups.
- Can be edited with your favourite system editor or IDE (changes will be visible after committing) or with the built-in web interface.
- Can be displayed in all versions (commits).
![](/site_images/geek-cookbook/..----images----gollum.png)
As you’ll note in the (real world) screenshot above, my requirements for a personal wiki are:
- Portable across my devices
- Supports images
- Full-text search
- Supports inter-note links
- Revision control
Gollum meets all these requirements, and as an added bonus, is extremely fast and lightweight.
Since Gollum itself offers no user authentication, this design secures gollum behind an oauth2 proxy, so that in order to gain access to the Gollum UI at all, oauth2 authentication (to GitHub, GitLab, Google, etc) must have already occurred.
52.1 Ingredients
Existing:
1. [X] Docker swarm cluster with persistent shared storage
2. [X] Traefik configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your keepalived IP
52.2 Preparation
Setup data locations
We’ll need an empty git repository in /var/data/gollum for our data:
mkdir /var/data/gollum
cd /var/data/gollum
git init
Prepare environment
- Choose an oauth provider, and obtain a client ID and secret
- Create gollum.env, and populate with the following variables (you can make the cookie secret whatever you like)
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  app:
    image: dakue/gollum
    volumes:
      - /var/data/gollum:/gollum
    networks:
      - internal
    command: |
      --allow-uploads
      --emoji
      --user-icons gravatar

  proxy:
    image: a5huynh/oauth2_proxy
    env_file: /var/data/config/gollum/gollum.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:gollum.example.com
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/gollum/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://app:4567
      -redirect-url=https://gollum.example.com
      -http-address=http://0.0.0.0:4180
      -email-domain=example.com
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.9.0/24
52.3 Serving
Launch Gollum stack
Launch the Gollum stack by running docker stack deploy gollum -c <path-to-docker-compose.yml>
Authenticate against your OAuth provider, and then start editing your wiki!
52.4 Chef’s Notes
1. In the current implementation, Gollum is a “single user” tool only. The contents of the wiki are saved as markdown files under /var/data/gollum, and all the git commits are currently “Anonymous”
53 InstaPy
InstaPy is an Instagram bot, developed by Tim Grossman. Tim describes his motivation and experiences developing the bot here.
What’s an Instagram bot? Basically, you feed the bot your Instagram user/password, and it executes follows/unfollows/likes/comments on your behalf based on rules you set. (I set my bot to like one photo tagged with “#penguin” per-run)
![](/site_images/geek-cookbook/..----images----instapy.png)
Great power, right? A client (yes, you can hire me!) asked me to integrate InstaPy into their swarm, and this recipe is the result.
53.1 Ingredients
Existing:
1. [X] Docker swarm cluster with persistent shared storage
2. [X] Traefik configured per design
3. [X] DNS entry for the hostname you intend to use, pointed to your keepalived IP
53.2 Preparation
Setup data locations
We need a data location to store InstaPy’s config, as well as its log files. Create /var/data/instapy per below
mkdir -p /var/data/instapy/logs
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  web:
    command: ["./wait-for", "selenium:4444", "--", "python", "docker_quickstart.py"]
    environment:
      - PYTHONUNBUFFERED=0
    # Modify the image to whatever Tim's image tag ends up as. I used funkypenguin/instapy for my build
    image: funkypenguin/instapy:latest
    # When using swarm, you can't use relative paths, so the following needs to be set to the full
    # filesystem path to your logs and docker_quickstart.py.
    # Bind-mount docker_quickstart.py, since now that we're using a public image, we can't "bake"
    # our credentials into the image anymore
    volumes:
      - /var/data/instapy/logs:/code/logs
      - /var/data/instapy/instapy.py:/code/docker_quickstart.py:ro
    # This section allows docker to restart the container when it exits (either normally or
    # abnormally), which ensures that InstaPy keeps re-running. Tweak the delay to avoid being
    # banned for excessive activity
    deploy:
      restart_policy:
        condition: any
        delay: 3600s

  selenium:
    image: selenium/standalone-chrome-debug
    ports:
      - "5900:5900"
Command your bot
Create a variation of https://github.com/timgrossmann/InstaPy/blob/master/docker_quickstart.py at /var/data/instapy/instapy.py (the file we bind-mounted in the swarm config above)
Change at least the following:
insta_username = ''
insta_password = ''
Here’s an example of my config, set to like a single penguin-pic per run:
insta_username = 'funkypenguin'
insta_password = 'followmemypersonalbrandisawesome'
dont_like = ['food','girl','batman','gotham','dead','nsfw','porn','slut','baby','tv','athlete','nhl','hockey','estate','music','band','clothes']
friend_list = ['therock','ruinporn']
# If you want to enter your Instagram Credentials directly just enter
# username=<your-username-here> and password=<your-password> into InstaPy
# e.g like so InstaPy(username="instagram", password="test1234")
bot = InstaPy(username=insta_username, password=insta_password, selenium_local_session=False)
bot.set_selenium_remote_session(selenium_url='http://selenium:4444/wd/hub')
bot.login()
bot.set_upper_follower_count(limit=10000)
bot.set_lower_follower_count(limit=10)
bot.set_comments([u'Cool :penguin:!', u'Awesome :penguin:!!', u'Nice :penguin:!!'])
bot.set_dont_include(friend_list)
bot.set_dont_like(dont_like)
#bot.set_ignore_if_contains(ignore_words)
# OK, so go through my feed and like stuff, interacting with people I follow
# bot.like_by_feed(amount=3, randomize=True, unfollow=True, interact=True)
# Now find posts tagged as #penguin, and like 'em, commenting 50% of the time
bot.set_do_comment(True, percentage=50)
bot.set_comments([u'Cool :penguin:!', u'Awesome :penguin:!!', u'Nice :penguin:!!'])
bot.like_by_tags(['#penguin'], amount=1)
# goodnight, sweet bot
bot.end()
53.3 Serving
Destroy all humans
Launch the bot by running docker stack deploy instapy -c <path-to-docker-compose.yml>
While you’re waiting for Docker to pull down the images, educate yourself on the risk of a robotic uprising:
https://www.youtube.com/watch?v=B1BdQcJ2ZYY
After swarm deploys, you won’t see much, but you can monitor what InstaPy is doing, by running docker service logs instapy_web.
You can also watch the bot at work by VNCing to your docker swarm, password “secret”. You’ll see Selenium browser window cycling away, interacting with all your real/fake friends on Instagram :)
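For example, with a typical VNC client (5900 being the port published on the selenium container above):
vncviewer <your-docker-node>:5900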
53.4 Chef’s Notes
1. Amazingly, my bot has ended up tagging more non-penguins than actual penguins. I don’t understand how Instagrammers come up with their hashtags!
54 KeyCloak
KeyCloak is “an open source identity and access management solution”. Using a local database, or a variety of backends (think OpenLDAP), you can provide Single Sign-On (SSO) using OpenID, OAuth 2.0, and SAML. KeyCloak’s OpenID provider can be used in combination with Traefik Forward Auth, to protect vulnerable services with an extra layer of authentication.
Initial development of this recipe was sponsored by The Common Observatory. Thanks guys!
![](/site_images/geek-cookbook/..----images----common_observatory.png)
![](/site_images/geek-cookbook/..----images----keycloak.png)
54.1 Ingredients
Existing:
* [X] Docker swarm cluster with persistent shared storage
* [X] Traefik configured per design
* [X] DNS entry for the hostname (i.e. “keycloak.your-domain.com”) you intend to use, pointed to your keepalived IP
54.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container for both runtime and backup data, so create them as follows
mkdir -p /var/data/runtime/keycloak/database
mkdir -p /var/data/keycloak/database-dump
Prepare environment
Create /var/data/config/keycloak/keycloak.env (the path referenced by the swarm config below), and populate with the following variables, customized for your own domain structure.
# Technically, this could be auto-detected, but we prefer to be prescriptive
DB_VENDOR=postgres
DB_DATABASE=keycloak
DB_ADDR=keycloak-db
DB_USER=keycloak
DB_PASSWORD=myuberpassword
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=ilovepasswords
# This is required to run keycloak behind traefik
PROXY_ADDRESS_FORWARDING=true
# What's our hostname?
KEYCLOAK_HOSTNAME=keycloak.batcave.com
# Tell Postgres what user/password to create
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=myuberpassword
Create /var/data/config/keycloak/keycloak-backup.env (again, the path referenced by the swarm config below), and populate with the following, so that your database can be backed up to the filesystem daily:
PGHOST=keycloak-db
PGUSER=keycloak
PGPASSWORD=myuberpassword
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy.
version: '3'

services:
  keycloak:
    image: jboss/keycloak
    env_file: /var/data/config/keycloak/keycloak.env
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - traefik_public
      - internal
    deploy:
      labels:
        - traefik.frontend.rule=Host:keycloak.batcave.com
        - traefik.port=8080
        - traefik.docker.network=traefik_public

  keycloak-db:
    env_file: /var/data/config/keycloak/keycloak.env
    image: postgres:10.1
    volumes:
      - /var/data/runtime/keycloak/database:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    networks:
      - internal

  keycloak-db-backup:
    image: postgres:10.1
    env_file: /var/data/config/keycloak/keycloak-backup.env
    volumes:
      - /var/data/keycloak/database-dump:/dump
      - /etc/localtime:/etc/localtime:ro
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
    networks:
      - internal

networks:
  traefik_public:
    external: true
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.49.0/24
54.3 Serving
Launch KeyCloak stack
Launch the KeyCloak stack by running docker stack deploy keycloak -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, and login with the user/password you defined in keycloak.env.
54.4 Chef’s Notes
55 Create KeyCloak Users
This is not a complete recipe - it’s an optional component of the KeyCloak recipe, but has been split into its own page to reduce complexity.
Unless you plan to authenticate against an outside provider (OpenLDAP, below, for example), you’ll want to create some local users.
55.1 Ingredients
Existing:
* [X] KeyCloak recipe deployed successfully
Create User
Within the “Master” realm (no need for more realms yet), navigate to Manage -> Users, and then click Add User at the top right:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-user-1.png)
Populate your new user’s username (it’s the only mandatory field)
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-user-2.png)
Set User Credentials
Once your user is created, to set their password, click on the “Credentials” tab, and proceed to reset it. Set the password to non-temporary, unless you like extra work!
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-user-3.png)
55.2 Summary
We’ve setup users in KeyCloak, which we can now use to authenticate to KeyCloak, when it’s used as an OIDC Provider, potentially to secure vulnerable services using Traefik Forward Auth.
Created:
* [X] Username / password to authenticate against KeyCloak
56 Authenticate KeyCloak against OpenLDAP
This is not a complete recipe - it’s an optional component of the KeyCloak recipe, but has been split into its own page to reduce complexity.
KeyCloak gets really sexy when you integrate it into your OpenLDAP stack (also, it’s great not to have to play with ugly LDAP tree UIs). Note that OpenLDAP integration is not necessary if you want to use KeyCloak with Traefik Forward Auth - all you need for that is local users, and an OIDC client.
56.1 Ingredients
Existing:
* [X] KeyCloak recipe deployed successfully
New:
* [ ] An OpenLDAP server (assuming you want to authenticate against it)
56.2 Preparation
You’ll need to have completed the OpenLDAP recipe
You start in the “Master” realm - but mouseover the realm name, to display a dropdown box allowing you to add a new realm:
Create Realm
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---sso-stack-keycloak-1.png)
Enter a name for your new realm, and click “Create”:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---sso-stack-keycloak-2.png)
Setup User Federation
Once in the desired realm, click on User Federation, and click Add Provider. On the next page (“Required Settings”), set the following:
- Edit Mode : Writeable
- Vendor : Other
- Connection URL : ldap://openldap
- Users DN : ou=People,<your base DN>
- Authentication Type : simple
- Bind DN : cn=admin,<your base DN>
- Bind Credential : <your chosen admin password>
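For example, assuming a base DN of dc=batcave,dc=com (consistent with the batcave.com examples used elsewhere in this recipe), you'd end up with:
Connection URL : ldap://openldap
Users DN       : ou=People,dc=batcave,dc=com
Bind DN        : cn=admin,dc=batcave,dc=com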
Save your changes, and then navigate back to “User Federation” > Your LDAP name > Mappers:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---sso-stack-keycloak-3.png)
For each of the following mappers, click the name, and set the “Read Only” flag to “Off” (this enables 2-way sync between KeyCloak and OpenLDAP)
- last name
- username
- first name
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---sso-stack-keycloak-4.png)
56.3 Summary
We’ve setup a new realm in KeyCloak, and configured read-write federation to an OpenLDAP backend. We can now manage our LDAP users using either KeyCloak or LDAP directly, and we can protect vulnerable services using Traefik Forward Auth.
Created:
* [X] KeyCloak realm in read-write federation with OpenLDAP directory
56.4 Chef’s Notes
57 Add OIDC Provider to KeyCloak
This is not a complete recipe - it’s an optional component of the KeyCloak recipe, but has been split into its own page to reduce complexity. Having an authentication provider is not much use until you start authenticating things against it! In order to authenticate against KeyCloak using OpenID Connect (OIDC), which is required for Traefik Forward Auth, we’ll setup a client in KeyCloak…
57.1 Ingredients
Existing:
* [X] KeyCloak recipe deployed successfully
New:
* [ ] The URI(s) to protect with the OIDC provider. Refer to the Traefik Forward Auth recipe for more information
57.2 Preparation
Create Client
Within the “Master” realm (no need for more realms yet), navigate to Clients, and then click Create at the top right:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-client-1.png)
Enter a name for your client (remember, we’re authenticating applications now, not users, so use an application-specific name):
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-client-2.png)
Configure Client
Once your client is created, set at least the following, and click Save
- Access Type : Confidential
- Valid Redirect URIs : <The URIs you want to protect>
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-client-3.png)
Retrieve Client Secret
Now that you’ve changed the access type, and clicked Save, an additional Credentials tab appears at the top of the window. Click on the tab, and capture the KeyCloak-generated secret. This secret, plus your client name, is required to authenticate against KeyCloak via OIDC.
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---keycloak-add-client-4.png)
57.3 Summary
We’ve setup an OIDC client in KeyCloak, which we can now use to protect vulnerable services using Traefik Forward Auth. The OIDC discovery URL provided by KeyCloak in the master realm is https://<your-keycloak-url>/realms/master/.well-known/openid-configuration
Created:
* [X] Client ID and Client Secret used to authenticate against KeyCloak with OpenID Connect
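You can confirm the OIDC endpoint is alive by fetching the discovery document (a quick check, substituting your own KeyCloak URL; jq is optional, purely for readability):
curl -s https://<your-keycloak-url>/realms/master/.well-known/openid-configuration | jq .
The response is a JSON document containing (among others) the issuer, authorization_endpoint and token_endpoint which OIDC clients like Traefik Forward Auth consume.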
57.4 Chef’s Notes
58 OpenLDAP
Development of this recipe is sponsored by The Common Observatory. Thanks guys!
![](/site_images/geek-cookbook/..----images----common_observatory.png)
LDAP is probably the most ubiquitous authentication backend, before the current era of “stupid social sign-ons”. Many of the recipes featured in the cookbook (NextCloud, Kanboard, Gitlab, etc) offer LDAP integration.
58.1 Big deal, who cares?
If you’re the only user of your tools, it probably doesn’t bother you too much to setup new user accounts for every tool. As soon as you start sharing tools with collaborators (think 10 staff using NextCloud), you suddenly feel the pain of managing a growing collection of local user accounts per-service.
Enter OpenLDAP - the most crusty, PITA, fiddly platform to setup (yes, I’m a little bitter, dynamic configuration backend!), but hugely useful for one job - a Lightweight Protocol for managing a Directory used for Access (see what I did there?)
The nice thing about OpenLDAP is, like MySQL, once you’ve setup the server, you probably never have to interact directly with it. There are many tools which will let you interact with your LDAP database via a(n ugly) UI.
This recipe combines the raw power of OpenLDAP with the flexibility and featureset of LDAP Account Manager.
![](/site_images/geek-cookbook/..----images----openldap.jpeg)
58.2 What’s the takeaway?
What you’ll end up with is a directory structure which will allow integration with popular tools (NextCloud, Kanboard, Gitlab, etc), as well as with KeyCloak (an upcoming recipe), for true SSO.
58.3 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname (i.e. “lam.your-domain.com”) you intend to use for LDAP Account Manager, pointed to your keepalived IP
58.4 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them (note that the runtime data lives under /var/data/runtime, per the data layout):
mkdir -p /var/data/openldap/openldap
mkdir -p /var/data/runtime/openldap/
Prepare environment
Create /var/data/openldap/openldap.env, and populate with the following variables, customized for your own domain structure. Take care with LDAP_DOMAIN, this is core to your directory structure, and can’t easily be changed later.
LDAP_DOMAIN=batcave.gotham
LDAP_ORGANISATION=BatCave Inc
LDAP_ADMIN_PASSWORD=supermansucks
LDAP_TLS=false
# Use these if you plan to protect the LDAP Account Manager webUI with an oauth_proxy
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
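If you’re wondering what to use for OAUTH2_PROXY_COOKIE_SECRET, it’s just a random value (depending on your oauth2_proxy version, a 16-, 24- or 32-byte secret is expected if cookie encryption is enabled). One way to generate one:
openssl rand -base64 24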
Create authenticated-emails.txt, and populate it with the email addresses (matched to GitHub user accounts, in my case) to which you want to grant access, using OAuth2.
Create config.cfg
The Dockerized version of LDAP Account Manager is a little fiddly. In order to maintain a config file which persists across container restarts, we need to present the container with a copy of /var/www/html/config/lam.conf, tweaked for our own requirements.
Create /var/data/openldap/lam/config/config.cfg as follows:
???+ note “Much scroll, very text. Click here to collapse it for better readability”
# password to add/delete/rename configuration profiles (default: lam)
password: {SSHA}D6AaX93kPmck9wAxNlq3GF93S7A= R7gkjQ==
# default profile, without ".conf"
default: batcave
# log level
logLevel: 4
# log destination
logDestination: SYSLOG
# session timeout in minutes
sessionTimeout: 30
# list of hosts which may access LAM
allowedHosts:
# list of hosts which may access LAM Pro self service
allowedHostsSelfService:
# encrypt session data
encryptSession: true
# Password: minimum password length
passwordMinLength: 0
# Password: minimum uppercase characters
passwordMinUpper: 0
# Password: minimum lowercase characters
passwordMinLower: 0
# Password: minimum numeric characters
passwordMinNumeric: 0
# Password: minimum symbolic characters
passwordMinSymbol: 0
# Password: minimum character classes (0-4)
passwordMinClasses: 0
# Password: checked rules
checkedRulesCount: -1
# Password: must not contain part of user name
passwordMustNotContain3Chars: false
# Password: must not contain user name
passwordMustNotContainUser: false
# Email format (default/unix)
mailEOL: default
# PHP error reporting (default/system)
errorReporting: default
# License
license:
Create <profile>.conf
While config.cfg (above) defines application-level configuration, <profile>.conf is used to configure “domain-specific” configuration. You probably only need a single profile, but LAM could theoretically be used to administer several totally unrelated LDAP servers, ergo the concept of “profiles”.
Create your profile (you chose a default profile in config.cfg above, remember?) by creating /var/data/openldap/lam/config/<profile>.conf, as follows:
???+ note “Much scroll, very text. Click here to collapse it for better readability”
# LDAP Account Manager configuration
#
# Please do not modify this file manually. The configuration can be done completely by the LAM GUI.
#
###################################################################################################

# server address (e.g. ldap://localhost:389 or ldaps://localhost:636)
ServerURL: ldap://openldap:389

# list of users who are allowed to use LDAP Account Manager
# names have to be separated by semicolons
# e.g. admins: cn=admin,dc=yourdomain,dc=org;cn=root,dc=yourdomain,dc=org
Admins: cn=admin,dc=batcave,dc=gotham

# password to change these preferences via webfrontend (default: lam)
Passwd: {SSHA}h39N9+gg/Qf1K/986VkKrjWlkcI= S/IAUQ==

# suffix of tree view
# e.g. dc=yourdomain,dc=org
treesuffix: dc=batcave,dc=gotham

# default language (a line from config/language)
defaultLanguage: en_GB.utf8

# Path to external Script
scriptPath:

# Server of external Script
scriptServer:

# Access rights for home directories
scriptRights: 750

# Number of minutes LAM caches LDAP searches.
cachetimeout: 5

# LDAP search limit.
searchLimit: 0

# Module settings
modules: posixAccount_user_minUID: 10000
modules: posixAccount_user_maxUID: 30000
modules: posixAccount_host_minMachine: 50000
modules: posixAccount_host_maxMachine: 60000
modules: posixGroup_group_minGID: 10000
modules: posixGroup_group_maxGID: 20000
modules: posixGroup_pwdHash: SSHA
modules: posixAccount_pwdHash: SSHA

# List of active account types.
activeTypes: user,group

types: suffix_user: ou=People,dc=batcave,dc=gotham
types: attr_user: #uid;#givenName;#sn;#uidNumber;#gidNumber
types: modules_user: inetOrgPerson,posixAccount,shadowAccount
types: suffix_group: ou=Groups,dc=batcave,dc=gotham
types: attr_group: #cn;#gidNumber;#memberUID;#description
types: modules_group: posixGroup

# Password mail subject
lamProMailSubject: Your password was reset

# Password mail text
lamProMailText: Dear @@givenName@@ @@sn@@,+::++::+your password was reset to: @@newPassword@@+::++::++::+Best regards+::++::+deskside support+::+

serverDisplayName:

# enable TLS encryption
useTLS: no

# follow referrals
followReferrals: false

# paged results
pagedResults: false

referentialIntegrityOverlay: false

# time zone
timeZone: Europe/London

scriptUserName:
scriptSSHKey:
scriptSSHKeyPassword:

# Access level for this profile.
accessLevel: 100

# Login method.
loginMethod: list

# Search suffix for LAM login.
loginSearchSuffix: dc=batcave,dc=gotham

# Search filter for LAM login.
loginSearchFilter: uid=%USER%

# Bind DN for login search.
loginSearchDN:

# Bind password for login search.
loginSearchPassword:

# HTTP authentication for LAM login.
httpAuthentication: false

# Password mail from
lamProMailFrom:

# Password mail reply-to
lamProMailReplyTo:

# Password mail is HTML
lamProMailIsHTML: false

# Allow alternate address
lamProMailAllowAlternateAddress: true

jobsBindPassword:
jobsBindUser:
jobsDatabase:
jobsDBHost:
jobsDBPort:
jobsDBUser:
jobsDBPassword:
jobsDBName:
jobToken: 190339140545

pwdResetAllowSpecificPassword: true
pwdResetAllowScreenPassword: true
pwdResetForcePasswordChange: true
pwdResetDefaultPasswordOutput: 2

twoFactorAuthentication: none
twoFactorAuthenticationURL: https://localhost
twoFactorAuthenticationInsecure:
twoFactorAuthenticationLabel:
twoFactorAuthenticationOptional:
twoFactorAuthenticationCaption:

tools: tool_hide_toolOUEditor: false
tools: tool_hide_toolProfileEditor: false
tools: tool_hide_toolSchemaBrowser: false
tools: tool_hide_toolServerInformation: false
tools: tool_hide_toolTests: false
tools: tool_hide_toolPDFEditor: false
tools: tool_hide_toolFileUpload: false
tools: tool_hide_toolMultiEdit: false
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this, at /var/data/config/openldap/openldap.yml:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3'
services:
openldap:
image: osixia/openldap
env_file: /var/data/config/openldap/openldap.env
networks:
- traefik_public
- auth_internal
volumes:
- /var/data/runtime/openldap/:/var/lib/ldap
- /var/data/openldap/openldap/:/etc/ldap/slapd.d
lam:
image: jacksgt/ldap-account-manager
networks:
- auth_internal
volumes:
- /var/data/openldap/lam/config/config.cfg:/var/www/html/config/config.cfg
- /var/data/openldap/lam/config/batcave.conf:/var/www/html/config/batcave.conf
lam-proxy:
image: funkypenguin/oauth2_proxy
env_file: /var/data/config/openldap/openldap.env
networks:
- traefik_public
- auth_internal
deploy:
labels:
- traefik.frontend.rule=Host:lam.batcave.com
- traefik.docker.network=traefik_public
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://lam:8080
-redirect-url=https://lam.batcave.com
-http-address=http://0.0.0.0:4180
-email-domain=batcave.com
-provider=github
networks:
# Used to expose lam-proxy to external access, and openldap to keycloak
traefik_public:
external: true
# Used to expose openldap to other apps which want to talk to LDAP, including LAM
auth_internal:
external: true
However, you’re likely to want to use OpenLDAP with KeyCloak, whose JBOSS startup script assumes a single interface, and will crash in a ball of fire if you try to assign multiple interfaces to the container.
Since we’re going to want KeyCloak to be able to talk to OpenLDAP, we have no choice but to leave the OpenLDAP container on the “traefik_public” network. We can, however, create another overlay network (auth_internal, see below), add it to the openldap container, and use it to provide OpenLDAP access to our other stacks.
Create another stack config file (/var/data/config/openldap/auth.yml) containing just the auth_internal network, and a dummy container:
version: "3.2"
# What is this?
#
# This stack exists solely to deploy the auth_internal overlay network, so that
# other stacks (including keycloak and openldap) can attach to it
services:
scratch:
image: scratch
deploy:
replicas: 0
networks:
- internal
networks:
internal:
driver: overlay
attachable: true
ipam:
config:
- subnet: 172.16.39.0/24
58.5 Serving
Launch OpenLDAP stack
Create the auth_internal overlay network by running docker stack deploy auth -c /var/data/config/openldap/auth.yml, then launch the OpenLDAP stack by running docker stack deploy openldap -c /var/data/config/openldap/openldap.yml
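If you want to satisfy yourself that the overlay network came up as intended before launching stacks which depend on it, you can inspect it (note that docker prefixes the network name with the stack name, hence auth_internal):
docker network inspect auth_internal --format 'driver: {{.Driver}}, attachable: {{.Attachable}}'
You should see “driver: overlay, attachable: true”.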
Log into your new LAM instance at https://YOUR-FQDN.
On first login, you’ll be prompted to create the “ou=People” and “ou=Groups” elements. Proceed to create these.
You’ve now setup your OpenLDAP directory structure, and your administration interface, and hopefully won’t have to interact with the “special” LDAP Account Manager interface much again!
Create your users using the “New User” button.
Development of this recipe is sponsored by The Common Observatory. Thanks guys!
![](/site_images/geek-cookbook/..----images----common_observatory.png)
58.6 Chef’s Notes
- The KeyCloak recipe illustrates how to integrate KeyCloak with your LDAP directory, giving you a cleaner interface to manage users, and a raft of SSO / OAuth features.
59 Mail Server
Many of the recipes that follow require email access of some kind. It’s normally possible to use a hosted service such as SendGrid, or just a gmail account. If (like me) you’d like to self-host email for your stacks, then the following recipe provides a full-stack mail server running on the docker HA swarm.
Of value to me in choosing docker-mailserver were:
- Automatically renews LetsEncrypt certificates
- Creation of email accounts across multiple domains (i.e., the same container gives me mailboxes wekan@wekan.example.com and gitlab@gitlab.example.com)
- The entire configuration is based on flat files, so there’s no database or persistence to worry about
docker-mailserver doesn’t include a webmail client, and one is not strictly needed. Rainloop can be added either as another service within the stack, or as a standalone service. Rainloop will be covered in a future recipe.
59.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- LetsEncrypt authorized email address for domain
- Access to manage DNS records for domains
59.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/docker-mailserver:
cd /var/data
mkdir docker-mailserver
cd docker-mailserver
mkdir {maildata,mailstate,config,letsencrypt,rainloop}
Get LetsEncrypt certificate
Decide on the FQDN to assign to your mailserver. You can service multiple domains from a single mailserver - i.e., bob@dev.example.com and daphne@prod.example.com can both be served by mail.example.com.
The docker-mailserver container can renew our LetsEncrypt certs for us, but it can’t generate them. To do this, we need to run certbot (from a container) to request the initial certs and create the appropriate directory structure.
In the example below, since I’m already using Traefik to manage the LE certs for my web platforms, I opted to use the DNS challenge to prove my ownership of the domain. The certbot client will prompt you to add a DNS record for domain verification.
docker run -ti --rm -v \
"$(pwd)"/letsencrypt:/etc/letsencrypt certbot/certbot \
--manual --preferred-challenges dns certonly \
-d mail.example.com
Get setup.sh
docker-mailserver comes with a handy bash script for managing the stack (which is really just a wrapper around the container). It’ll make our setup easier, so download it into the root of your configuration/data directory, and make it executable:
curl -o setup.sh \
https://raw.githubusercontent.com/tomav/docker-mailserver/master/setup.sh
chmod a+x ./setup.sh
Create email accounts
For every email address required, run ./setup.sh email add <email> <password> to create the account. The command returns no output.
You can run ./setup.sh email list to confirm all of your addresses have been created.
Create DKIM DNS entries
Run ./setup.sh config dkim to create the necessary DKIM entries. The command returns no output.
Examine the keys created by opendkim to identify the DNS TXT records required:
for i in `find config/opendkim/keys/ -name mail.txt`; do \
echo $i; \
cat $i; \
done
You’ll end up with something like this:
config/opendkim/keys/gitlab.example.com/mail.txt
mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
"p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5cJnCR8BeCj5HYg\
eRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTqqO0QgarIN63WE\
2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB" ) ; ----- DKIM key mail for\
gitlab.example.com
[root@ds1 mail]#
Create the necessary DNS TXT entries for your domain(s). Note that although opendkim splits the record across two lines, the actual record should be concatenated on creation. I.e., the DNS TXT record above should read:
"v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCYuQqDg2ZG8ZOfI1PvarF1Gcr5c\
JnCR8BeCj5HYgeRohSrxKL5utPEF/AWAxXYwnKpgYN837fu74GfqsIuOhu70lPhGV+O2gFVgpXYWHELvIiTq\
qO0QgarIN63WE2gzE4s0FckfLrMuxMoXr882wuzuJhXywGxOavybmjpnNHhbQIDAQAB"
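Once your DNS changes have propagated, you can verify the record from any host with dig (substituting your own domain):
dig +short TXT mail._domainkey.gitlab.example.com
The output should match the concatenated record above.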
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3.2 - because we need to expose mail ports in “host mode”), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3.2'
services:
mail:
image: tvial/docker-mailserver:latest
ports:
- target: 25
published: 25
protocol: tcp
mode: host
- target: 587
published: 587
protocol: tcp
mode: host
- target: 993
published: 993
protocol: tcp
mode: host
- target: 995
published: 995
protocol: tcp
mode: host
volumes:
- /var/data/docker-mailserver/maildata:/var/mail
- /var/data/docker-mailserver/mailstate:/var/mail-state
- /var/data/docker-mailserver/config:/tmp/docker-mailserver
- /var/data/docker-mailserver/letsencrypt:/etc/letsencrypt
env_file: /var/data/docker-mailserver/docker-mailserver.env
networks:
- internal
deploy:
replicas: 1
rainloop:
image: hardware/rainloop
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:rainloop.example.com
- traefik.docker.network=traefik_public
- traefik.port=8888
volumes:
- /var/data/docker-mailserver/rainloop:/rainloop/data
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.2.0/24
A sample docker-mailserver.env file looks like this:
ENABLE_SPAMASSASSIN=1
ENABLE_CLAMAV=1
ENABLE_POSTGREY=1
ONE_DIR=1
OVERRIDE_HOSTNAME=mail.example.com
OVERRIDE_DOMAINNAME=mail.example.com
POSTMASTER_ADDRESS=admin@example.com
PERMIT_DOCKER=network
SSL_TYPE=letsencrypt
59.3 Serving
Launch mailserver
Launch the mail server stack by running docker stack deploy docker-mailserver -c <path-to-docker-mailserver.yml>
59.4 Chef’s Notes
- One of the elements of this design which I didn’t appreciate at first is that since the config is entirely file-based, setup.sh can be run on any container host, provided it has the shared data mounted. This means that even though docker-mailserver was not designed with docker swarm in mind, it works perfectly with swarm. I.e., from any node, regardless of where the container is actually running, you’re able to add/delete email addresses, view logs, etc.
- If you’re using sieve with Rainloop, take note of the workaround identified by ggilley
60 Minio
Minio is a high performance distributed object storage server, designed for
large-scale private cloud infrastructure.
However, at its simplest, Minio allows you to expose a local filestructure via the Amazon S3 API. You could, for example, use it to provide access to “buckets” (folders) of data on your filestore, secured by access/secret keys, just like AWS S3. You can further interact with your “buckets” with common tools, just as if they were hosted on S3.
Under a more advanced configuration, Minio runs in distributed mode, with features including high-availability, mirroring, erasure-coding, and “bitrot detection”.
![](/site_images/geek-cookbook/..----images----minio.png)
Possible use-cases:
- Sharing files (protected by user accounts with secrets) via HTTPS, either as read-only or read-write, in such a way that the bucket could be mounted to a remote filesystem using common S3-compatible tools, like goofys. Ever wanted to share a folder with friends, but didn’t want to open additional firewall ports etc?
- Simulating S3 in a dev environment
- Mirroring an S3 bucket locally
60.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
60.2 Preparation
Setup data locations
We’ll need a directory to hold our minio file store, as well as our minio client config, so create a structure at /var/data/minio:
mkdir /var/data/minio
cd /var/data/minio
mkdir -p {mc,data}
Prepare environment
Create minio.env, and populate with the following variables
MINIO_ACCESS_KEY=<some random, complex string>
MINIO_SECRET_KEY=<another random, complex string>
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3.1'
services:
app:
image: minio/minio
env_file: /var/data/config/minio/minio.env
volumes:
- /var/data/minio/data:/data
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:minio.example.com
- traefik.port=9000
command: minio server /data
networks:
traefik_public:
external: true
60.3 Serving
Launch Minio stack
Launch the Minio stack by running docker stack deploy minio -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with the access key and secret key you specified in minio.env.
If you created /var/data/minio as a fresh, empty directory, you’ll see nothing. If you referenced existing data, you should see all subdirectories in your existing folder represented as buckets.
If all you need is single-user access to your data, you’re done!
If, however, you want to expose data to multiple users, at different privilege levels, you’ll need the minio client to create some users and (potentially) policies…
Setup minio client
To administer the Minio server, we need the Minio client. While it’s possible to download the minio client and run it locally, it’s just as easy to do it within a small (5Mb) container.
I created an alias on my docker nodes, allowing me to run mc quickly:
alias mc='docker run -it -v /var/data/minio/mc/:/root/.mc/ --network traefik_public minio/mc'
Now I use the alias to launch the client shell, and connect to my minio instance (I could also use the external, traefik-provided URL)
root@ds1:~# mc config host add minio http://app:9000 admin iambatman
mc: Configuration written to `/root/.mc/config.json`. Please update your access cred\
entials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `minio` successfully.
root@ds1:~#
Add (readonly) user
Use mc to add a (readonly or readwrite) user, by running mc admin user add minio <access key> <secret key> <access level>
Example:
root@ds1:~# mc admin user add minio spiderman peterparker readonly
Added user `spiderman` successfully.
root@ds1:~#
Confirm by listing your users (admin is excluded from the list):
root@node1:~# mc admin user list minio
enabled spiderman readonly
root@node1:~#
Make a bucket accessible to users
By default, all buckets have no “policies” attached to them, and so can only be accessed by the administrative user. Having created some readonly/read-write users above, you’ll be wanting to grant them access to buckets.
The simplest permission scheme is “on or off”. Either a bucket has a policy, or it doesn’t. (I believe you can apply policies to subdirectories of buckets in a more advanced configuration)
After no policy, the most restrictive policy you can attach to a bucket is “download”. This policy will allow authenticated users to download contents from the bucket. Apply the “download” policy to a bucket by running mc policy download minio/<bucket name>, i.e.:
root@ds1:# mc policy download minio/comics
Access permission for `minio/comics` is set to `download`
root@ds1:#
Advanced bucketing
There are some clever complexities you can achieve with user/bucket policies, including:
- A public bucket, which requires no authentication to read or even write (for a public dropbox, for example)
- A special bucket, hidden from most users, but available to VIP users by application of a custom “canned policy”
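As a sketch of the “canned policy” idea above: the policy document uses standard S3 policy syntax, and the policy/bucket names below are hypothetical. (The exact subcommands vary across mc releases - older releases use mc admin policy add/set, newer ones mc admin policy create/attach.)
# define a policy granting read-only access to a "vip" bucket
cat > vip-readonly.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::vip", "arn:aws:s3:::vip/*"]
    }
  ]
}
EOF
# register the policy with minio, then apply it to a user
mc admin policy add minio vip-readonly vip-readonly.json
mc admin policy set minio vip-readonly user=spiderman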
Mount a minio share remotely
Having setup your buckets, users, and policies - you can give out your minio external URL, and user access keys to your remote users, and they can S3-mount your buckets, interacting with them based on their user policy (read-only or read/write)
I tested the S3 mount using goofys, “a high-performance, POSIX-ish Amazon S3 file system written in Go”.
First, I created ~/.aws/credentials, as follows:
[default]
aws_access_key_id = spiderman
aws_secret_access_key = peterparker
And then I ran (in the foreground, for debugging): goofys -f --debug_s3 --debug_fuse --endpoint=https://traefik.example.com <bucketname> <local mount point>
To permanently mount an S3 bucket using goofys, I’d add something like this to /etc/fstab:
goofys#bucket /mnt/mountpoint fuse _netdev,allow_other,--file-mode=0666 0 0
60.4 Chef’s Notes
- There are many S3-filesystem-mounting tools available, I just picked Goofys because it’s simple. Google is your friend :)
- Some applications (like NextCloud) can natively mount S3 buckets
- Some backup tools (like Duplicity) can backup directly to S3 buckets
61 Piwik
Piwik is a rich open-source web analytics platform, which can be coupled with commercial plugins for additional features. It’s most simply described as “self-hosted Google Analytics”.
![](/site_images/geek-cookbook/..----images----piwik.png)
61.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
61.2 Preparation
Limitation of docker-swarm
The docker-swarm load-balancer is a problem for deploying Piwik, since it rewrites the source address of every incoming packet to that of whichever docker node received the packet into the swarm - a PITA for analytics, since the original source IP of each request is obscured.
The issue is tracked at #25526, and there is a workaround, but it requires running the piwik “app” container on every swarm node…
Prepare environment
Create piwik.env, and populate with the following variables
MYSQL_ROOT_PASSWORD=set-me-and-use-me-when-setting-up-piwik
Setup docker swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3'
services:
db:
image: mysql
volumes:
- /var/data/piwik/mysql/runtime:/var/lib/mysql
env_file: /var/data/piwik/piwik.env
networks:
- internal
app:
image: piwik:apache
volumes:
- /var/data/piwik/config:/var/www/html/config
networks:
- internal
- traefik
deploy:
mode: global
labels:
- traefik.frontend.rule=Host:piwik.example.com
- traefik.docker.network=traefik
- traefik.port=80
cron:
image: piwik:apache
volumes:
- /var/data/piwik/config:/var/www/html/config
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
while /bin/true; do
su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/console core:archive"\
www-data
sleep 3600
done
EOF'
networks:
- internal
networks:
traefik:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.4.0/24
61.3 Serving
Launch the Piwik stack by running docker stack deploy piwik -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, and follow the wizard to complete the setup.
62 Portainer
Portainer is a lightweight sexy UI for visualizing your docker environment. It also happens to integrate well with Docker Swarm clusters, which makes it a great fit for our stack.
![](/site_images/geek-cookbook/..----images----portainer.png)
This is a “lightweight” recipe, because Portainer is so “lightweight”. But it is shiny…
62.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
62.2 Preparation
Setup data locations
Create a folder to store portainer’s persistent data:
mkdir /var/data/portainer
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: "3"
services:
app:
image: portainer/portainer
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/data/portainer:/data
networks:
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:portainer.funkypenguin.co.nz
- traefik.port=9000
placement:
constraints: [node.role == manager]
command: -H unix:///var/run/docker.sock
networks:
traefik_public:
external: true
62.3 Serving
Launch Portainer stack
Launch the Portainer stack by running docker stack deploy portainer -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN. You’ll be prompted to set your admin user/password.
62.4 Chef’s Notes
1. I wanted to use oauth2_proxy to provide an additional layer of security for Portainer, but the proxy seems to break the authentication mechanism, effectively making the stack so secure, that it can’t be logged into!
63 Realms
Realms is a git-based wiki (like Gollum, but with basic authentication and registration)
![](/site_images/geek-cookbook/..----images----realms.png)
Features include:
- Built with Bootstrap 3.
- Markdown (w/ HTML Support).
- Syntax highlighting (Ace Editor).
- Live preview.
- Collaboration (TogetherJS / Firepad).
- Drafts saved to local storage.
- Handlebars for templates and logic.
Also of note is that the docker image is 1.17GB in size, and the handful of commits to the source GitHub repo in the past year have listed TravisCI build failures. This has many of the hallmarks of an abandoned project, to my mind.
63.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
63.2 Preparation
Setup data locations
Since we’ll start with a basic Realms install, let’s just create a single directory to hold the realms (SQLite) data:
mkdir /var/data/realms/
Create realms.env, and populate with the following variables (if you intend to use an oauth_proxy to double-secure your installation, which I recommend)
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: "3"
services:
realms:
image: realms/realms-wiki:latest
env_file: /var/data/config/realms/realms.env
volumes:
- /var/data/realms:/home/wiki/data
networks:
- internal
realms_proxy:
image: funkypenguin/oauth2_proxy:latest
env_file : /var/data/config/realms/realms.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:realms.funkypenguin.co.nz
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/realms/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://realms:5000
-redirect-url=https://realms.funkypenguin.co.nz
-http-address=http://0.0.0.0:4180
-email-domain=funkypenguin.co.nz
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.35.0/24
63.3 Serving
Launch Realms stack
Launch the Realms stack by running docker stack deploy realms -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, authenticate against oauth_proxy, and you’re immediately presented with Realms wiki, waiting for a fresh edit ;)
63.4 Chef’s Notes
- If you wanted to expose the Realms UI directly, you could remove the oauth2_proxy from the design, and move the traefik_public-related labels directly to the realms container. You’d also need to add the traefik_public network to the realms container.
- The inclusion of Realms was due to the efforts of @gkoerk in our Discord server. Thanks gkoerk!
64 Tiny Tiny RSS
Tiny Tiny RSS is a self-hosted, AJAX-based RSS reader, which rose to popularity as a replacement for Google Reader. It supports geeky advanced features, such as:
- Plugins and themeing in a drop-in fashion
- Filtering (discard all articles with title matching “trump”)
- Sharing articles via a unique public URL/feed
![](/site_images/geek-cookbook/..----images----tiny-tiny-rss.png)
64.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
64.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/ttrss:
mkdir /var/data/ttrss
cd /var/data/ttrss
mkdir -p {database,database-dump}
Prepare environment
Create ttrss.env, and populate with the following variables, customizing at least the database password (POSTGRES_PASSWORD and DB_PASS) and the TTRSS_SELF_URL to point to your installation.
# Variables for postgres:latest
POSTGRES_USER=ttrss
POSTGRES_PASSWORD=mypassword
DB_EXTENSION=pg_trgm
# Variables for pg_dump running in postgres/latest (used for db-backup)
PGUSER=ttrss
PGPASSWORD=mypassword
PGHOST=db
BACKUP_NUM_KEEP=3
BACKUP_FREQUENCY=1d
# Variables for funkypenguin/docker-ttrss
DB_USER=ttrss
DB_PASS=mypassword
DB_PORT=5432
DB_PORT_5432_TCP_ADDR=db
DB_PORT_5432_TCP_PORT=5432
TTRSS_SELF_URL=https://ttrss.example.com
TTRSS_REPO=https://github.com/funkypenguin/tt-rss.git
S6_BEHAVIOUR_IF_STAGE2_FAILS=2
Setup docker swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3'
services:
db:
image: postgres:latest
env_file: /var/data/ttrss/ttrss.env
volumes:
- /var/data/ttrss/database:/var/lib/postgresql/data
networks:
- internal
app:
image: funkypenguin/docker-ttrss
env_file: /var/data/ttrss/ttrss.env
deploy:
labels:
- traefik.frontend.rule=Host:ttrss.funkypenguin.co.nz
- traefik.docker.network=traefik
- traefik.port=8080
networks:
- internal
- traefik
db-backup:
image: postgres:latest
env_file: /var/data/ttrss/ttrss.env
volumes:
- /var/data/ttrss/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.5.0/24
64.3 Serving
Launch TTRSS stack
Launch the TTRSS stack by running docker stack deploy ttrss -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN - the first user you create will be an administrative user.
64.4 Chef’s Notes
There are several TTRSS containers available on docker hub, none of them “official”. I chose x86dev’s container for its features - such as my favorite skins and plugins, and the daily automatic updates from the “rolling release” master. Some of the features of the container I use are due to a PR I submitted:
- Docker swarm loses the docker-compose concept of “dependencies” between containers. In the case of this stack, the application server typically starts up before the database container, which causes the database autoconfiguration scripts to fail, and brings up the app in a broken state. To prevent this, I include “wait-for”, which (combined with “S6_BEHAVIOUR_IF_STAGE2_FAILS=2”) will cause the app container to restart (and attempt to auto-configure itself) until the database is ready. See the sketch after this list.
- The upstream git URL changed recently, but my experience of the new repository is that it’s SO slow, that the initial “git clone” on setup of the container times out. To work around this, I created my own repo, cloned upstream, pushed it into my repo, and pointed the container at my own repo with TTRSS_REPO. I don’t get the latest code changes, but at least the app container starts up. When upstream git is performing properly, I’ll remove TTRSS_REPO to revert back to the “rolling release”.
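As an aside, the “wait-for” behaviour mentioned in the first note amounts to something like the following minimal sketch (not the container’s actual script, just the pattern; pg_isready ships with the postgres client tools):
# block until the database service accepts connections, then continue startup
until pg_isready -h db -p 5432 -U ttrss; do
  echo "waiting for database..."
  sleep 5
done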
65 Wallabag
Wallabag is a self-hosted webapp which allows you to save URLs to “read later”, similar to Instapaper or Pocket. Like Instapaper (but not Pocket, sadly), Wallabag allows you to annotate any pages you grab for your own reference.
All saved data (pages, annotations, images, tags, etc) are stored on your own server, and can be shared/exported in a variety of formats, including ePub and PDF.
![](/site_images/geek-cookbook/..----images----wallabag.png)
There are plugins for Chrome and Firefox, as well as apps for iOS, Android, etc. Wallabag will also integrate nicely with my favorite RSS reader, Miniflux (for which there is an existing recipe).
Here’s a video which shows off the UI a bit more.
65.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
65.2 Preparation
Setup data locations
We need a filesystem location to store images that Wallabag downloads from the original sources, to re-display when you read your articles, as well as nightly database dumps (which you should backup), so create something like this:
mkdir -p /var/data/wallabag
cd /var/data/wallabag
mkdir -p {images,db-dump}
Prepare environment
Create wallabag.env, and populate with the following variables. The only variable you have to change is SYMFONY__ENV__DOMAIN_NAME - this must be the URL that your Wallabag instance will be available at (else you’ll have no CSS)
# For the DB container
POSTGRES_PASSWORD=wallabag
POSTGRES_USER=wallabag
# For the wallabag container
SYMFONY__ENV__DATABASE_DRIVER=pdo_pgsql
SYMFONY__ENV__DATABASE_HOST=db
SYMFONY__ENV__DATABASE_PORT=5432
SYMFONY__ENV__DATABASE_NAME=wallabag
SYMFONY__ENV__DATABASE_USER=wallabag
SYMFONY__ENV__DATABASE_PASSWORD=wallabag
SYMFONY__ENV__DOMAIN_NAME=https://wallabag.example.com
SYMFONY__ENV__DATABASE_DRIVER_CLASS=Wallabag\CoreBundle\Doctrine\DBAL\Driver\CustomPostgreSQLDriver
SYMFONY__ENV__MAILER_HOST=127.0.0.1
SYMFONY__ENV__MAILER_USER=~
SYMFONY__ENV__MAILER_PASSWORD=~
SYMFONY__ENV__FROM_EMAIL=wallabag@example.com
SYMFONY__ENV__FOSUSER_REGISTRATION=false
# If you decide to protect wallabag with an oauth_proxy, complete these
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
Now create wallabag-backup.env in the same folder, with the following contents. (This is necessary to prevent environment variables required for backup from breaking the DB container)
# For database backups
PGUSER=wallabag
PGPASSWORD=wallabag
PGHOST=db
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3'
services:
wallabag:
image: wallabag/wallabag
env_file: /var/data/config/wallabag/wallabag.env
networks:
- internal
volumes:
- /var/data/wallabag/images:/var/www/wallabag/web/assets/images
wallabag_proxy:
image: a5huynh/oauth2_proxy
env_file: /var/data/config/wallabag/wallabag.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:wallabag.example.com
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /var/data/config/wallabag/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://wallabag:80
-redirect-url=https://wallabag.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
db:
image: postgres
env_file: /var/data/config/wallabag/wallabag.env
dns_search:
- hq.example.com
volumes:
- /var/data/runtime/wallabag/data:/var/lib/postgresql/data
networks:
- internal
db-backup:
image: postgres:latest
env_file: /var/data/config/wallabag/wallabag-backup.env
volumes:
- /var/data/wallabag/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
(ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
redis:
image: redis:alpine
networks:
- internal
import-instapaper:
image: wallabag/wallabag
env_file: /var/data/config/wallabag/wallabag.env
networks:
- internal
command: |
import instapaper
import-pocket:
image: wallabag/wallabag
env_file: /var/data/config/wallabag/wallabag.env
networks:
- internal
command: |
import pocket
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.21.0/24
65.3 Serving
Launch Wallabag stack
Launch the Wallabag stack by running docker stack deploy wallabag -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with user “wallabag” and default password “wallabag”.
Enable asynchronous imports
You’ll have noticed redis, plus the pocket/instapaper-importing containers included in the .yml above. Redis is there to allow asynchronous imports, and pocket and instapaper are there since they’re likely the most popular platforms you’d want to import from. Other possibilities (you’ll need to adjust the .yml) are readability, firefox, chrome, and wallabag_v1 and wallabag_v2.
Even with all these elements in place, you still need to enable Redis under Internal Settings -> Import, via the admin user in the webUI. Here’s a screenshot to help you find it:
![](/site_images/geek-cookbook/..----images----wallabag_imports.png)
65.4 Chef’s Notes
- If you wanted to expose the Wallabag UI directly (required for the iOS/Android apps), you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wallabag container. You’d also need to add the traefik_public network to the wallabag container. I found the iOS app to be unreliable and clunky, so elected to leave my oauth_proxy enabled, and to simply use the webUI on my mobile devices instead. YMMV.
- I’ve not tested the email integration, but you’d need an SMTP server listening on port 25 (since we can’t change the port) to use it
66 Wekan
Wekan is an open-source kanban board which allows a card-based task and to-do management, similar to tools like WorkFlowy or Trello.
![](/site_images/geek-cookbook/..----images----wekan.jpg)
Wekan allows you to create Boards, on which Cards can be moved around between a number of Columns. Boards can have many members, allowing for easy collaboration - just add everyone who should be able to work with you on the board, and you are good to go! You can assign colored Labels to cards to facilitate grouping and filtering; additionally, you can add members to a card, for example to assign a task to someone.
There’s a video of the developer showing off the app, as well as a functional demo.
For added privacy, this design secures wekan behind an oauth2 proxy, so that in order to gain access to the wekan UI at all, oauth2 authentication (to GitHub, GitLab, Google, etc) must have already occurred.
66.1 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
66.2 Preparation
Setup data locations
We’ll need several directories to bind-mount into our container, so create them in /var/data/wekan:
mkdir /var/data/wekan
cd /var/data/wekan
mkdir -p {wekan-db,wekan-db-dump}
Prepare environment
You’ll need to know the following:
- Choose an oauth provider, and obtain a client ID and secret
- Create wekan.env, and populate with the following variables
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
MONGO_URL=mongodb://wekandb:27017/wekan
ROOT_URL=https://wekan.example.com
MAIL_URL=smtp://wekan@wekan.example.com:password@mail.example.com:587/
MAIL_FROM="Wekan <wekan@wekan.example.com>"
# Mongodb specific database dump details
BACKUP_NUM_KEEP=7
BACKUP_FREQUENCY=1d
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: '3'
services:
wekandb:
image: mongo:latest
command: mongod --smallfiles --oplogSize 128
networks:
- internal
volumes:
- /var/data/runtime/wekan/database:/data/db
- /var/data/wekan/database-dump:/dump
proxy:
image: a5huynh/oauth2_proxy
env_file: /var/data/config/wekan/wekan.env
networks:
- traefik
- internal
volumes:
- /var/data/oauth_proxy/authenticated-emails.txt:/authenticated-emails.txt
deploy:
labels:
- traefik.frontend.rule=Host:wekan.example.com
- traefik.docker.network=traefik
- traefik.port=4180
command: |
-cookie-secure=false
-upstream=http://wekan:80
-redirect-url=https://wekan.example.com
-http-address=http://0.0.0.0:4180
-email-domain=example.com
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
wekan:
image: wekanteam/wekan:latest
networks:
- internal
env_file: /var/data/config/wekan/wekan.env
db-backup:
image: mongo:latest
env_file : /var/data/config/wekan/wekan.env
volumes:
- /var/data/wekan/database-dump:/dump
- /etc/localtime:/etc/localtime:ro
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
sleep 2m
while /bin/true; do
mongodump -h wekandb --gzip --archive=/dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.mongo.gz
(ls -t /dump/dump*.mongo.gz|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.mongo.gz)|sort|uniq -u|xargs rm -- {}
sleep $$BACKUP_FREQUENCY
done
EOF'
networks:
- internal
networks:
traefik:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.3.0/24
66.3 Serving
Launch Wekan stack
Launch the Wekan stack by running docker stack deploy wekan -c <path-to-docker-compose.yml>
Log into your new instance at https://YOUR-FQDN, with user “root” and the password you specified in wekan.env.
66.4 Chef’s Notes
1. If you wanted to expose the Wekan UI directly, you could remove the oauth2_proxy from the design, and move the traefik-related labels directly to the wekan container. You’d also need to add the traefik network to the wekan container.
67 Wetty
Wetty is a responsive, modern terminal, in your web browser. Yes, your browser. When combined with secure authentication and SSL encryption, it becomes a useful tool for quick and easy remote access.
![](/site_images/geek-cookbook/..----images----wetty.png)
67.1 Why would you need SSH in a browser window?
Need shell access to a node with no external access? Deploy Wetty behind an oauth_proxy with a SSL-terminating reverse proxy (traefik), and suddenly you have the means to SSH to your private host from any web browser (protected by your oauth_proxy of course, and your OAuth provider’s 2FA)
Here are some other possible use cases:
- Access to SSH / CLI from an environment where outgoing SSH is locked down, or SSH client isn’t / can’t be installed. (i.e., a corporate network)
- Access to long-running processes inside a tmux session (like irssi)
- Remote access to a VM / container running Kali linux, for penetration testing
67.2 Ingredients
- Docker swarm cluster with persistent shared storage
- Traefik configured per design
- DNS entry for the hostname you intend to use, pointed to your keepalived IP
67.3 Preparation
Prepare environment
Create wetty.env, and populate with the following variables per the oauth_proxy instructions:
OAUTH2_PROXY_CLIENT_ID=
OAUTH2_PROXY_CLIENT_SECRET=
OAUTH2_PROXY_COOKIE_SECRET=
# To use WeTTY to SSH to a host besides the (mostly useless) alpine container it comes with
SSHHOST=batcomputer.batcave.com
SSHUSER=batman
Setup Docker Swarm
Create a docker swarm config file in docker-compose syntax (v3), something like this:
I share (with my patreon patrons) a private “premix” git repository, which includes necessary docker-compose and env files for all published recipes. This means that patrons can launch any recipe with just a git pull and a docker stack deploy
version: "3"
services:
wetty:
image: krishnasrinivas/wetty
env_file : /var/data/config/wetty/wetty.env
networks:
- internal
proxy:
image: funkypenguin/oauth2_proxy:latest
env_file: /var/data/config/wetty/wetty.env
networks:
- internal
- traefik_public
deploy:
labels:
- traefik.frontend.rule=Host:wetty.funkypenguin.co.nz
- traefik.docker.network=traefik_public
- traefik.port=4180
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/data/config/wetty/authenticated-emails.txt:/authenticated-emails.txt
command: |
-cookie-secure=false
-upstream=http://wetty:3000
-redirect-url=https://wetty.funkypenguin.co.nz
-http-address=http://0.0.0.0:4180
-provider=github
-authenticated-emails-file=/authenticated-emails.txt
networks:
traefik_public:
external: true
internal:
driver: overlay
ipam:
config:
- subnet: 172.16.45.0/24
67.4 Serving
Launch Wetty stack
Launch the Wetty stack by running docker stack deploy wetty -c <path-to-docker-compose.yml>
Browse to your new browser-cli-terminal at https://YOUR-FQDN. Authenticate with your OAuth provider, and then proceed to login, either to the remote host you specified (batcomputer.batcave.com, in the example above), or using user and password “term” to log directly into the Wetty alpine container (from which you can establish egress SSH)
67.5 Chef’s Notes
- You could set SSHHOST to the IP of the “docker0” interface on your host, which is normally 172.17.0.1 (or run /sbin/ip route|awk '/default/ { print $3 }' in the container). This would then provide you the ability to remote-manage your swarm with only web access to Wetty.
- The inclusion of Wetty was due to the efforts of @gpulido in our Discord server. Thanks Gabriel!
IV Reference
Now follows useful elements which are not full recipes.
68 OAuth proxy
Some of the platforms we use on our swarm have strong, proven security to prevent abuse - techniques such as rate-limiting (to defeat brute-force attacks), or even 2-factor authentication (tiny-tiny-rss and Wallabag support this).
Other platforms may provide no authentication (Traefik’s web UI for example), or minimal, un-proven UI authentication which may have been added as an afterthought.
Still other platforms may hold such sensitive data (i.e., NextCloud), that we’ll feel more secure by putting an additional authentication layer in front of them.
This is the role of the OAuth proxy.
68.1 How does it work?
Normally, Traefik proxies web requests directly to individual web apps running in containers. The user talks directly to the webapp, and the webapp is responsible for ensuring appropriate authentication.
When employing the OAuth proxy , the proxy sits in the middle of this transaction - traefik sends the web client to the OAuth proxy, the proxy authenticates the user against a 3rd-party source (GitHub, Google, etc), and then passes authenticated requests on to the web app in the container.
Illustrated below:
![](/site_images/geek-cookbook/geek-cookbook_funkypenguin_co_nz---images---oauth_proxy.png)
The advantage under this design is additional security. If I’m deploying a web app which I expect only myself to require access to, I’ll put the oauth_proxy in front of it. The overhead is negligible, and the additional layer of security is well-worth it.
68.2 Ingredients
68.3 Preparation
OAuth provider
OAuth Proxy currently supports the following OAuth providers:
- Google (default)
- Azure
- GitHub
- GitLab
- MyUSA
Follow the instructions to setup your oauth provider. You need to setup a unique key/secret for each instance of the proxy you want to run, since in each case the callback URL will differ.
Authorized emails file
There are a variety of options with oauth_proxy re which email addresses (authenticated against your oauth provider) should be permitted access. You can permit access based on email domain (*@gmail.com), individual email address (batman@gmail.com), or based on provider-specific groups (i.e., a GitHub organization)
The most restrictive configuration allows access on a per-email address basis, which is illustrated below:
I created /var/data/oauth_proxy/authenticated-emails.txt, and added my own email address on the first line.
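The file format is simply one email address per line, e.g. (hypothetical addresses):
batman@funkypenguin.co.nz
robin@funkypenguin.co.nz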
Configure stack
You’ll need to define a service for the oauth_proxy in every stack which you want to protect. Here’s an example from the Wekan recipe:
proxy:
  image: a5huynh/oauth2_proxy
  env_file: /var/data/wekan/wekan.env
  networks:
    - traefik
    - internal
  deploy:
    labels:
      - traefik.frontend.rule=Host:wekan.funkypenguin.co.nz
      - traefik.docker.network=traefik
      - traefik.port=4180
  volumes:
    - /var/data/oauth_proxy/authenticated-emails.txt:/authenticated-emails.txt
  command: |
    -cookie-secure=false
    -upstream=http://wekan:80
    -redirect-url=https://wekan.funkypenguin.co.nz
    -http-address=http://0.0.0.0:4180
    -email-domain=funkypenguin.co.nz
    -provider=github
    -authenticated-emails-file=/authenticated-emails.txt
Note above how:
- Labels are required to tell Traefik to forward the traffic to the proxy, rather than the backend container running the app
- An environment file is defined, but..
- The redirect URL must still be passed to the oauth_proxy in the command argument
69 Data layout
The applications deployed in the stack utilize a combination of data-at-rest (static config, files, etc) and runtime data (live database files). The realtime data can’t be backed up with a simple copy-paste, so where we employ databases, we also include containers to perform a regular export of database data to a filesystem location.
So that we can confidently backup all our data, I’ve setup a data layout as follows:
69.1 Configuration data
Configuration data goes into /var/data/config/[recipe name], and is typically only a docker-compose .yml, and a .env file
69.2 Runtime data
Runtime data (typically database files or files-in-use) is stored in /var/data/runtime/[recipe name], and is excluded from backup (it changes constantly, and cannot be safely restored from a file-level copy).
69.3 Static data
Static data goes into /var/data/[recipe name], and includes anything that can be safely backed up while a container is running. This includes database exports of the runtime data above.
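Putting the three classes together, a single recipe’s on-disk footprint might look like this (Wekan used as an illustrative example):

/var/data/config/wekan/      # compose and env files (backed up)
/var/data/runtime/wekan/     # live MongoDB files (excluded from backup)
/var/data/wekan/             # static data, incl. database dumps (backed up)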
70 Networks
In order to avoid IP addressing conflicts as we bring swarm networks up and down, we statically address each docker overlay network, and record the details in the table below (a sample network definition follows the table):
Network | Range |
---|---|
Traefik | unspecified |
Docker-cleanup | 172.16.0.0/24 |
Mail Server | 172.16.1.0/24 |
Gitlab | 172.16.2.0/24 |
Wekan | 172.16.3.0/24 |
Piwik | 172.16.4.0/24 |
Tiny Tiny RSS | 172.16.5.0/24 |
Huginn | 172.16.6.0/24 |
Unifi | 172.16.7.0/24 |
Kanboard | 172.16.8.0/24 |
Gollum | 172.16.9.0/24 |
Duplicity | 172.16.10.0/24 |
Autopirate | 172.16.11.0/24 |
Nextcloud | 172.16.12.0/24 |
Portainer | 172.16.13.0/24 |
Home-Assistant | 172.16.14.0/24 |
OwnTracks | 172.16.15.0/24 |
Plex | 172.16.16.0/24 |
Emby | 172.16.17.0/24 |
Calibre-Web | 172.16.18.0/24 |
Wallabag | 172.16.19.0/24 |
InstaPy | 172.16.20.0/24 |
Turtle Pool | 172.16.21.0/24 |
MiniFlux | 172.16.22.0/24 |
Gitlab Runner | 172.16.23.0/24 |
Munin | 172.16.24.0/24 |
Bookstack | 172.16.33.0/24 |
Swarmprom | 172.16.34.0/24 |
Realms | 172.16.35.0/24 |
ElkarBackup | 172.16.36.0/24 |
Mayan EDMS | 172.16.37.0/24 |
Shaarli | 172.16.38.0/24 |
OpenLDAP | 172.16.39.0/24 |
MatterMost | 172.16.40.0/24 |
PrivateBin | 172.16.41.0/24 |
Mayan EDMS | 172.16.42.0/24 |
Hack MD | 172.16.43.0/24 |
FlightAirMap | 172.16.44.0/24 |
Wetty | 172.16.45.0/24 |
FileBrowser | 172.16.46.0/24 |
phpIPAM | 172.16.47.0/24 |
Dozzle | 172.16.48.0/24 |
KeyCloak | 172.16.49.0/24 |
Sensu | 172.16.50.0/24 |
Magento | 172.16.51.0/24 |
Graylog | 172.16.52.0/24 |
Harbor | 172.16.53.0/24 |
Harbor-Clair | 172.16.54.0/24 |
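Within each recipe’s compose file, the overlay network is then pinned to its allocated range with an explicit subnet. A minimal sketch, using the Wekan range from the table above (compose v3 syntax):

networks:
  internal:
    driver: overlay
    ipam:
      config:
        - subnet: 172.16.3.0/24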
71 Introduction
Our HA platform design relies on Atomic OS, which contains only the bare-minimum elements needed to run containers.
So how can we use git on this system, to push/pull the changes we make to config files? With a container, of course!
71.1 git-docker
I made a simple container which essentially just executes git in the current working directory.
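Conceptually, the image is little more than the following (an illustrative sketch; not necessarily the actual funkypenguin/git-docker Dockerfile):

FROM alpine:latest
# git for the obvious reasons; openssh-client so git can push/pull over SSH
RUN apk add --no-cache git openssh-client
# the alias below bind-mounts the caller's current directory here
WORKDIR /var/data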
To use it transparently, add an alias for the “git” command, or just download it with the rest of the handy aliases:
alias git='docker run -v $PWD:/var/data -v \
/var/data/git-docker/data/.ssh:/root/.ssh funkypenguin/git-docker git'
71.2 Setup SSH key
If you plan to actually push using git, you’ll need to set up an SSH keypair. You could copy across whatever keypair you currently use, but it’s probably more appropriate to generate a dedicated keypair for this purpose.
Generate your new SSH keypair by running:
mkdir -p /var/data/git-docker/data/.ssh
# note: the .ssh directory needs 0700 (the execute bit) so it can be traversed
chmod 700 /var/data/git-docker/data/.ssh
docker run -v /var/data/git-docker/data/.ssh:/root/.ssh funkypenguin/git-docker \
  ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519
The output will look something like this:
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_ed25519.
Your public key has been saved in /root/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:uZtriS7ypx7Q4kr+w++nHhHpcRfpf5MhxP3Wpx3H3hk root@a230749d8d8a
The key's randomart image is:
+--[ED25519 256]--+
| .o . |
| . ..o . |
| + .... ...|
| .. + .o . . E=|
| o .o S . . ++B|
| . o . . . +..+|
| .o .. ... . . |
|o..o..+.oo |
|...=OX+.+. |
+----[SHA256]-----+
Now add the contents of /var/data/git-docker/data/.ssh/id_ed25519.pub to your git account, and off you go. Just run “git” from your Atomic host as usual, and pretend you have the client installed!
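For example, assuming the key has been added to a GitHub account, cloning via the alias works just as it would with a native client:

cd /var/data
git clone git@github.com:funkypenguin/geek-cookbook.git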
72 OpenVPN
Sometimes you need an OpenVPN tunnel between your docker hosts and some other environment. I needed this to provide connectivity between swarm-deployed services (like Home Assistant) and the IoT devices within my home LAN.
OpenVPN is one application which doesn’t really work in a swarm-type deployment, since each host will typically require a unique certificate/key to connect to the VPN anyway.
In my case, I needed each docker node to connect via OpenVPN back to a pfSense instance, but there were a few gotchas related to running OpenVPN on CentOS Atomic which I needed to address first.
72.1 SELinux for OpenVPN
Yes, SELinux. Install a custom policy permitting a docker container to create tun interfaces, like this:
cat << EOF > docker-openvpn.te
module docker-openvpn 1.0;

require {
    type svirt_lxc_net_t;
    class tun_socket create;
}

#============= svirt_lxc_net_t ==============
allow svirt_lxc_net_t self:tun_socket create;
EOF

checkmodule -M -m -o docker-openvpn.mod docker-openvpn.te
semodule_package -o docker-openvpn.pp -m docker-openvpn.mod
semodule -i docker-openvpn.pp
72.2 Insert the tun module
Even with the SELinux policy above, I still need to insert the “tun” module into the running kernel at the host level, before a docker container can use it to create a tun interface.
Run the following to auto-insert the tun module on boot:
cat << EOF >> /etc/rc.d/rc.local
# Insert the "tun" module so that the vpn-client container can access /dev/net/tun
/sbin/modprobe tun
EOF
chmod 755 /etc/rc.d/rc.local
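Since rc.local only runs at boot, load the module immediately (and confirm it’s present) with:

/sbin/modprobe tun
lsmod | grep tun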
72.3 Connect the VPN
Finally, for each node, I exported client credentials, and SCP’d them over to the docker node, into /root/my-vpn-configs-here/. I also had to use the NET_ADMIN cap-add parameter, as illustrated below:
docker run -d --name vpn-client \
--restart=always --cap-add=NET_ADMIN --net=host \
--device /dev/net/tun \
-v /root/my-vpn-configs-here:/vpn:z \
ekristen/openvpn-client --config /vpn/my-host-config.ovpn
Now every time my node boots, it establishes a VPN tunnel back to my pfSense host and (by using custom configuration directives in OpenVPN) is assigned a static VPN IP.
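The custom directives in question are client-specific overrides on the OpenVPN server; a sketch of such an override (the address is illustrative) might be:

# pin this client to a fixed tunnel address
ifconfig-push 172.16.100.10 255.255.255.0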
73 Troubleshooting
Having difficulty with a recipe? Here are some tips...
73.1 Why is my stack not launching?
Run docker stack ps <stack name> --no-trunc for more details on why individual containers failed to launch.
73.2 Attaching to running container
Need to debug why your oauth2_proxy container can’t talk to its upstream app? Start by identifying which node the proxy container is running on, using docker stack ps <stack name>. SSH to that node, attach to the container using docker exec -it <container id> /bin/bash (substitute /bin/ash for /bin/bash in the case of an Alpine container), and then try to telnet to your upstream host, as illustrated below.
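For example, to verify connectivity from the Wekan oauth_proxy to its upstream (assuming the busybox telnet applet is available in the container):

docker exec -it <container id> /bin/ash
telnet wekan 80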
73.3 Watching logs of container
Need to see what a particular container is doing? Run docker service logs -f <stack name>_<service name> to watch a particular service. As the service dies and is recreated, the logs will continue to be displayed.
73.4 Visually monitoring containers with ctop
For a visual “top-like” display of your containers’ activity (as well as a detailed per-container view), try using ctop.
To execute, simply run docker run --rm -ti --name ctop -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest