I’m new to self hosting and just starting to experiment with web development. I’ve been reading and cross-referencing several guides, but I’m having trouble figuring out how to put together all the pieces to achieve what I’m looking for. Maybe the perfect tutorial is out there, but I just haven’t found the right search terms.
On my Raspberry Pi 4, I have a few Docker containers already up and running:
- Pi Hole with network-mode set to host so it can handle DHCP too
- Watchtower to keep the Pi Hole up-to-date
- Portainer to check on the status of things
In addition to those, I’m planning to host a personal website, a small Matrix server, and a few other things eventually. For portability reasons and my own professional development, I want to go all-in on Docker Compose and keep each piece in its own separate container.
The main thing I’m struggling with is figuring out how to configure nginx-proxy-manager and my Docker networks to expose only the containers I want to expose while keeping my other containers safe. More specifically, how do I handle the conflicting ports between Pi Hole and nginx-proxy-manager without exposing my Pi Hole’s admin page to the public internet? Can I use the same reverse proxy to manage all my local and public services at the same time?
Another piece that I’m feeling unsure about is pointing my domain name to the right IP address and setting up SSL encryption. It feels like there are a lot of ways to mess it up. What do I need to do to keep things safe and secure? How important is something like Cloudflare tunnel?
Sounds like you might want to learn about firewalls before you get much further along. I had no idea when I set up my first web server, and the machine got hacked within a week of going online. For your purposes you can probably set up some simple rules with iptables, but if you ever get serious about a dedicated firewall (which needs at least two network interfaces, and more gives you better flexibility) then you might want to look at something like OPNsense.
As for conflicting ports, the easiest solution is to move the internal server to a non-standard port you can remember, but you should also consider giving each container its own local IP address. Add something like dnsmasq to your Pi, and you can add local names for each IP (plus it will handle DHCP for your network). Then in your browser you can type a local name like ‘pihole’ to reach port 80 on that service, or ‘mydomain.com’ to get to the nginx container.
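The port-remap idea can be sketched in Compose. This is a hedged sketch, not a drop-in file: `WEB_PORT` is an option the pihole/pihole image has supported (check the image docs for your version), the NPM image name is the common jc21 one, and the port numbers are arbitrary.

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host        # still required for it to act as DHCP server
    environment:
      WEB_PORT: "8080"        # admin UI moves to http://<pi-ip>:8080/admin

  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"               # public HTTP, now free of the conflict
      - "443:443"             # public HTTPS
      - "127.0.0.1:81:81"     # NPM admin, reachable only from the Pi itself
```

Binding the admin port to 127.0.0.1 keeps it off the network entirely; bind it to your LAN address instead if you manage NPM from another machine.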
And for pointing your domain name to the right IP… Do you have a static IP address? Unless you are paying extra for it, you almost certainly do not, in which case you need to look for a DDNS service which will track your current IP and update on the fly.
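If you go the DDNS route, many providers boil down to hitting an update URL on a schedule. A hedged example using DuckDNS (a free DDNS provider; the subdomain and token are placeholders, and other providers have their own URLs):

```
# crontab entry: refresh the record every 5 minutes;
# leaving ip= empty lets DuckDNS use the requester's public IP
*/5 * * * * curl -fsS "https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" >/dev/null
```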
I will absolutely start doing more research on firewalls, thank you for the suggestion. That’s exactly the kind of obvious thing that I was afraid I would miss.
Dnsmasq is actually already built into Pi Hole; I’m pretty sure that’s how it redirects advertiser domains to 0.0.0.0 and handles DHCP. I see that I can add more local domains right from the web interface. I didn’t realize I could give each container its own local IP address, though. That would make getting to local services much cleaner and simpler.
I don’t have a static IP, and I’m certainly not keen on giving my ISP any more money. I’ll look more into DDNS services too.
Keep in mind that Docker can bypass iptables-based firewalls like UFW. When in doubt, do a port scan from an external machine to check which ports are actually open to the internet.
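The bypass happens because Docker writes its own iptables rules ahead of UFW’s chains, so a plain published port is reachable even when UFW claims it’s closed. One way around it (a sketch, with Portainer as just an example service) is to bind published ports to loopback so they never appear on an external interface:

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "127.0.0.1:9000:9000"   # bound to loopback; not reachable from LAN/WAN
```

Anything that must stay local can use this pattern, leaving the reverse proxy as the only container published on a real interface.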
I haven’t gotten into containers yet, but there should be some way to give each one a unique IP. At the very least, give your Pi multiple addresses and have the service in each container listen only on its assigned address.
So, Docker networking uses its own internal DNS. Keep that in mind. You can (and should) create Docker networks for your containers. My personal design is to have only nginx exposing port 443 and have it proxy for all the other containers inside those Docker networks; I don’t have to expose anything else. I also find nginx proper much easier to deal with than NPM, Traefik, or Caddy.
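A minimal sketch of that design, assuming a made-up `mysite` service and network name; only nginx publishes a port, and it reaches the other container by service name through Docker’s internal DNS:

```yaml
networks:
  proxy: {}

services:
  nginx:
    image: nginx:stable
    ports:
      - "443:443"                 # the only published port
    networks: [proxy]
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro

  mysite:
    image: nginx:stable           # stand-in for your actual site container
    networks: [proxy]             # note: no ports: section at all
```

Inside default.conf, `proxy_pass http://mysite:80;` works because Docker’s embedded DNS resolves the service name on the shared network.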
Register a domain if you haven’t already. I did two, one for internal and one for external. If you want something easy to set up, use nginx; there are plenty of guides out there for adding Let’s Encrypt SSL certs to it. I personally use Let’s Encrypt with Traefik as my reverse proxy. Traefik has a bit of a learning curve, but once you have it set up and working, it’s pretty easy to update and move around.
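For the Traefik + Let’s Encrypt route, the core of it is an ACME certificate resolver in Traefik’s static config. A hedged v2-style sketch (the email and storage path are placeholders, and the `web` entrypoint is assumed to be your port-80 entrypoint):

```yaml
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com          # placeholder
      storage: /letsencrypt/acme.json # must persist across restarts
      httpChallenge:
        entryPoint: web               # entrypoint answering on port 80
```

Routers then opt in to the resolver (via config or a docker label) to get certs issued and renewed automatically.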
Once you have your reverse proxy working with an SSL cert, you can start looking at different options to expose your containers. Probably the easiest method is to point your domain at your home IP address and set up port forwarding on your router. I’m not a fan of that, because exposing ports to the wider internet is probably the riskiest option.
Another option is tunneling, which I think is the best. Cloudflare Tunnels are pretty popular and I believe still free. I have a cheap VPS with a WireGuard tunnel set up. With either tunnel option you don’t have to make any changes to your home network or firewall.
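For the Cloudflare option, the container side is small. A hedged sketch using Cloudflare’s remotely-managed tunnels, where the token comes from their Zero Trust dashboard (supplied here via an environment variable placeholder):

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run --token ${TUNNEL_TOKEN}  # token issued in the dashboard
    restart: unless-stopped
```

The hostname-to-service mappings are then configured on Cloudflare’s side, so no inbound port needs to be open at home.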
Why did you register two separate domains instead of using a wildcard cert from LE and just using subdomains?
To separate my internal and external traffic. Both my domains have wildcard certs. I have a VPS that connects to my home lab. External requests hit the VPS first; internal requests bypass the VPS and go straight to my home lab.
I could use a single domain but then my internal requests would reach out to the VPS just to go back to my home lab. I wanted to avoid that extra hop.
Forgive my stupidity, but couldn’t you just use split-horizon DNS and have your internal DNS resolve to your homelab instead of the VPS? Personally, that’s what I’ve done. So external lookups for sub.domain.tld go one way and internal lookups go to 10.10.10.x.
Yes, I could do a split DNS and achieve the same thing. I didn’t really want to change my DNS settings in my router. I also just like the separation by domain name.
Hi, do you mind giving me some pointers for setting up traefik to use https for my locally hosted services?
I have most of my stuff on a single server (named poseidon), and I want to separate everything using subdomains (like plex.poseidon). From what I found when searching online, it seems like I need a local DNS server where I can enter local domains, plus a Traefik rule for each host set via a label in the docker-compose. Is that correct?
I also have no idea how to route the subdomains to the services I want.
You just need something that points *.poseidon to your server IP. A local DNS server or dnsmasq can do the wildcard; a plain hosts file can’t, so there you’d have to list each name individually.
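With dnsmasq (including the one embedded in Pi-hole, via a file under /etc/dnsmasq.d/), a single line covers the wildcard; the IP here is a placeholder:

```
# resolves poseidon and every *.poseidon subdomain to this address
address=/poseidon/192.168.1.50
```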
Once you have Traefik set up, it will have a config where you can define routers that route a subdomain to a service. Then you have services configured that point to an IP and port.
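Sketched as a Traefik v2 file-provider config, with placeholder names and IP, and the conventional `websecure` entrypoint assumed to exist in your static config:

```yaml
http:
  routers:
    plex:
      rule: "Host(`plex.poseidon`)"         # match the subdomain
      entryPoints: [websecure]
      service: plex
  services:
    plex:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:32400" # IP:port of the actual service
```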
I based my setup on Techno Tim’s video here and made minor tweaks. Try following that tutorial and see if that gets you started. Feel free to ask any other questions if you run into snags.
Here is an alternative Piped link: https://piped.video/watch?v=liV3c9m_OX8
I solved the same problem you’re having with Cloudflare tunnels. I added a tunnel to my docker/portainer host and then added services via that tunnel. I did buy a domain name through them to make it easier, but it was worth it imo. Vastly simpler than dealing with port forwarding and all the fun stuff you need to do with your router. Hope this helps
Edit: wrote this before reading your last line. Cloudflare tunnels simplify dynamic DNS, port forwarding, and HTTPS for apps that don’t support it natively.