My Proxmox Homelab: Two Mini PCs, 15 Services, Zero Port Forwarding
I wanted to self-host everything I could. Not because cloud services are bad, but because I wanted to understand how things work, control my data, and stop paying monthly subscriptions for things a $200 mini PC can handle.
After a year of running Proxmox, I have 15 services spread across two nodes, a NAS, and zero exposed ports. Here’s what I run and why.
The Hardware
The primary node is an Intel NUC 15 Pro (16 threads, 64 GB RAM, 1 TB NVMe). The secondary is an HP EliteDesk 800 G3 Mini (i5-7600T, 16 GB RAM, 500 GB NVMe). Both are small, quiet, and sip power.
For storage, a Synology DS223j with two 8 TB WD Red Plus drives in RAID 1. Media, documents, and backups all land here over NFS.
I deliberately went with low-power hardware. The NUC runs in powersave mode with turbo boost disabled. It sits in my living room and I don’t want to hear it. The trade-off is worth it: these machines idle at around 10-15W each, which matters when they run 24/7.
Why Proxmox
I looked at running Docker on bare metal, but Proxmox gives me things that are hard to replicate otherwise:
- Snapshots before risky changes. One click, and I can roll back an entire container.
- Live migration between nodes. I moved half my services from the EliteDesk to the NUC without downtime.
- Centralized backup to a dedicated backup server (more on that later).
- Resource isolation. Each service gets its own container with defined CPU and memory limits. One misbehaving service can’t take down the rest.
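Those per-container limits are just lines in the guest's Proxmox config file; a representative sketch (the VMID and values are illustrative, not my actual settings):

```
# /etc/pve/lxc/101.conf (illustrative)
arch: amd64
hostname: paperless
cores: 2        # hard CPU limit for this container
memory: 2048    # MiB of RAM
swap: 512       # MiB of swap
```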
The web UI is also surprisingly good for monitoring. I can see CPU, memory, disk, and network for every guest at a glance.
LXC vs QEMU: When to Use What
Most of my services run in LXC containers. They’re lightweight, start instantly, and share the host kernel. For things like Pi-hole, Paperless, n8n, or a reverse proxy, there’s no reason to run a full VM.
I use QEMU VMs for two cases:
- The media stack needs its own network namespace for VPN routing. The entire VM’s traffic goes through a WireGuard tunnel, and if the tunnel drops, all network access stops. Easier to enforce at the VM level than with LXC networking.
- Proxmox Backup Server runs as a VM because it’s a separate product with its own kernel and filesystem requirements.
The rule is simple: LXC unless you need a separate kernel or full network isolation.
The Services
Media: Jellyfin + the Arr Suite
This is the core of the homelab. Jellyfin is my Netflix replacement: it streams to every device in the house and remotely via Cloudflare Tunnel. Hardware transcoding on the NUC’s integrated GPU handles anything I throw at it.
The acquisition side is a full pipeline, from request to streaming:
```mermaid
flowchart LR
    subgraph "User-facing"
        SEERR[Jellyseerr<br/>Requests]
        JF[Jellyfin<br/>Streaming]
    end
    subgraph "Arr VM (behind Mullvad VPN)"
        SONARR[Sonarr<br/>TV Shows]
        RADARR[Radarr<br/>Movies]
        PROWLARR[Prowlarr<br/>Indexers]
        BAZARR[Bazarr<br/>Subtitles]
        subgraph "VPN-routed (Gluetun)"
            QB[qBittorrent]
        end
    end
    NAS[(Synology NAS)]
    SEERR --> SONARR
    SEERR --> RADARR
    SONARR --> PROWLARR
    RADARR --> PROWLARR
    SONARR --> QB
    RADARR --> QB
    QB --> NAS
    BAZARR --> SONARR
    BAZARR --> RADARR
    NAS --> JF
```
The flow works like this:
- Someone opens Jellyseerr and requests a movie or show.
- Sonarr (TV) or Radarr (movies) picks up the request and asks Prowlarr to search indexers for the best available version.
- qBittorrent downloads it, with all traffic routed through Gluetun (a Mullvad WireGuard tunnel). If the VPN drops, the kill switch stops all network access.
- Once downloaded, Sonarr/Radarr renames and organizes the files on the Synology NAS (`the.office.s01e01.hdtv.mkv` becomes `The Office/Season 01/The Office - S01E01 - Pilot.mkv`).
- Bazarr finds and downloads subtitles automatically.
- Jellyfin picks up the new files and they’re ready to stream.
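The kill switch in step 3 falls out of container networking rather than firewall rules: qBittorrent shares Gluetun's network namespace, so when the tunnel is down there is simply no route out. A compose-style sketch (image tags and environment values are representative, not my exact config):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      VPN_SERVICE_PROVIDER: mullvad
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: "<key>"   # placeholder
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"    # no network of its own
```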
Strictly speaking, this entire stack exists to download Linux ISOs. The fact that Sonarr manages TV shows and Radarr manages movies is a coincidence I can’t explain.
Everything runs inside a single QEMU VM so the VPN routing stays isolated. A small detail that took a while to get right: a VPN monitor script watches for Gluetun reconnections and automatically restarts qBittorrent when the tunnel comes back up. Without it, qBittorrent would lose its connection after a VPN reconnect and just sit there doing nothing.
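A minimal sketch of that monitor's logic (the container names and the Docker health check are my assumptions about how such a script could work, not the actual script):

```python
import subprocess
import time

def should_restart(prev_healthy: bool, now_healthy: bool) -> bool:
    """Restart the client only on the unhealthy -> healthy transition,
    i.e. when the tunnel has just come back up."""
    return now_healthy and not prev_healthy

def gluetun_healthy() -> bool:
    """Assumed check: Docker's reported health status for the gluetun container."""
    out = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Health.Status}}", "gluetun"],
        capture_output=True, text=True,
    )
    return out.stdout.strip() == "healthy"

def monitor(poll_seconds: int = 30) -> None:
    prev = True
    while True:
        now = gluetun_healthy()
        if should_restart(prev, now):
            subprocess.run(["docker", "restart", "qbittorrent"])
        prev = now
        time.sleep(poll_seconds)
```

Only the unhealthy-to-healthy edge triggers a restart, so a tunnel that stays down doesn't cause a restart loop.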
Documents: Paperless-ngx + AI OCR
I scan every piece of paper I receive. Bills, contracts, letters: everything goes into Paperless-ngx, which OCRs it, stores it, and makes it searchable.
The interesting part is the AI layer. Two containers work together:
```mermaid
flowchart LR
    subgraph Input
        SCAN[Scanner]
        EMAIL[Email]
        UPLOAD[Web Upload]
    end
    subgraph "Paperless-ngx"
        CONSUME[Consumer]
        OCR1[Tesseract OCR]
        DB[(PostgreSQL)]
        WEB[Web UI]
    end
    subgraph "Paperless-GPT"
        GPT[AI Processor]
        LLM[Mistral Large]
        MISTRAL[Mistral OCR]
    end
    SCAN --> CONSUME
    EMAIL --> CONSUME
    UPLOAD --> WEB
    CONSUME --> OCR1
    OCR1 --> DB
    GPT --> WEB
    GPT --> LLM
    GPT --> MISTRAL
```
Paperless-ngx ingests documents with basic Tesseract OCR (limited to the first page to avoid CPU overload on long PDFs). Then Paperless-GPT picks them up via workflow tags and does two things:
- Mistral OCR re-processes the full document with much better quality than Tesseract.
- Mistral Large generates a title, assigns tags, identifies the correspondent, and classifies the document type.
The whole pipeline runs on Mistral. I initially used OpenAI’s GPT-5 Nano for the classification step, but switched everything to Mistral for reasons.
The result: I drop a PDF into a folder and a few seconds later it’s fully categorized, searchable, and filed. I barely touch the Paperless UI anymore except to find documents.
The real payoff comes at tax time. In Switzerland, the tax declaration requires you to attach proof for everything: salary statements, insurance premiums, bank interest, medical expenses, donations, rent. Every year it’s the same scramble through drawers and email inboxes. With Paperless, I just filter by the tag “tax” and the date range for the fiscal year, and every document I need is right there. Export, attach, done. What used to take an evening now takes ten minutes.
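That filter can even be scripted against the Paperless-ngx REST API. A sketch (the base URL is a placeholder; the query string uses Paperless's documented full-text search syntax for tags and date ranges):

```python
from urllib.parse import urlencode

def tax_documents_url(base_url: str, year: int) -> str:
    """Build a Paperless-ngx API search for every 'tax'-tagged document
    created in the given fiscal year."""
    query = f"tag:tax created:[{year}-01-01 to {year}-12-31]"
    return f"{base_url}/api/documents/?{urlencode({'query': query})}"

# e.g. fetch this URL with your API token to list the documents
url = tax_documents_url("http://paperless.lan:8000", 2024)
```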
DNS: Pi-hole + Unbound
Pi-hole handles DNS for the entire network and blocks ads at the DNS level. Behind it, Unbound acts as a recursive resolver, querying root DNS servers directly instead of forwarding to Google or Cloudflare. No third-party DNS provider sees my queries.
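The Unbound side follows the commonly documented Pi-hole pairing; a trimmed config sketch (Pi-hole then uses 127.0.0.1#5335 as its only upstream):

```
# /etc/unbound/unbound.conf.d/pi-hole.conf (abridged)
server:
    interface: 127.0.0.1
    port: 5335
    do-ip6: no
    harden-glue: yes
    harden-dnssec-stripped: yes
    prefetch: yes
    edns-buffer-size: 1232
```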
A second Pi-hole instance runs on the backup node for redundancy. If the primary goes down for maintenance, DNS keeps working.
Remote Access: WireGuard + Cloudflare Tunnel
Two layers, for two different use cases.
WireGuard gives me full LAN access from my phone or laptop when I’m away. Two taps and I’m on my home network, with Pi-hole filtering included. I use PiVPN to manage peers.
Cloudflare Tunnel exposes specific services to the internet without opening any ports on my router. A small cloudflared container maintains an outbound connection to Cloudflare’s edge, and Cloudflare routes traffic back through it. Cloudflare Zero Trust auth, with Google as the identity provider, sits in front, so only approved Google accounts can reach anything.
The Cloudflare Tunnel runs on the backup node, so even if the primary is down for maintenance, external access stays up.
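The tunnel's routing is a short ingress list in cloudflared's config; a sketch with placeholder hostnames and addresses (the ports are the Jellyfin and Jellyseerr defaults):

```yaml
# /etc/cloudflared/config.yml (placeholders throughout)
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json
ingress:
  - hostname: jellyfin.example.com
    service: http://192.168.1.20:8096
  - hostname: requests.example.com
    service: http://192.168.1.20:5055
  - service: http_status:404   # catch-all: refuse everything else
```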
Automation: n8n
n8n is a self-hosted alternative to Zapier. I use it for small automations that connect services together. It has a visual workflow editor and supports webhooks, scheduled triggers, and a huge list of integrations.
Monitoring: Prometheus + Grafana + Loki
```mermaid
flowchart TB
    subgraph "Monitoring Container"
        PROM[Prometheus]
        GRAF[Grafana]
        LOKI[Loki]
        ALERT[Alertmanager]
    end
    subgraph "Exporters"
        NE[Node Exporters<br/>on every host]
        CA[cAdvisor<br/>container metrics]
        BB[Blackbox<br/>HTTP probes]
    end
    subgraph "Log Agents"
        ALLOY[Grafana Alloy<br/>on every host]
    end
    NE --> PROM
    CA --> PROM
    BB --> PROM
    PROM --> GRAF
    PROM --> ALERT
    ALLOY --> LOKI
    LOKI --> GRAF
```
Every host runs a node exporter (system metrics) and Grafana Alloy (log shipping to Loki). Prometheus scrapes 35 targets. Blackbox probes HTTP endpoints to catch services that are up but not responding.
The alerts I care about:
- VPN container down or tunnel disconnected. If Gluetun dies, downloads are unprotected.
- Disk space low on NAS. I’ve been burned by this before.
- Any service unreachable. Blackbox catches HTTP failures.
- High memory on containers. Catches memory leaks before they cause OOM kills.
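The first two of those alerts look roughly like this as Prometheus rules (the job names, mountpoint, and thresholds are assumptions about my scrape config, not copied from it):

```yaml
groups:
  - name: homelab
    rules:
      - alert: GluetunDown
        expr: up{job="gluetun"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "VPN container down - downloads are not protected"
      - alert: NasDiskSpaceLow
        expr: >
          node_filesystem_avail_bytes{mountpoint="/volume1"}
          / node_filesystem_size_bytes{mountpoint="/volume1"} < 0.10
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "NAS free space below 10%"
```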
Grafana dashboards give me an overview of the whole cluster: NAS activity, per-container resource usage, and Synology health (temperatures, disk SMART status).
The Two-Node Cluster
I started with just the EliteDesk. It worked fine, but any maintenance meant everything went down. Adding the NUC as a second node solved this.
The two nodes form a Proxmox cluster with `two_node: 1` set in the corosync quorum config, which lets either node keep running if the other goes offline. Without this flag, a two-node cluster loses quorum the moment one node disappears, and you can’t manage anything on the survivor.
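In corosync terms, that's one flag in the quorum section:

```
# /etc/pve/corosync.conf (quorum section only)
quorum {
  provider: corosync_votequorum
  two_node: 1
}
```

One caveat from the votequorum docs: `two_node` implies `wait_for_all`, so after a full cold start both nodes must come up once before the cluster becomes quorate.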
What runs where:
The NUC handles all the compute-heavy services: Jellyfin (with GPU transcoding), the media VM, Paperless, monitoring, n8n, and the development container. The EliteDesk runs lighter infrastructure: Cloudflare tunnel, WireGuard, Pi-hole redundancy, reverse proxy, and the backup server VM.
Live migration between nodes works well. When I upgraded the NUC’s RAM, I migrated everything to the EliteDesk, did the upgrade, and migrated back. Total downtime: zero.
Backups: 3-2-1
```mermaid
flowchart LR
    subgraph "Proxmox Cluster"
        VMS[All VMs and Containers]
    end
    subgraph "Backup Server VM"
        PBS[Proxmox Backup Server]
    end
    subgraph "Local"
        NAS[Synology NAS]
    end
    subgraph "Offsite"
        S3[Scaleway S3<br/>Object Storage]
    end
    VMS -->|Daily / Weekly| PBS
    PBS -->|Primary| NAS
    PBS -->|Encrypted sync| S3
```
Proxmox Backup Server runs as a VM on the backup node. Every container and VM is backed up on a schedule:
- Daily at 2 AM: critical services (Paperless, monitoring, n8n, development).
- Weekly on Sundays: everything else (media, DNS, VPN, Jellyfin).
Backups go to the Synology NAS (fast restores) and sync encrypted to Scaleway S3 (offsite disaster recovery). Three copies, two storage types, one offsite. The PBS VM itself is the only thing not backed up, for obvious reasons: it can’t back itself up to itself.
Retention: 14 daily, 4 weekly, 2 monthly for critical services. 4 weekly, 2 monthly for the rest.
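That retention policy maps directly onto vzdump's prune options; a sketch of what the critical-services job might look like in /etc/pve/jobs.cfg (the job name and storage ID are illustrative):

```
vzdump: backup-critical
    schedule 02:00
    storage pbs
    prune-backups keep-daily=14,keep-weekly=4,keep-monthly=2
```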
The homelab runs Proxmox VE on two nodes with 15+ services, all behind WireGuard and Cloudflare Zero Trust. No ports exposed, encrypted offsite backups, and about 25W total idle power draw.