Every developer has projects sitting on their local machine that nobody else gets to see. Not because the idea was bad. Because getting from localhost to a real URL felt like more effort than the idea was worth.
Register a domain. Provision a server. Set up SSL. Write deployment scripts. By the time you’ve done all that, the motivation’s gone.
I decided to fix that once.
The goal was simple. Have an idea on a Saturday, push some code, and have it running on a real HTTPS URL before the weekend’s out. No manual steps after the push. No SSH-ing into servers to restart things. No thinking about certificates. And as close to free as possible.
Spoiler: I got there. It’s running on hardware that costs me nothing.
The free bit
Oracle Cloud has a free tier most people don’t know about. Unlike AWS or Azure, where free tiers expire after 12 months, Oracle’s Always Free tier is permanent. Two ARM-based VMs with 1GB RAM each, real public IP addresses, enough bandwidth for personal projects. No credit card after initial signup, no expiry, no surprise bills.
It’s not enough to run a startup on, but for personal projects it’s good. And two VMs is enough to build something interesting. One for running apps, one for the tooling that deploys them.
The architecture
Two servers, each doing one job, with Cloudflare in front.
Server 1 is the app server. Everything I build runs here as a Docker container. Traefik sits in front of everything and handles routing and SSL automatically. When a new container starts with the right configuration, Traefik provisions a Let’s Encrypt certificate and starts routing project-name.gerardbeckerleg.com to it. No manual certificate management, ever.
Server 2 is the git and CI/CD server. Forgejo lives here, a lightweight self-hosted git platform. Think GitHub, but running on your own hardware. When I push code, a Forgejo Actions runner picks it up, SSHes into Server 1, and deploys the app.
Cloudflare sits in front of the app server. A wildcard DNS record proxied through Cloudflare means every app subdomain gets CDN caching, DDoS protection, and edge SSL termination automatically. Anything I need direct SSH access to is DNS only, since Cloudflare can’t proxy SSH traffic.
The workflow from my end:
- Create repo in Forgejo
- Clone it locally
- Write code, run it locally
- Push to main
- It’s live
No other steps.
Why Forgejo and not just GitHub?
You could use GitHub. GitHub Actions can SSH into your server just as well. If you don’t care about self-hosting your code, there’s nothing wrong with that.
I went with Forgejo because I wanted the whole thing on my own infrastructure. It’s also lightweight. Forgejo and its CI runner sit comfortably in under 400MB RAM, which matters on a 1GB VM.
Forgejo is a community fork of Gitea, which itself was born from Gogs. Mature, actively maintained, and the Actions system is compatible with GitHub Actions syntax, so most workflows you find online work with minimal changes.
The deployment pipeline
The runner connects to the app server over SSH using a dedicated deploy user, clones the repo, builds and starts the container, then purges the Cloudflare cache for that subdomain. The script that runs on every push (with secrets and hostnames as variables):
```bash
# Stop immediately on the first failed command
set -e

# Trust the git server's host key so the clone over SSH doesn't prompt
ssh-keyscan -p "${SSH_PORT}" "${GIT_HOST}" >> ~/.ssh/known_hosts

# Pull if the repo already exists, otherwise clone it fresh over SSH
if [ -d "${APP_DIR}/${APP_NAME}/.git" ]; then
  cd "${APP_DIR}/${APP_NAME}"
  git pull
else
  git clone "git@${GIT_HOST}:${GIT_USER}/${APP_NAME}.git" "${APP_DIR}/${APP_NAME}"
fi

cd "${APP_DIR}/${APP_NAME}"

# Remove any orphaned container from a previous failed run, then rebuild
docker rm -f "${APP_NAME}" 2>/dev/null || true
docker compose down
docker compose up -d --build

# Reclaim disk space from old images and build cache
docker image prune -f
docker builder prune -f

# Purge the Cloudflare cache for this app's subdomain
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/purge_cache" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"files\":[\"https://${APP_NAME}.${DOMAIN}\"]}"
```
`set -e` at the top means any failure stops the whole thing immediately rather than silently carrying on. The `docker rm -f` before `docker compose down` handles orphaned containers from previous failed runs. The cache purge at the end ensures Cloudflare serves the new version immediately rather than stale content.
The Cloudflare token, zone ID, hostnames, and SSH port are stored as org-level secrets in Forgejo, so they’re available to every project without ever being in the code.
For concurrent pushes, a concurrency block in the workflow cancels any in-progress run when a newer push comes in. You’re never waiting for a stale build to finish.
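That block is a few lines at the top of the workflow file. A sketch, assuming your Forgejo version supports the GitHub-style `concurrency` key (the group name here is illustrative):

```yaml
# One concurrency group per branch: a new push to the same branch
# cancels any run still in progress for it
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: true
```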
The deployment magic
The part that makes this feel like actual magic is Traefik. Rather than manually configuring Nginx or Apache every time you want to add a new site, Traefik watches Docker for new containers. When a container starts with a few label lines, Traefik handles everything:
```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.my-app.rule=Host(`my-app.gerardbeckerleg.com`)"
  - "traefik.http.routers.my-app.entrypoints=websecure"
  - "traefik.http.routers.my-app.tls.certresolver=myresolver"
  - "traefik.http.services.my-app.loadbalancer.server.port=8080"
```
That’s it. Traefik sees the container, talks to Let’s Encrypt, gets a certificate, and starts routing. Cloudflare picks it up via the proxied wildcard record. Within about 60 seconds of a push, the app is live on HTTPS with CDN caching in front of it.
You can confirm Cloudflare is working by checking the response headers in browser devtools. A `cf-cache-status: HIT` header means the request was served from Cloudflare's edge without touching the origin server at all.
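The same check works from the command line with `curl`. A small sketch (the hostname is illustrative, and `cache_status` is my name for the helper, not part of the setup above):

```shell
#!/bin/sh
# Pull the cf-cache-status value out of a curl header dump
cache_status() {
  # Reads HTTP response headers on stdin, prints the cf-cache-status value
  grep -i '^cf-cache-status:' | awk '{print $2}' | tr -d '\r'
}

# Against a live site:
#   curl -sI https://my-app.gerardbeckerleg.com | cache_status
# Demo on a captured header block:
printf 'HTTP/2 200\r\ncf-cache-status: HIT\r\n' | cache_status   # prints HIT
```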
What a new project looks like
Every project needs three files alongside the actual code:

- A `Dockerfile` that describes how to build the app.
- A `docker-compose.yml` that tells Docker how to run it and includes the Traefik labels for routing.
- A workflow file at `.forgejo/workflows/deploy.yml` that tells the CI runner what to do on push.
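For reference, a minimal sketch of what that workflow file might look like. The runner label, secret names, and step layout are illustrative, not my exact config, and it assumes the repo was checked out in an earlier step:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: docker   # runner label depends on how your runner registered
    steps:
      - name: Deploy over SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DEPLOY_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh-keyscan -p "${{ secrets.SSH_PORT }}" "${{ secrets.APP_HOST }}" >> ~/.ssh/known_hosts
          # The deploy script from the pipeline section runs on the app server
          ssh -p "${{ secrets.SSH_PORT }}" "deploy@${{ secrets.APP_HOST }}" 'bash -s' < deploy.sh
```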
To prove the setup isn’t language-specific, I’ve tested it with a .NET 8 minimal API and a Go HTTP server. The Go build is noticeably faster. The .NET SDK image is about 300MB and compiles slowly on ARM. Go compiles in a few seconds and the final image is a few megabytes. First-time deploys are slow for both because Docker needs to pull base images, but after that everything is cached and subsequent deploys run in under two minutes.
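As an illustration of why the Go image ends up so small, a typical multi-stage Dockerfile looks something like this (module layout, Go version, and port are hypothetical):

```dockerfile
# Build stage: full Go toolchain, discarded after the build
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary so the runtime image needs no libc
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: just the binary on a minimal base
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```

The final image is essentially the size of the compiled binary, which is why Go deploys are megabytes rather than hundreds of megabytes.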
Things that went wrong
Setting this up wasn’t smooth, and it’s worth being honest about that.
Traefik threw a Docker API version error on first start. Traefik v3 defaults to an old API version for backwards compatibility and needed an environment variable to override it. Ten minutes to debug, one line to fix. Using v2.11 instead sidesteps this entirely.
The Oracle VM went completely unresponsive twice during setup. SSH wasn’t responding, Oracle’s own dashboard reported the instance as unresponsive. I used Oracle’s diagnostic reboot option, which is a hard reset at the hypervisor level. Came back clean both times. Free tier ARM instances are on shared hardware and Oracle doesn’t guarantee stability. Adding a swap file helped. The .NET build is memory-hungry on a 1GB VM and without swap it can OOM silently.
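For reference, a swap file takes a few commands to set up (run as root; 2GB is a plausible size for a 1GB VM, not a benchmarked figure):

```shell
# Create and enable a 2GB swap file
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Persist across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```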
The cert resolver naming mismatch took an embarrassingly long time to spot. My existing Traefik setup had a resolver called myresolver. The new app container had certresolver=letsencrypt. Traefik registered the router, showed it as active in the dashboard, but silently skipped certificate provisioning because the resolver name didn’t match anything in its config. The fix was a one-word change.
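The mismatch is easy to see side by side: the `certresolver` in the container's label has to match a resolver name defined in Traefik's static configuration. A sketch of the file-based static config, assuming the Let's Encrypt HTTP challenge (email and paths are placeholders):

```yaml
# traefik.yml (static config): the resolver is named "myresolver"
certificatesResolvers:
  myresolver:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web
```

A container label pointing at `certresolver=letsencrypt` matches nothing here, so Traefik routes the traffic but never requests a certificate, and nothing in the logs shouts about it.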
The Forgejo runner registration API changed. The current version ignores CLI flags entirely and only accepts configuration via environment variables. Took a few attempts to figure out, since the error messages weren’t helpful.
Git clone was silently falling back to stale code. The initial clone happened over HTTPS, which meant every subsequent git pull would hang waiting for credentials that would never arrive. Because the directory already existed, the script would hit the git pull branch, fail silently, and then build whatever stale code was there. The fix was to delete the directory and re-clone over SSH, with the deploy user’s public key registered in Forgejo.
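A guard in the deploy script would catch that failure mode early. A sketch (`remote_is_ssh` is my name for the helper, not part of the original script):

```shell
#!/bin/sh
# Detect when an existing checkout's origin remote is HTTPS, which
# would hang on `git pull` waiting for credentials that never arrive
remote_is_ssh() {
  url=$(git -C "$1" remote get-url origin 2>/dev/null) || return 1
  case "$url" in
    git@*|ssh://*) return 0 ;;   # SSH remote: safe to pull
    *)             return 1 ;;   # HTTPS or missing: re-clone over SSH
  esac
}

# In the deploy script, pull only when the remote is SSH:
#   if [ -d "${APP_DIR}/${APP_NAME}/.git" ] && remote_is_ssh "${APP_DIR}/${APP_NAME}"; then
#     git -C "${APP_DIR}/${APP_NAME}" pull
#   else
#     rm -rf "${APP_DIR}/${APP_NAME}"
#     git clone "git@${GIT_HOST}:${GIT_USER}/${APP_NAME}.git" "${APP_DIR}/${APP_NAME}"
#   fi
```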
Private reusable workflows don’t work with the current act_runner. I tried centralising the deploy logic in a shared workflows repo, so each project calls it with a single line. Works fine when the repo is public. With a private repo, the runner’s internal git library ignores all credential helpers, .netrc, and .gitconfig mounts entirely. Keeping the deployment script inline in each project works fine and is easier to reason about anyway.
Cloudflare broke SSH. Adding Cloudflare’s proxied wildcard record meant traffic to the app server hostname started routing through Cloudflare, including SSH. Cloudflare doesn’t proxy SSH, so the connection timed out. The fix was to make any hostname I needed SSH access to a DNS only record, while keeping the * wildcard proxied for app subdomains. Bonus: the proxied wildcard means individual app hostnames don’t expose origin IPs.
None of these were showstoppers, but they’re the kind of things that eat an afternoon if you’re not expecting them.
Security
A few things worth noting for anyone doing something similar.
The app server runs a dedicated deploy user with Docker group membership. The CI runner never touches the default user or has sudo access. If the deploy key were compromised, the blast radius is limited to redeploying apps.
Forgejo is configured with `REQUIRE_SIGNIN_VIEW = true`, so the git server isn't browsable without an account. Self-registration is disabled.
SSH only accepts public key authentication, on a non-standard port, with root login disabled. Within two minutes of the server coming online, there were already bots trying root, test, and admin logins on port 22. They get nowhere.
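The relevant `sshd_config` lines for that setup look something like this (the port number is illustrative):

```
Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
KbdInteractiveAuthentication no
```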
MFA is enabled on the Forgejo account via TOTP. The git server is the entry point to the entire deployment pipeline, so it felt like the obvious place to add a second factor.
Email notifications are handled via Resend’s SMTP relay, so Forgejo can send password reset emails and action notifications to a real inbox.
What it supports
Anything that runs in Docker, which is basically everything. Static sites, .NET Core APIs, Go services, Node apps, Python services. If it has a Dockerfile, it’ll deploy.
For data persistence, SQLite databases survive deployments by mounting a Docker volume. For projects that need a proper relational database, Postgres can be added as a second service in the docker-compose.yml.
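Both patterns are a few lines in the compose file. A sketch (service names, image tag, and credentials are placeholders, and the Traefik labels from earlier are omitted for brevity):

```yaml
services:
  app:
    build: .
    volumes:
      # SQLite file lives on a named volume, so it survives rebuilds
      - app-data:/data
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pg-data:/var/lib/postgresql/data

volumes:
  app-data:
  pg-data:
```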
The subdomain naming convention is just the repo name. A repo called recipe-tracker ends up at recipe-tracker.gerardbeckerleg.com. Trivial to have dozens of small projects running simultaneously, with no naming collisions or manual routing config.
The cost
- Server 1: £0. Oracle Always Free ARM VM.
- Server 2: £0. Oracle Always Free ARM VM.
- Domain: ~£10/year.
- SSL certificates: £0. Let's Encrypt, automated.
- CI/CD: £0. Self-hosted runner.
- CDN and DDoS protection: £0. Cloudflare free tier.
- Email: £0. Resend free tier.
The only real cost is the domain name. Everything else is free indefinitely.
Is it production-ready?
No, and that’s not the point. There’s no high availability, no load balancing, no automated backups. If Oracle’s Sydney data centre has a bad day, my sites go down.
But for personal projects and proof of concepts, none of that matters. What matters is that the friction between having an idea and having it running on a real URL is now low enough that I’ll actually finish things. That’s the whole point.
The graveyard of half-built ideas is getting smaller.
What’s next
A proper backup strategy for the Forgejo data and any persistent app databases. Right now, if the infra server dies, I lose my repos, which is not ideal.
A personal website optimised for how AI search tools surface content, which has been on the to-do list long enough to be embarrassing.
More stupid ideas.