A walkthrough of migrating this blog off AWS (CodeCommit + CodeBuild + S3 + CloudFront) onto a self-hosted stack using Forgejo for git and CI/CD. Most of the config was written by Claude Code.
The old AWS stack was four services doing what two containers now do. The migration was part of a consolidation onto Oracle Cloud free tier VMs. The target infrastructure was already there: a Forgejo instance on one VM and a Docker/Traefik apps server on another.
## Architecture Overview

```
Browser → Cloudflare (CDN + DNS) → Traefik (TLS + routing) → nginx:alpine (static files)
                                                                    ↑
                                                         Forgejo Actions runner
                                                         (Hugo build + deploy)
```
Two servers:
| Server | Role |
|---|---|
| Git server | Forgejo 7 + act_runner |
| Apps server | Docker containers behind Traefik v2.3 |
The apps server is an Oracle Cloud ARM VM with 1GB RAM. Not enough to run a Hugo Extended build without trouble. Hugo Extended compiles SCSS via libsass, which is memory-hungry enough to cause problems on a constrained box. So the build happens on the CI runner and the compiled output gets shipped over to the apps server.
## The Dockerfile
The image gets built on the apps server rather than the CI runner (see gotchas). It just wraps nginx:alpine around the pre-compiled static files:
```dockerfile
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY public/ /usr/share/nginx/html/
EXPOSE 80
```
The nginx config handles Hugo’s clean URLs (/posts/my-post/ maps to /posts/my-post/index.html), gzip compression, cache headers for static assets, and blocks config files from being served publicly:
```nginx
server {
    listen 80;
    server_name _;

    root /usr/share/nginx/html;
    index index.html;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript
               application/rss+xml application/atom+xml image/svg+xml;

    location ~* \.(css|js|woff|woff2|ttf|otf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location ~* \.(jpg|jpeg|png|gif|webp|avif|ico|svg)$ {
        expires 30d;
        add_header Cache-Control "public";
    }

    location ~* \.(md|yml|yaml|toml)$ {
        deny all;
        return 404;
    }

    location ~ /\. {
        deny all;
        return 404;
    }

    location / {
        try_files $uri $uri/ $uri.html =404;
    }
}
```
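Once the site is live, the header and blocking rules can be spot-checked with curl against the deployed hostname (the asset path here is illustrative, not a real file in the repo):

```shell
# static assets should come back with a long-lived immutable cache header
curl -sI https://blog.example.com/css/style.css | grep -i cache-control

# config files must return 404, not leak their contents
curl -sI https://blog.example.com/config.toml | head -n 1
```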
## The docker-compose.yml (Traefik labels)
All apps on the server follow the same pattern: join the external web network, add Traefik labels, never bind host ports directly.
```yaml
networks:
  web:
    external: true

services:
  web:
    build: .
    container_name: my-blog
    restart: unless-stopped
    networks:
      - web
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.my-blog.rule=Host(`blog.example.com`)"
      - "traefik.http.routers.my-blog.entrypoints=websecure"
      - "traefik.http.routers.my-blog.tls.certresolver=myresolver"
      - "traefik.http.services.my-blog.loadbalancer.server.port=80"
```
Worth noting: Traefik router and service names need to use hyphens. Dots are key separators in Traefik's label syntax, so a dotted name gets misparsed and routing silently fails.
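For illustration, a dotted name that breaks versus the hyphenated form that works (`my.blog` is a hypothetical name):

```yaml
# broken: the dot in "my.blog" is read as a key separator, so Traefik
# never assembles a router named "my.blog" and the rule is dropped
- "traefik.http.routers.my.blog.rule=Host(`blog.example.com`)"

# works: hyphens keep the router name a single label segment
- "traefik.http.routers.my-blog.rule=Host(`blog.example.com`)"
```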
## The Pipeline
```yaml
name: Deploy

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Install Hugo v0.147.3
        run: |
          wget -q -O /tmp/hugo.tar.gz \
            https://github.com/gohugoio/hugo/releases/download/v0.147.3/hugo_extended_0.147.3_linux-amd64.tar.gz
          tar -xzf /tmp/hugo.tar.gz -C /tmp
          mv /tmp/hugo /usr/local/bin/hugo
          hugo version

      - name: Build Hugo site
        run: hugo --minify

      - name: Set up SSH key
        run: |
          mkdir -p ~/.ssh
          printf '%s\n' "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H apps.example.com >> ~/.ssh/known_hosts

      - name: Stream build artifacts to apps server
        run: |
          tar -czf - Dockerfile nginx.conf docker-compose.yml public/ | \
            ssh -i ~/.ssh/id_rsa deploy@apps.example.com \
              "mkdir -p /opt/apps/my-blog && \
               cd /opt/apps/my-blog && \
               tar -xzf -"

      - name: Build container and restart
        uses: https://github.com/appleboy/ssh-action@v1.0.3
        with:
          host: apps.example.com
          username: deploy
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          command_timeout: 10m
          script: |
            set -e
            cd /opt/apps/my-blog
            docker compose down
            docker compose up -d --build
            docker image prune -f

      - name: Purge Cloudflare cache
        run: |
          curl -s -X POST \
            "https://api.cloudflare.com/client/v4/zones/${{ secrets.CF_ZONE_ID }}/purge_cache" \
            -H "Authorization: Bearer ${{ secrets.CF_API_TOKEN }}" \
            -H "Content-Type: application/json" \
            --data '{"files":["https://blog.example.com"]}'
```
Secrets required in Forgejo: `DEPLOY_SSH_KEY`, `CF_ZONE_ID`, `CF_API_TOKEN`.
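The deploy key pair can be generated locally. A sketch (the filename and comment are arbitrary): the public half goes into the deploy user's `authorized_keys` on the apps server, the private half becomes the `DEPLOY_SSH_KEY` secret in Forgejo.

```shell
# dedicated key pair for CI deploys; no passphrase, since the runner uses it non-interactively
ssh-keygen -t ed25519 -N "" -C "forgejo-deploy" -f ./deploy_key

# public half  -> apps server: ~deploy/.ssh/authorized_keys
# private half -> Forgejo repository secret DEPLOY_SSH_KEY
cat ./deploy_key.pub
```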
## Gotchas
Hugo Extended is required. The hyde-hyde theme compiles SCSS at build time via libsass. The standard Hugo binary fails with `TOCSS: failed to transform`. You need the `hugo_extended_*` release asset, not the plain one.
The act_runner has no sudo. The runner container runs as root already, so `sudo mv` just fails with command not found. Drop the `sudo` and run `mv` directly.
The act_runner has no Docker. The runner job container does not have the Docker CLI available. Building and pushing an image from the CI side is not possible without setting up Docker-in-Docker, which is extra faff. The simpler approach is to ship the build output to the apps server and let Docker do its thing there. The Docker build on the apps server is dead cheap: nginx:alpine plus a directory copy. No compilation, no dependencies, and well within the 1GB RAM limit.
Use `tar | ssh` rather than `rsync` or `scp`. The deploy user on the apps server cannot install packages, and `rsync` was not available on the remote end. `scp` works, but `tar` piped over SSH is cleaner: one command, no extra packages needed on either end, and it streams straight through without writing a temp file.
Hugo skips future-dated posts. Hugo treats the `date` field in frontmatter as a publish date and will not build posts dated in the future. If a post goes missing after a green build, check the date.
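Hugo can report what it is skipping, which is quicker than diffing the build output. Both `hugo list future` and the `--buildFuture` flag are standard Hugo CLI:

```shell
# show content a normal build skips because its date is in the future
hugo list future

# or run a local preview that includes future-dated posts
hugo server --buildFuture
```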
## Claude Code’s Role
The whole migration was done in one Claude Code session. It audited the existing repos first to pick up the exact Traefik label syntax, certresolver name, network names, and deployment patterns before writing a single line. That meant the generated config slotted straight in without any manual tweaking.
The pipeline took a few goes to get right as the runner constraints showed up one at a time: no sudo, then no Docker, then no rsync on the remote. Each failure came back as an error message and got fixed in one edit.
Start to finish, first prompt to working deployment, was about 30 minutes.
