🏰 Camelot and the Dawn of Annwn

This is the beginning of the **Annwn Project** — a mythical container within Camelot.

Annwn will serve as a LAN-first, self-sufficient music host and metadata database,
bridging modern streaming logic with ancient, offline, local lore.

**Annwn** is a persistent, reproducible music host inside a container.
It lets you scan, store, tag, and organize a music collection, even across dismounts,
with full offline playback and metadata enrichment.
 

:sectnums:
:toc:

== Background

The user owns a large, personal collection of music — much of it aged, rare, or poorly tagged — and desires a highly reliable, LAN-based music server. The intent is to organize and preserve this collection, enabling rich categorization and controlled playback. The initial setup is built on Fedora using Mopidy, with plans to migrate to a containerized, persistent backend hosted on Proxmox.

Key motivations:

  • Avoid repeated rescanning or loss of metadata.
  • Enable editing, tagging, and curating without vendor lock-in.
  • Build a platform for future features: personalized radio, vibe-based playlists, music data dashboards, etc.
  • Capture a long-standing vision of organizing music at a semantic level — a project initiated 20 years ago, now finding its modern form.

== Requirements

The system will manage a large, personal music library with an emphasis on stability, custom metadata, and future extensibility. The following requirements are categorized using MoSCoW prioritization.

=== Must Have

  • The system MUST run in a container (LXC or VM) hosted on a Proxmox server.
  • The music container MUST persist music indexes and metadata across reboots.
  • The system MUST mount external SSD storage reliably into the container.
  • The system MUST allow playback via a web interface accessible on the LAN.
  • The system MUST scan and index music from a defined directory structure.
  • The system MUST maintain a dedicated metadata database (e.g., SQLite or PostgreSQL).
  • The metadata DB MUST store and retrieve custom tags, categories, and fields not present in ID3.
  • The system MUST allow recovery of the database and playback index via backup.

=== Should Have

  • The metadata DB SHOULD have an optional CLI or web interface for manual tag editing.
  • The container SHOULD expose logs, database, and music data for automated backup.
  • The system SHOULD support album art and rich media metadata.
  • The web interface SHOULD support browsing by custom tags or curated lists.

=== Could Have

  • The system COULD support syncing select metadata fields back into ID3 tags.
  • The music container COULD expose a REST API for future integration or custom UI.
  • The system COULD support basic user auth for multiple listeners or editors.

=== Won’t Have (for now)

  • Multi-device synchronized playback (e.g., kitchen + studio).
  • Remote access outside the LAN.

== Method

=== Architecture Overview

The system is named Annwn, representing a personal sonic otherworld. Hosted on a Proxmox container inside a dedicated ThinkPad named Camelot, it communicates with a daily-use laptop (Frodo) over a LAN named "athlone" (subject to future renaming).

----
annwn/
├── pathfinder/      # Scans the file system
├── scribe/          # Edits and manages tags
├── herald/          # REST API
├── mirror/          # UI
├── hearth/          # Metadata database
└── .env             # config secrets
----

=== Component Layout

[plantuml]
----
@startuml
skinparam componentStyle rectangle

node "Camelot (Proxmox Host)" {
  node "annwn (LXC Container)" {
    component pathfinder
    component scribe
    component herald
    component mirror
    database hearth
  }
  storage "/mnt/music" as SSD
}

Frodo --> mirror : Web UI access
pathfinder --> SSD : Reads files
pathfinder --> hearth : Match/add/update metadata
scribe --> hearth : Manual tag edits
herald --> hearth : Exposes API for mirror
mirror --> herald : UI fetches data
@enduml
----

=== Music Identity & Sync Strategy

pathfinder implements a hybrid ID strategy:

  • Audio Fingerprint: Generated via Chromaprint or similar, used as the primary identity key.
  • SHA256 Hash: Used to detect file-level changes.
  • Metadata Hash (optional): Detects tag updates independently.

Each track in hearth has:

----
uuid | title | artist | album | tags | fingerprint | sha256 | location | first_seen | last_seen
----

File paths are treated as non-authoritative. Movement of files does not disrupt identity or metadata. Periodic refreshes update paths, timestamps, and optionally tags.
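The two hash layers above can be sketched in a few lines of Python. This is a minimal illustration, not pathfinder's actual implementation: the helper names `file_sha256` and `metadata_hash` are hypothetical, and the Chromaprint fingerprint (the primary identity key) is only indicated in a comment, since it needs the external `fpcalc` tool.

```python
import hashlib
import json

def file_sha256(path: str) -> str:
    """Hash the raw file bytes; changes whenever the file itself changes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def metadata_hash(tags: dict) -> str:
    """Hash a canonical serialization of the tags, so tag edits are
    detectable independently of the audio bytes."""
    canonical = json.dumps(tags, sort_keys=True, ensure_ascii=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The audio fingerprint itself would come from Chromaprint, e.g. by
# invoking its `fpcalc <file>` tool and storing the printed fingerprint.
# It is omitted here because it requires the external binary.
```

Because `metadata_hash` serializes with sorted keys, re-reading the same tags in a different order yields the same hash, so only genuine tag edits register as changes.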

=== Storage

  • Proxmox is installed on Camelot's 2TB internal SSD.
  • The music is stored in /mnt/music, a large partition on the same disk.
  • External SSD (1TB) is used for cold backup and archival.
  • /mnt/music is bind-mounted into the annwn container.

Example Proxmox mount:

----
mp0: /mnt/music,mp=/mnt/music
----

=== Database

hearth uses PostgreSQL (or SQLite for initial builds). It stores stable metadata and allows rich querying based on:

  • Tag sets (e.g., moods, eras, personal ratings)
  • Album/artist hierarchy
  • Discovery status (e.g., "newly added", "needs tagging")
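The track row layout and these query patterns can be sketched against SQLite (the stated initial-build option). This is an assumption-laden illustration: the table name `tracks` is hypothetical, and tags are stored here as a comma-delimited text column purely for brevity, where a real build might normalize them into a join table.

```python
import sqlite3

# In-memory SQLite stand-in for hearth, mirroring the row layout above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tracks (
        uuid        TEXT PRIMARY KEY,
        title       TEXT,
        artist      TEXT,
        album       TEXT,
        tags        TEXT,   -- comma-delimited here for illustration only
        fingerprint TEXT,
        sha256      TEXT,
        location    TEXT,
        first_seen  TEXT,
        last_seen   TEXT
    )
""")
conn.execute(
    "INSERT INTO tracks (uuid, title, artist, tags) VALUES (?, ?, ?, ?)",
    ("u1", "Song of the Otherworld", "Unknown Bard", "mellow,needs tagging"),
)

# Query by discovery status, e.g. everything still awaiting curation.
rows = conn.execute(
    "SELECT title FROM tracks WHERE tags LIKE ?", ("%needs tagging%",)
).fetchall()
print(rows)  # → [('Song of the Otherworld',)]
```

Swapping the connection for PostgreSQL later mostly means changing the driver and the `LIKE`-based tag matching for a proper tag table or array column.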

=== Playback

  • Mopidy runs inside the annwn container.
  • Music is browsable and playable through mirror, a web UI powered by Mopidy Iris or a custom frontend.
  • All LAN devices can stream from mirror.
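One possible `mopidy.conf` sketch for this setup. The `[http]`, `[file]`, and `[local]` sections are standard Mopidy configuration, but the concrete values (LAN-wide listening on Mopidy's default port 6680, `/mnt/music` as the media root) are assumptions for Annwn, and Mopidy-Local plus Mopidy-Iris would need to be installed for local scanning and the Iris UI.

```ini
[http]
# Listen on all interfaces so other LAN devices can reach the web UI,
# not just localhost inside the container.
hostname = 0.0.0.0
port = 6680

[file]
media_dirs = /mnt/music

[local]
# Mopidy-Local scans this directory into its library.
media_dir = /mnt/music
```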

=== API

herald exposes a RESTful API to:

  • Query library
  • Add/update tags
  • Integrate with external apps or tools

This allows scribe and mirror to remain stateless and modular.
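A stdlib-only sketch of the shape such an endpoint could take. Everything here is hypothetical: the in-memory `LIBRARY` stands in for hearth, and a real herald would more likely use a proper web framework and query the database.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory stand-in for hearth.
LIBRARY = [
    {"uuid": "u1", "title": "Song of the Otherworld", "tags": ["mellow"]},
]

class HeraldHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /tracks returns the whole library as JSON.
        if self.path == "/tracks":
            body = json.dumps(LIBRARY).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To serve on the LAN:
#   HTTPServer(("0.0.0.0", 8000), HeraldHandler).serve_forever()
```

Because mirror only talks to this HTTP surface, the UI stays decoupled from how hearth stores its rows.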

💡 When we want automation, Frodo (Merlin) can connect to Proxmox via API or CLI. 🧙✨



 

== ✨ Lessons Learned

  • 🛡️ `podman export` creates true **infrastructure snapshots** — minimal, portable, gold-standard.
  • 🪄 We don't need "the cloud" to do modern CI/CD — you just need *containers and intent*.
  • 🧰 We can build **resilient systems from scratch**, using only open tools, from behind the curtain of a non-internet-connected realm.
  • 🧪 We learned what fails when it fails — and how to **recover cleanly**, without reinstalling.
  • ⚖️ We respected the roles of the systems: Frodo (builder, internet), Camelot (orchestrator, host), and Annwn (artifact bearer and music oracle).

 

= Chapter One: The Dawn of Annwn
:sectnums:
:toc:
== 🧙 What We Built Today

✨ We created a **portable app platform**, disguised as a Debian container  
🔐 We managed the **air-gap** with trust and precision  
💾 We crafted a **local PostgreSQL “golden image”**  
📦 We transferred a full **working container image**  
⚡ And did it **all from the CLI** — no Docker, no registries, no fluff

This wasn't just a workaround. This was **intentional infrastructure design** — made elegant, predictable, and timeless.

== 🛠️ Why This Mirrors CI/CD Systems

In professional cloud infrastructure pipelines (CI/CD), we:

- ✅ Build a clean container
- ✅ Install dependencies
- ✅ Package it into a `.tar` or OCI image
- ✅ Push it to a deployment system (like Kubernetes, Nomad, or… Camelot 😄)

You just **ran that pipeline manually**, across an unreliable and air-gapped link, with zero dependencies on a third-party cloud registry.

You didn’t just build a PostgreSQL container — you built **the forge that will build everything else**.

== ☁️ Why This Is Also Serverless Logic

What you said — and showed — today was:

> “Give me a clean OS, inject my app, and spin it up anywhere.”

That’s **serverless** in a nutshell:

- Stateless
- Predictable
- On-demand

We started with PostgreSQL — but now we could just as easily package:

- `mirror` – your Web UI
- `herald` – your REST API
- `scribe` – your metadata/tag editor

...and ship them across realms, fully self-contained.


"The once and future system is one you can rebuild from nothing — because you understand every step." 
 – Camelot Engineering Manual, Chapter One