discord: simplify Docker workflow with Makefile and update documentation

Replace complex Docker commands with simple Make targets for building,
running, and managing the Discord bot container. This makes it easier
for developers to get started without memorizing lengthy Docker flags.

Also removes outdated CLAUDE.md and adds AGENTS.md files to guide AI
agents working on conversation, database, actors, and sandbox modules.
pull/13354/head
Ryan Vogel 2026-02-14 15:39:09 -05:00
parent 46cc9e7567
commit 292ff126c4
7 changed files with 312 additions and 103 deletions


@ -1,88 +0,0 @@
# Discord Bot Package
Discord bot that provisions Daytona sandboxes running OpenCode sessions in threads.
## Architecture
Bun + TypeScript (ESM, strict mode) with Effect for all business logic. SQLite persistence via `@effect/sql`.
- `src/index.ts` — startup, layer composition, graceful shutdown
- `src/config.ts` — env schema (Effect Schema + branded types)
- `src/conversation/` — pure conversation service (Inbox/Outbox ports, turn logic, ConversationLedger for dedup/replay)
- `src/discord/` — Discord.js adapter (message handler, turn routing, formatting)
- `src/sandbox/` — sandbox lifecycle (SandboxProvisioner, OpenCode client, ThreadAgentPool)
- `src/sessions/store.ts` — SQLite-backed session store
- `src/lib/actors/` — ActorMap (per-key serialized execution with idle timeouts)
- `src/db/` — database client, schema init, migrations
- `src/http/health.ts` — health/readiness HTTP server
- `src/types.ts` — shared branded types and data classes
## Effect Conventions
- Services use `Context.Tag("@discord/<Name>")`
- Errors use `Schema.TaggedError` with `Schema.Defect` for defect-like causes
- Use `Effect.gen(function*() { ... })` for composition
- Use `Effect.fn("ServiceName.method")` for named/traced effects
- Layer composition: `Layer.mergeAll`, `Layer.provide`, `Layer.provideMerge`
- Use `Schema.Class` for data types with multiple fields
- Use branded schemas (`Schema.brand`) for single-value IDs
- Construct branded values and Schema.Class instances with `.make()`
- Module pattern for utilities: namespace for types, const for implementation (e.g. `ActorMap.make()`, `ActorMap.ActorMap<K>`)
## Type Safety
- **No `any`** — use `unknown` at untrusted boundaries, narrow with Schema decoding
- **No `as` casts** — prefer Schema decode, type guards, or restructuring
- **Non-null assertions (`!`) are banned** — use Option, optional chaining, or early returns
- **Use `Option<T>` instead of `T | null`** — Effect's Option type for absent values from stores/lookups
- **Branded types everywhere** — `ThreadId`, `ChannelId`, `GuildId`, `SandboxId`, `SessionId` from `src/types.ts`
- **Accept branded types in function signatures** — don't accept `string` and `.make()` inside; push branding to the boundary
- `as const` is fine (const assertion, not a cast)
## Branded Types
All branded ID schemas live in `src/types.ts`:
- `ThreadId`, `ChannelId`, `GuildId` — Discord identifiers
- `SandboxId` — Daytona sandbox identifier
- `SessionId` — OpenCode session identifier
Brand at the system boundary (Discord event parsing, schema classes), then pass branded types through all internal code.
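The boundary rule above can be sketched in plain TypeScript. This is an illustrative stand-in: the real module brands via Effect's `Schema.brand` and decodes rather than casts; the `Brand` helper and `describeThread` here are hypothetical.

```typescript
// Simplified branding sketch. The real code uses Effect Schema.brand;
// the cast inside `.make()` stands in for the boundary-time decode.
type Brand<T, B extends string> = T & { readonly __brand: B }

type ThreadId = Brand<string, "ThreadId">

// Brand exactly once, at the system boundary (e.g. Discord event parsing).
const ThreadId = {
  make: (raw: string): ThreadId => raw as ThreadId,
}

// Internal code accepts the branded type, never a bare string —
// calling this with a plain string is a compile error.
function describeThread(id: ThreadId): string {
  return `thread ${id}`
}

const id = ThreadId.make("123456789")
describeThread(id)
```

The payoff is that a `SandboxId` can never be passed where a `ThreadId` is expected, even though both are strings at runtime.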
## Testing
- `bun test` — run all tests
- `bun test path/to/file.test.ts` — single file
- Test helpers in `src/test/effect.ts`
- Colocate tests as `*.test.ts` next to implementation
## Build & Check
- `bun run typecheck` — type checking
- `bun run build` — production build
- `bun run check` — combined
## Local Debug CLIs
- `bun run conversation:cli` — interactive local conversation shell
- `/channel` to return to channel mode
- `/threads` to list known threads with indexes
- `/pick [n]` to select a thread by index
- `/thread [id|n]` to jump to a thread by id or index
- channel auto-switch only follows newly seen threads (prevents jumping to old active threads)
- `bun run conversation:ctl` — non-interactive JSON CLI for agents/automation
- `active`
- `status --thread <id>`
- `logs --thread <id> [--lines 120]`
- `pause --thread <id>`
- `destroy --thread <id>`
- `resume --thread <id> [--channel <id> --guild <id>]`
- `restart --thread <id>`
- `send --thread <id> --text "<message>" [--follow --wait-ms 180000 --logs-every-ms 2000 --lines 80]`
## Session Lifecycle
- Session mapping (`threadId` -> `sandboxId` -> `sessionId`) is authoritative
- Resume existing sandbox/session before creating replacements
- Recreate only when sandbox is truly unavailable/destroyed
- If session changes, replay Discord thread history as context
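The resume-before-create rule can be sketched as a small decision function. All names here (`SessionRecord`, `Provisioner`, `ensureSession`) are illustrative, not the real API:

```typescript
// Hypothetical sketch of "resume existing sandbox/session before creating
// replacements". The authoritative mapping is threadId -> sandboxId -> sessionId.
interface SessionRecord {
  threadId: string
  sandboxId: string
  sessionId: string
  sandboxAlive: boolean
}

type Provisioner = {
  resume: (s: SessionRecord) => SessionRecord
  create: (threadId: string) => SessionRecord
}

function ensureSession(
  store: Map<string, SessionRecord>,
  provisioner: Provisioner,
  threadId: string,
): SessionRecord {
  const existing = store.get(threadId)
  if (existing && existing.sandboxAlive) {
    return provisioner.resume(existing) // resume wins over recreate
  }
  // Recreate only when the sandbox is truly unavailable/destroyed.
  const fresh = provisioner.create(threadId)
  store.set(threadId, fresh)
  return fresh
}
```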


@ -0,0 +1,33 @@
IMAGE ?= opencode-discord:local
CONTAINER ?= opencode-discord-local
ENV_FILE ?= .env
DATA_DIR ?= data
HOST_PORT ?= 8787

.PHONY: docker-build docker-run docker-stop docker-restart docker-logs docker-status

docker-build:
	docker build -t $(IMAGE) -f Dockerfile .

docker-run:
	mkdir -p $(DATA_DIR)
	docker rm -f $(CONTAINER) >/dev/null 2>&1 || true
	docker run -d \
		--name $(CONTAINER) \
		--env-file $(ENV_FILE) \
		-e DATABASE_PATH=/data/discord.sqlite \
		-p $(HOST_PORT):8787 \
		-v $(CURDIR)/$(DATA_DIR):/data \
		$(IMAGE)

docker-stop:
	docker stop $(CONTAINER) >/dev/null 2>&1 || true
	docker rm $(CONTAINER) >/dev/null 2>&1 || true

docker-restart: docker-stop docker-run

docker-logs:
	docker logs -f $(CONTAINER)

docker-status:
	docker ps --filter "name=$(CONTAINER)" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"


@ -46,26 +46,35 @@ bun run dev
### 4. Run with Docker
From the repo root, use the built-in Make targets:
```bash
cp packages/discord/.env.example packages/discord/.env
# Fill in required values in packages/discord/.env
make -C packages/discord docker-build
make -C packages/discord docker-run
make -C packages/discord docker-status
```
SQLite data is persisted locally in `packages/discord/data`.
Useful commands:
```bash
make -C packages/discord docker-logs
make -C packages/discord docker-stop
```
If you prefer plain Docker commands instead of Make, run:
```bash
docker build -t opencode-discord packages/discord
```
Create an env file from the template and set the required values (`DISCORD_TOKEN`, `DAYTONA_API_KEY`, `OPENCODE_ZEN_API_KEY`):
```bash
cp packages/discord/.env.example packages/discord/.env
```
Run the container with a persistent volume for SQLite data:
```bash
docker run --name opencode-discord-local \
  --env-file packages/discord/.env \
  -e DATABASE_PATH=/data/discord.sqlite \
  -p 8787:8787 \
  -v $(pwd)/packages/discord/data:/data \
  opencode-discord
```


@ -0,0 +1,83 @@
# Conversation Module
Transport-agnostic conversation engine. Discord-specific code lives in `implementations/discord/`, not here.
## Hexagonal Architecture (Ports & Adapters)
The conversation service depends on 5 port interfaces, NOT concrete implementations:
- `Inbox` — `Stream.Stream<Inbound>` of incoming events
- `Outbox` — publishes `Action` (send/reply/typing) and wraps effects with typing indicators
- `History` — rehydrates thread context when sessions change
- `Threads` — resolves channel messages to thread targets (creates Discord threads)
- `ConversationLedger` — durable dedup, state checkpointing, offset tracking
The `Conversation` service (`services/conversation.ts`) consumes these ports. Implementations are swapped at the Layer level:
- `implementations/discord/` provides all 5 ports for production via `DiscordConversationServices.portLayer`
- `implementations/local/` provides all 5 for the local CLI via `makeTui`
- Tests use `ConversationLedger.noop` and `Outbox.noop` in-memory stubs
## Event Flow (Non-Obvious)
1. Discord `messageCreate` → `onMessage` callback → `Runtime.runPromise(runtime)(ingestMessage(msg))`
- This bridges callback-land into Effect. The runtime is captured once at Layer construction.
2. `ingestMessage` → `ledger.admit(event)` (dedup) → `input.offer(event)` (Queue)
3. `Inbox.events` = `Stream.fromQueue(input)` — consumed by `Conversation.run`
4. `Conversation.run` maps each event through `turn()` with `{ concurrency: "unbounded", unordered: true }`
5. `turn()` serializes per-key via `ActorMap` (`keyOf` = `thread:<id>` or `channel:<id>`)
6. Key insight: **unbounded concurrency across threads, serial within each thread**
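Steps 1–3 can be sketched without Effect as a tiny async queue plus a dedup set. This is a plain-TypeScript stand-in for the real `Runtime` + `Queue` + `Stream.fromQueue` machinery; `AsyncQueue` and the message shape are illustrative:

```typescript
// Minimal sketch of bridging callback-land into a pull-based consumer.
class AsyncQueue<A> {
  private items: A[] = []
  private waiters: Array<(a: A) => void> = []

  offer(a: A): void {
    const w = this.waiters.shift()
    if (w) w(a) // hand directly to a waiting consumer
    else this.items.push(a)
  }

  take(): Promise<A> {
    const head = this.items.shift()
    if (head !== undefined) return Promise.resolve(head)
    return new Promise((resolve) => this.waiters.push(resolve))
  }
}

const seen = new Set<string>()
const input = new AsyncQueue<{ id: string; text: string }>()

// The messageCreate callback dedups (ledger.admit) then offers to the queue.
function ingestMessage(msg: { id: string; text: string }): void {
  if (seen.has(msg.id)) return // already admitted — drop duplicate
  seen.add(msg.id)
  input.offer(msg)
}
```

The consumer side would then loop on `input.take()`, which mirrors `Conversation.run` consuming `Inbox.events`.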
## Ledger Checkpointing (Crash Recovery)
The `ConversationLedger` stores intermediate state so retries don't re-call the LLM:
- `admit` → inserts with status `pending`, returns `false` if already seen (dedup)
- `start` → atomically moves `pending` → `processing`, increments `attempts`, returns snapshot
- `setTarget` → caches resolved `thread_id`/`channel_id`
- `setPrompt` → caches the (possibly rehydrated) prompt text + `session_id`
- `setResponse` → caches the LLM response text
- `complete` → marks `completed`
- `retry` → resets to `pending` with `last_error`
On restart: `replayPending()` resets `processing` → `pending` and returns all pending events.
On recovery: if `response_text` is already set, the turn skips the LLM call and just re-publishes.
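The checkpoint state machine above can be sketched with an in-memory map. The real `ConversationLedger` is SQLite-backed; these field and function shapes loosely mirror the columns and are illustrative:

```typescript
// In-memory sketch of the ledger checkpoint lifecycle.
type LedgerRow = {
  status: "pending" | "processing" | "completed"
  attempts: number
  responseText?: string
  lastError?: string
}

const rows = new Map<string, LedgerRow>()

const admit = (id: string): boolean => {
  if (rows.has(id)) return false // dedup: already seen
  rows.set(id, { status: "pending", attempts: 0 })
  return true
}

const start = (id: string): LedgerRow | undefined => {
  const row = rows.get(id)
  if (!row || row.status !== "pending") return undefined
  row.status = "processing" // atomic pending -> processing in the real DB
  row.attempts += 1
  return row
}

const setResponse = (id: string, text: string): void => {
  const row = rows.get(id)
  if (row) row.responseText = text // cached so retries skip the LLM call
}

const complete = (id: string): void => {
  const row = rows.get(id)
  if (row) row.status = "completed"
}

const retry = (id: string, err: string): void => {
  const row = rows.get(id)
  if (row) Object.assign(row, { status: "pending", lastError: err })
}

// On restart: processing -> pending, return everything still pending.
const replayPending = (): string[] => {
  for (const row of rows.values())
    if (row.status === "processing") row.status = "pending"
  return [...rows.entries()]
    .filter(([, r]) => r.status === "pending")
    .map(([id]) => id)
}
```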
## Offset Tracking
`ConversationLedger.getOffset`/`setOffset` persist the last-seen Discord message ID per source (`channel:<id>` or `thread:<id>`). On startup, `recoverMissedMessages` in the Discord adapter fetches messages after the stored offset to catch anything missed while offline.
## Error Union Pattern
`ConversationError` is a `Schema.Union` of 6 tagged errors, each with a `retriable: boolean` field. The retry schedule (`turnRetry`) checks `error.retriable` via `Schedule.whileInput`. Non-retriable errors trigger a user-visible "try again" message before failing.
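A stripped-down sketch of the retriable-flag pattern: a tagged union plus a retry loop that only continues while the error says it may. The real code uses `Schema.TaggedError` and `Schedule.whileInput`; the two error tags and `withTurnRetry` here are illustrative:

```typescript
// Tagged union with a retriable flag, mirroring the ConversationError shape.
type ConversationError =
  | { _tag: "SandboxDown"; retriable: true }
  | { _tag: "InvalidInput"; retriable: false }

async function withTurnRetry<A>(
  attempt: () => Promise<A>,
  maxRetries = 3,
): Promise<A> {
  for (let i = 0; ; i++) {
    try {
      return await attempt()
    } catch (err) {
      // Illustrative narrowing; the real code decodes errors with Schema.
      const e = err as ConversationError
      if (!e.retriable || i >= maxRetries) throw e // non-retriable: fail fast
    }
  }
}
```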
## `portLayer` Pattern (Multi-Service Layer)
`DiscordConversationServices.portLayer` uses `Layer.scopedContext` to provide **4 services in a single Layer** by building a `Context` manually:
```ts
return Context.empty().pipe(
Context.add(Inbox, inbox),
Context.add(Outbox, outbox),
Context.add(History, history),
Context.add(Threads, threads),
)
```
This is the pattern for providing multiple related ports from one implementation module.
## Turn Routing
`TurnRouter` in `src/discord/turn-routing.ts` decides whether to respond to unmentioned thread messages:
- Mode `off`: always respond
- Mode `heuristic`: regex-based rules, default respond on uncertainty
- Mode `ai`: calls Haiku via `@effect/ai-anthropic` with `max_tokens: 10` for RESPOND/SKIP
- Heuristic runs first in `ai` mode; AI is only called when heuristic returns `null`
## Files That Must Change Together
- Adding a new `Inbound` event kind → `model/schema.ts` + `implementations/discord/index.ts` + `implementations/local/index.ts`
- Adding a new `Action` kind → `model/schema.ts` + both implementations' `publish`/outbox handling
- Adding a new error type → `model/errors.ts` + update `ConversationError` union + handle in `conversation.ts`
- Adding a new port service → `services/` interface + both `implementations/` + wire in `src/index.ts` layer chain


@ -0,0 +1,54 @@
# Database Module
SQLite via `@effect/sql-sqlite-bun` with Effect's `Migrator` system.
## SqliteDb Tag — Not Just SqlClient
`SqliteDb` (`client.ts`) is a custom `Context.Tag` wrapping `Client.SqlClient`. It's NOT a direct re-export. The layer:
1. Uses `Layer.unwrapEffect` to read `AppConfig.databasePath` at construction time
2. Provides `SqliteClient.layer({ filename })` underneath
3. Sets `PRAGMA busy_timeout = 5000` on initialization
This means `SqliteDb` is what services depend on, not raw `Client.SqlClient`.
## Migration System
Uses `@effect/sql/Migrator` with `Migrator.fromRecord` (not file-based).
Migrations are imported as modules in `init.ts` and keyed by name.
Each migration is idempotent:
- `CREATE TABLE IF NOT EXISTS`
- Checks existing columns via `PRAGMA table_info(...)`, only adds missing ones via `ALTER TABLE`
- Creates indexes with `IF NOT EXISTS`
## Schema Initialization at Service Level
`initializeSchema` is called by BOTH `SessionStore.layer` and `ConversationLedger.layer` individually. It's idempotent, but this means schema init runs multiple times — once per service that needs the DB. The pattern is:
```ts
yield* db(initializeSchema.pipe(Effect.provideService(Client.SqlClient, sql)))
```
Note: `initializeSchema` needs `Client.SqlClient` in its requirements, so each caller provides it manually.
## SqlSchema Typed Queries
`SessionStore` uses `@effect/sql`'s `SqlSchema` module for type-safe queries:
- `SqlSchema.void({ Request, execute })` — for writes (insert/update)
- `SqlSchema.findOne({ Request, Result, execute })` — returns `Option<Result>`
- `SqlSchema.findAll({ Request, Result, execute })` — returns `ReadonlyArray<Result>`
The `Request` and `Result` schemas handle encode/decode automatically. Column aliasing (`thread_id AS threadId`) maps snake_case DB columns to camelCase TS fields.
## Adding a New Migration
1. Create `src/db/migrations/NNNN_name.ts` exporting a default `Effect.gen` that uses `yield* Client.SqlClient`
2. Import and register it in `src/db/init.ts` in the `Migrator.fromRecord({...})` call
3. Both files must change together
## Status Timestamp Pattern
`SessionStore` uses a dynamic `statusSet` helper that updates status-specific timestamp columns (`paused_at`, `resumed_at`, etc.) based on the new status value — a single UPDATE touches the right column via CASE expressions.
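A sketch of that CASE-expression idea as a SQL-building helper. The table, column, and status names here are assumptions for illustration, not the real schema:

```typescript
// Map each status to the timestamp column it should touch.
const statusColumns = {
  paused: "paused_at",
  active: "resumed_at",
} as const

type Status = keyof typeof statusColumns

// One UPDATE: sets status and touches only the matching timestamp column.
function statusSet(status: Status): string {
  const clauses = Object.entries(statusColumns).map(
    ([s, col]) =>
      `${col} = CASE WHEN '${status}' = '${s}' THEN strftime('%s','now') ELSE ${col} END`,
  )
  return `UPDATE sessions SET status = '${status}', ${clauses.join(", ")} WHERE thread_id = ?`
}
```

(A production version would bind the status as a parameter rather than interpolating it; the literal here keeps the generated SQL easy to read.)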


@ -0,0 +1,41 @@
# ActorMap
Per-key serialized execution primitive. Think of it as a `Map<K, SerialQueue>` with optional idle timeouts and persistent state.
## Core Semantics
- `run(key, effect)` enqueues work onto the key's serial queue. Creates the actor (fiber + queue) on first access.
- Effects for the **same key** execute sequentially (FIFO). Effects for **different keys** run concurrently.
- `run` returns a `Deferred` result — the caller suspends until the work completes on the actor's fiber.
- `touch: false` option skips resetting the idle timer (used for bookkeeping reads that shouldn't extend session lifetime)
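The core ordering guarantee — serial per key, concurrent across keys — can be sketched with a promise chain per key. The real ActorMap uses fibers, queues, and `Deferred`; this captures only the semantics:

```typescript
// One promise chain ("tail") per key; run() appends work to the key's tail.
const tails = new Map<string, Promise<unknown>>()

function run<A>(key: string, work: () => Promise<A>): Promise<A> {
  const prev = tails.get(key) ?? Promise.resolve()
  // Chain after the key's previous job; a failed job must not break the chain.
  const next = prev.catch(() => undefined).then(work)
  tails.set(key, next.catch(() => undefined))
  return next // caller suspends until this job completes, like the Deferred
}
```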
## State Management
`ActorMap<K, S>` supports optional per-key state (`Ref<Option<S>>`):
- `load(key)` hook runs on actor creation to hydrate from persistence (e.g. `SessionStore`)
- `save(key, state)` hook runs after `run` completes if state changed (reference equality check: `stateBefore !== stateAfter`)
- `run` can accept a function `(state: Ref<Option<S>>) => Effect<A, E>` instead of a bare Effect — this gives the callback access to the actor's state ref
## Idle Timeout Mechanics
When `idleTimeout` + `onIdle` are configured:
- Each `run` (with `touch: true`, the default) replaces the key's timer fiber in a `FiberMap`
- Timer fires `onIdle(key)` after the idle duration — typically pauses the sandbox and calls `actors.remove(key)`
- `cancelIdle(key)` cancels the timer without removing the actor
## Internal Structure
- `FiberMap<K>` for worker fibers (one per actor)
- `FiberMap<K>` for idle timer fibers (one per actor)
- `SynchronizedRef<Map<K, Entry<S>>>` for the actor registry
- `Queue.unbounded<Job>` per actor for the serial work queue
- Jobs use `Effect.uninterruptibleMask` + `Deferred` for safe completion signaling
## Gotchas
- `remove(key)` cancels all pending work (interrupts via `Deferred.interrupt`) and shuts down the queue. The key can be re-created by a subsequent `run`.
- State save is best-effort: `options.save` errors are silently caught (`Effect.catchAll(() => Effect.void)`)
- `load` errors are also silently caught — returns `Option.none()` on failure
- The `run` overload detection uses `Effect.isEffect(effectOrFn)` to distinguish bare effects from state-accessing functions


@ -0,0 +1,77 @@
# Sandbox Module
Manages Daytona sandbox lifecycle and the OpenCode server running inside each sandbox.
## Three-Layer Architecture
1. **DaytonaService** (`daytona.ts`) — thin wrapper around `@daytonaio/sdk`. Creates/starts/stops/destroys sandboxes, executes commands, gets preview links. All methods return `Effect` with typed errors.
2. **SandboxProvisioner** (`provisioner.ts`) — orchestrates sandbox + OpenCode session lifecycle. Handles provision, resume, health checks, send-failure recovery.
3. **ThreadAgentPool** (`pool.ts`) — per-thread concurrency layer. Wraps provisioner with `ActorMap<ThreadId>` for serialized access per thread. Manages idle timeouts and cleanup loops.
## Sandbox Creation Flow
`provision()` uses `Effect.acquireUseRelease`:
- **acquire**: `daytonaService.create()` — creates sandbox with `Image.base("node:22-bookworm-slim")` + custom setup
- **use**: clones opencode repo, writes auth/config JSON via env vars, starts `opencode serve`, waits for health, creates session
- **release on failure**: destroys the sandbox (cleanup), marks session as errored
The `discordBotImage` in `daytona.ts` uses Daytona's `Image.base().runCommands().workdir()` builder — NOT a Dockerfile. It installs git, curl, gh CLI, opencode-ai, and bun globally.
## OpenCode Server Communication
`OpenCodeClient` (`opencode-client.ts`) uses `@effect/platform`'s `HttpClient`:
- Each request uses `scopedClient(preview)` which prepends the sandbox preview URL and adds `x-daytona-preview-token` header
- `HttpClient.filterStatusOk` auto-rejects non-2xx responses as `ResponseError`
- `mapErrors` helper converts `HttpClientError` + `ParseResult.ParseError``OpenCodeClientError`
- Health polling: `waitForHealthy` retries every 2s up to `maxWaitMs / 2000` attempts
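The health-polling rule can be sketched with an injected probe. This is a stand-in: the real client polls the OpenCode health endpoint through `@effect/platform`'s `HttpClient`, and this function's signature is an assumption:

```typescript
// Probe every interval until healthy or the attempt budget
// (maxWaitMs / intervalMs) is exhausted.
async function waitForHealthy(
  probe: () => Promise<boolean>,
  maxWaitMs: number,
  intervalMs = 2000,
): Promise<boolean> {
  const attempts = Math.max(1, Math.floor(maxWaitMs / intervalMs))
  for (let i = 0; i < attempts; i++) {
    if (await probe()) return true
    await new Promise((r) => setTimeout(r, intervalMs))
  }
  return false
}
```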
## `PreviewAccess` — The Connectivity Token
`PreviewAccess` (defined in `types.ts`) carries `previewUrl` + `previewToken`. It's extracted from Daytona's `getPreviewLink(4096)` response (port 4096 is OpenCode's serve port). The token may also be embedded in the URL as `?tkn=` — `parsePreview` normalizes this.
`PreviewAccess.from(source)` factory works with any object having those two fields — used with `SandboxHandle`, `SessionInfo`.
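A hypothetical sketch of that `?tkn=` normalization — the real `parsePreview` may differ in signature and edge cases; this shows the shape of the idea:

```typescript
// If the token is embedded in the preview URL, strip it out of the URL
// and surface it as the token. An explicitly supplied token takes priority.
function parsePreview(
  url: string,
  token?: string,
): { previewUrl: string; previewToken: string } {
  const u = new URL(url)
  const embedded = u.searchParams.get("tkn")
  if (embedded) u.searchParams.delete("tkn")
  return {
    previewUrl: u.toString(),
    previewToken: token ?? embedded ?? "",
  }
}
```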
## Resume Flow (Non-Obvious)
`provisioner.resume()` does NOT just restart. It:
1. Calls `daytonaService.start()` (re-starts the stopped Daytona sandbox)
2. Runs `restartOpenCodeServe` — a shell command that pkills old opencode processes and re-launches
3. Waits for health (120s default)
4. Calls `findOrCreateSessionId` — tries to find existing session by title (`Discord thread <threadId>`), creates new if not found
5. Returns `Resumed` or `ResumeFailed { allowRecreate }``allowRecreate: false` means "don't try recreating, something is fundamentally wrong"
## Send Failure Classification
`classifySendError` in provisioner maps HTTP status codes to recovery strategies:
- 404 → `session-missing` (session deleted, mark error)
- 0 or 5xx → `sandbox-down` (pause sandbox for later resume)
- body contains "sandbox not found" / "is the sandbox started" → `sandbox-down`
- anything else → `non-recoverable` (no automatic recovery)
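The table above maps directly to a classification function. A minimal sketch (status `0` stands for "no HTTP response at all", i.e. a connection failure; the exact signature is an assumption):

```typescript
type SendFailure = "session-missing" | "sandbox-down" | "non-recoverable"

function classifySendError(status: number, body: string): SendFailure {
  if (status === 404) return "session-missing" // session deleted upstream
  if (status === 0 || status >= 500) return "sandbox-down" // pause for later resume
  const b = body.toLowerCase()
  if (b.includes("sandbox not found") || b.includes("is the sandbox started"))
    return "sandbox-down"
  return "non-recoverable" // no automatic recovery
}
```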
## ThreadAgentPool — The ActorMap Bridge
`ThreadAgentPool` creates `ActorMap<ThreadId, SessionInfo>` with:
- `idleTimeout`: from config `sandboxTimeout` (default 30min)
- `onIdle`: pauses the sandbox and removes the actor
- `load`: reads from `SessionStore` on first access
- `save`: writes to `SessionStore` after state changes
`runtime(threadId, stateRef)` creates a `Runtime` object with `current/ensure/send/pause/destroy` methods. `runRuntime` submits work to the actor queue via `actors.run(threadId, (state) => ...)`.
## Background Cleanup Loop
Forked with `Effect.forkScoped` on `Schedule.spaced(config.cleanupInterval)`:
- Pauses stale-active sessions (no activity for `sandboxTimeout + graceMinutes`)
- Destroys expired-paused sessions (paused longer than `pausedTtlMinutes`)
## Files That Must Change Together
- Adding a new Daytona operation → `daytona.ts` + add error type in `errors.ts` if needed
- Changing sandbox setup (image, commands) → `daytona.ts` image builder + `provisioner.ts` exec commands
- Adding a new pool operation → `pool.ts` interface + wire into `conversation/services/conversation.ts`