Current State: ALPHA - Use at your own risk / Work in Progress

Self-Hosting Guide

Deploy Eryxon Flow on your own infrastructure with full control.

The fastest way to get a production-ready deployment is our automated script. Prerequisites:

  • Node.js 20+
  • Git
  • A Supabase project (supabase.com - free tier available)
  • Your Supabase credentials
# Clone the repository
git clone https://github.com/SheetMetalConnect/eryxon-flow.git
cd eryxon-flow
# Set your database password
export SUPABASE_DB_PASSWORD='your-database-password'
# Run automated setup
chmod +x scripts/automate_self_hosting.sh
./scripts/automate_self_hosting.sh

The script will automatically:

  1. Install required dependencies (Node.js packages)
  2. Install Supabase CLI globally (if not present)
  3. Fix configuration issues
  4. Link your Supabase project
  5. Apply database migrations (schema + seed)
  6. Deploy all Edge Functions
  7. Run verification checks
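The script expects Node.js 20 or newer. As a standalone sketch of that pre-flight check (`nodeOk` is a hypothetical helper, not part of the repository):

```javascript
// Sketch: check that a Node.js version string satisfies the 20+ prerequisite.
// nodeOk() is illustrative only, not part of the setup script.
function nodeOk(version) {
  return Number(version.split(".")[0]) >= 20;
}

console.log(nodeOk("20.11.1")); // true — a Node.js 20+ version passes
console.log(nodeOk("18.19.0")); // false — older versions should be upgraded first
```

Run `nodeOk(process.versions.node)` to test the Node.js you have installed.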
npm run dev
# Open http://localhost:5173
  1. Navigate to the application
  2. Click Sign Up
  3. Enter email and password
  4. First user automatically becomes admin with a new tenant

Use the manual setup below for custom configurations or troubleshooting.

  1. Go to supabase.com → New Project
  2. Save these credentials from Settings → API:
    • Project URL: https://yourproject.supabase.co
    • Project ID: The subdomain (e.g., yourproject)
    • Anon key: Public key for frontend
    • Service role key: Secret key for backend
    • Database password: From project creation
# Copy example file
cp .env.example .env

Edit .env:

VITE_SUPABASE_URL="https://yourproject.supabase.co"
VITE_SUPABASE_PUBLISHABLE_KEY="your-anon-key"
VITE_SUPABASE_PROJECT_ID="yourproject"
# Optional: For database scripts
SUPABASE_SERVICE_ROLE_KEY="your-service-role-key"
SUPABASE_DB_PASSWORD="your-database-password"
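These variables are read at build time, so a missing one typically surfaces only as a broken app later. A hedged sketch of failing fast instead (`requireEnv` is a hypothetical helper; the values are the placeholders from above):

```javascript
// Sketch: fail fast when a required VITE_* variable is missing.
// requireEnv() is illustrative only, not part of the codebase.
function requireEnv(env, names) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return names.map((name) => env[name]);
}

const [url, anonKey, projectId] = requireEnv(
  {
    VITE_SUPABASE_URL: "https://yourproject.supabase.co",
    VITE_SUPABASE_PUBLISHABLE_KEY: "your-anon-key",
    VITE_SUPABASE_PROJECT_ID: "yourproject",
  },
  ["VITE_SUPABASE_URL", "VITE_SUPABASE_PUBLISHABLE_KEY", "VITE_SUPABASE_PROJECT_ID"]
);
console.log(projectId); // yourproject
```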
# Install Supabase CLI
npm install -g supabase
# Login and link
supabase login
supabase link --project-ref yourproject
# Push all migrations
supabase db push
# Verify migrations applied
supabase migration list

Creates storage buckets, RLS policies, and cron jobs:

# Option A: Using Supabase CLI
supabase db execute --file supabase/seed.sql
# Option B: Via Dashboard
# Go to SQL Editor, paste seed.sql content, and execute
# Deploy all functions
supabase functions deploy
# Set required secrets
supabase secrets set \
SUPABASE_URL="https://yourproject.supabase.co" \
SUPABASE_SERVICE_ROLE_KEY="your-service-role-key"
# Install dependencies
npm ci
# Development mode
npm run dev
# Production build
npm run build
npm run preview

# Pull latest image
docker pull ghcr.io/sheetmetalconnect/eryxon-flow:latest
# Run container
docker run -d \
-p 80:80 \
--name eryxon-flow \
--restart unless-stopped \
ghcr.io/sheetmetalconnect/eryxon-flow:latest

Note: Pre-built images have demo Supabase credentials. For production, build your own image.

docker build \
--build-arg VITE_SUPABASE_URL="https://yourproject.supabase.co" \
--build-arg VITE_SUPABASE_PUBLISHABLE_KEY="your-anon-key" \
--build-arg VITE_SUPABASE_PROJECT_ID="yourproject" \
-t eryxon-flow .
docker run -d -p 80:80 --name eryxon-flow eryxon-flow

Create docker-compose.yml:

version: '3.8'
services:
  eryxon-flow:
    image: ghcr.io/sheetmetalconnect/eryxon-flow:latest
    # Or use your custom build:
    # build:
    #   context: .
    #   args:
    #     VITE_SUPABASE_URL: ${VITE_SUPABASE_URL}
    #     VITE_SUPABASE_PUBLISHABLE_KEY: ${VITE_SUPABASE_PUBLISHABLE_KEY}
    #     VITE_SUPABASE_PROJECT_ID: ${VITE_SUPABASE_PROJECT_ID}
    container_name: eryxon-flow
    restart: unless-stopped
    ports:
      - "80:80"
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3

Start:

docker compose up -d

Includes Caddy reverse proxy for automatic HTTPS:

version: '3.8'
services:
  app:
    image: ghcr.io/sheetmetalconnect/eryxon-flow:latest
    container_name: eryxon-flow
    restart: unless-stopped
    expose:
      - "80"
  caddy:
    image: caddy:alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - app
volumes:
  caddy_data:
  caddy_config:

Create Caddyfile:

your-domain.com {
    reverse_proxy app:80
    header {
        X-Frame-Options "DENY"
        X-Content-Type-Options "nosniff"
        X-XSS-Protection "1; mode=block"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}

Save the above Docker Compose configuration as docker-compose.prod.yml, then deploy:

docker compose -f docker-compose.prod.yml up -d

Cloudflare Pages is best for edge deployment with a global CDN.

  1. Connect Repository

  2. Configure Build

    • Build command: npm run build
    • Output directory: dist
  3. Set Environment Variables

    VITE_SUPABASE_URL=https://yourproject.supabase.co
    VITE_SUPABASE_PUBLISHABLE_KEY=your-anon-key
    VITE_SUPABASE_PROJECT_ID=yourproject
  4. Deploy

    • Cloudflare handles SSL, CDN, and global distribution automatically

Enable automated email invitations:

supabase secrets set \
RESEND_API_KEY="re_your_api_key" \
APP_URL="https://your-domain.com" \
EMAIL_FROM="Eryxon <noreply@your-domain.com>"

Add bot protection to auth forms:

  1. Create widget at Cloudflare Turnstile
  2. Add to .env:
    VITE_TURNSTILE_SITE_KEY="your-site-key"
  3. Configure secret key in Supabase Authentication → Captcha Protection

Improve Edge Function performance:

supabase secrets set \
UPSTASH_REDIS_REST_URL="https://your-redis.upstash.io" \
UPSTASH_REDIS_REST_TOKEN="your-token"
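Edge Functions reach Upstash over its REST API; the underlying pattern is cache-aside. A minimal sketch with an in-memory Map standing in for Redis (the helper, key, and TTL are illustrative, not the project's actual implementation):

```javascript
// Sketch: cache-aside lookup. A Map stands in for Upstash Redis here.
async function cached(store, key, ttlMs, compute) {
  const hit = store.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await compute();                           // cache miss: recompute
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

const store = new Map();
cached(store, "jobs:list", 60_000, async () => "fresh-from-db")
  .then((value) => console.log(value)); // fresh-from-db
```

The second lookup within the TTL returns the stored value without recomputing, which is the latency win Redis provides across function invocations.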

For server-side CAD file processing (optional):

VITE_CAD_SERVICE_URL="https://your-cad-service.example.com"
VITE_CAD_SERVICE_API_KEY="your-api-key"

If not configured, browser-based processing is used.
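That fallback decision can be sketched as a small selector (the function name and env shape are assumptions, not the app's actual code):

```javascript
// Sketch: use server-side CAD processing only when both variables are set.
function cadBackend(env) {
  const configured = env.VITE_CAD_SERVICE_URL && env.VITE_CAD_SERVICE_API_KEY;
  return configured ? "server" : "browser";
}

console.log(cadBackend({})); // browser
console.log(
  cadBackend({
    VITE_CAD_SERVICE_URL: "https://your-cad-service.example.com",
    VITE_CAD_SERVICE_API_KEY: "your-api-key",
  })
); // server
```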

The MCP server is NOT part of the deployment stack. It’s an optional local tool for Claude Desktop integration.

What it does:

  • Allows Claude Desktop to interact with your database using natural language
  • Provides 55 tools for managing jobs, parts, and operations via AI

Quick start:

cd mcp-server
npm install && npm run build
export SUPABASE_URL="https://your-project.supabase.co"
export SUPABASE_SERVICE_KEY="your-service-key"
npm start

Complete setup instructions: See MCP Setup Guide for:

  • Local development setup
  • Cloud deployment (Railway, Fly.io, Docker)
  • Claude Desktop configuration
  • All 55 available tools

Note: Your self-hosted application works perfectly without the MCP server. It’s only for developers who want AI assistant integration via Claude Desktop.


Run the verification script to check your setup:

bash scripts/verify-setup.sh

Checks:

  • ✅ Environment variables
  • ✅ Supabase connectivity
  • ✅ Database tables
  • ✅ Storage buckets (see note below)
  • ✅ Dependencies
  • ✅ Production build

Note: Storage bucket check may report FAIL (HTTP 400) even when buckets exist. This is expected because the buckets are private (public: false) and the verification script uses the Anon Key, which cannot list private buckets. Verify manually via SQL:

SELECT * FROM storage.buckets;

Required buckets: parts-images, issues, parts-cad, batch-images
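If you capture the bucket IDs from that query, diffing them against the required set is straightforward. A sketch (the helper name is illustrative):

```javascript
// Sketch: report which required buckets are missing from a bucket listing.
const REQUIRED_BUCKETS = ["parts-images", "issues", "parts-cad", "batch-images"];

function missingBuckets(existingIds) {
  return REQUIRED_BUCKETS.filter((id) => !existingIds.includes(id));
}

console.log(missingBuckets(["parts-images", "issues"])); // [ 'parts-cad', 'batch-images' ]
```

An empty result means all four buckets exist and the FAIL from the verification script can be ignored.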


git pull origin main
npm ci
supabase db push
supabase functions deploy
npm run build
# For Docker:
docker compose build --no-cache
docker compose up -d

  1. Environment Variables

    • Always use VITE_SUPABASE_PROJECT_ID rather than hardcoding the project ID
    • Template literals need backticks, not quotes: `https://${var}`
    • Validate all environment variables before using them
  2. Database Migrations

    • Always run migrations in order
    • The 20260127235000_enhance_batch_management.sql migration adds:
      • blocked status to batch_status enum
      • parent_batch_id column for batch nesting
      • nesting_image_url and layout_image_url columns
    • Never skip migrations
  3. Storage Buckets

    • Private buckets require signed URLs (not public URLs)
    • We use createSignedUrl() with 1-year expiry for batch images
    • Buckets needed: parts-images, issues, parts-cad, batch-images
  4. Edge Functions

    • Must be redeployed after code changes
    • Check logs if APIs return 502: supabase functions logs
    • Verify secrets are set: supabase secrets list
    • If experiencing 15s+ timeouts or cold start issues:
      • Functions use consolidated handlers to avoid deep module resolution
      • Import map (import_map.json) enables @shared/* path aliases
      • Circular dependencies in _shared/ folder can cause startup delays
  5. SQL Syntax

    • ✅ Use IF EXISTS ... THEN ... END IF blocks
    • ❌ Don’t use PERFORM ... WHERE EXISTS (invalid syntax)
  6. Authentication Trigger

    • The on_auth_user_created trigger must exist on auth.users
    • Without it, new signups won’t get profiles/tenants
    • Migration 20260127232000_add_missing_auth_trigger.sql ensures this
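The signed-URL pattern from the storage notes above can be sketched with supabase-js v2 (the bucket and helper name are examples; the one-year expiry matches the note):

```javascript
// Sketch: request a signed URL for a private bucket (supabase-js v2 API).
const ONE_YEAR_SECONDS = 60 * 60 * 24 * 365;

async function signedBatchImageUrl(supabase, path) {
  const { data, error } = await supabase.storage
    .from("batch-images")
    .createSignedUrl(path, ONE_YEAR_SECONDS); // expiry is given in seconds
  if (error) throw error;
  return data.signedUrl;
}

console.log(ONE_YEAR_SECONDS); // 31536000
```

Unlike `getPublicUrl()`, this works for buckets created with `public: false`, which is why the migrations rely on it.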
  • .env file is in .gitignore (never commit)
  • Service role key is kept secret
  • Database password is strong (16+ characters)
  • RLS policies are applied (via migrations)
  • Storage bucket policies restrict access properly
  • HTTPS is enabled in production (use Caddy or Cloudflare)
  • Enable Redis caching for high-traffic deployments
  • Use Cloudflare Pages for global edge distribution
  • Configure proper database indexes (included in migrations)
  • Monitor Edge Function execution times in Supabase dashboard

If URLs aren’t interpolating correctly, you’re using single quotes instead of backticks:

// Wrong
const url = 'https://${projectId}.supabase.co';
// Correct
const url = `https://${projectId}.supabase.co`;

Private buckets require signed URLs, not public URLs. Use createSignedUrl() with appropriate expiry.

The on_auth_user_created trigger must exist on auth.users. Without it, new signups won’t get profiles/tenants. Migration 20260127232000_add_missing_auth_trigger.sql ensures this.

Import map might not be deployed. Redeploy all functions:

supabase functions deploy

Migrations Fail with “Type Already Exists”


Database has partial state from previous attempts. For fresh setups only (DESTRUCTIVE):

DROP SCHEMA IF EXISTS public CASCADE;
CREATE SCHEMA public;
GRANT USAGE ON SCHEMA public TO postgres, anon, authenticated, service_role;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO service_role;

Then re-run: supabase db push

Test the health endpoint:

curl https://yourproject.supabase.co/functions/v1/api-jobs \
-H "Authorization: Bearer YOUR_ANON_KEY"

Should return JSON (not 404/502).
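A small sketch of interpreting the status code (the thresholds reflect the notes above, not a documented contract):

```javascript
// Sketch: 404 usually means the function isn't deployed; 502 means it crashed on startup.
function edgeFunctionReachable(status) {
  return status !== 404 && status !== 502;
}

console.log(edgeFunctionReachable(200)); // true
console.log(edgeFunctionReachable(502)); // false — check `supabase functions logs`
```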

Verify pg_cron extension is enabled:

SELECT * FROM pg_extension WHERE extname = 'pg_cron';

If empty, run seed.sql to schedule jobs.