Haulx Platform Admin

1. Platform Overview

Haulx Platform is a multi-tenant SaaS monorepo that produces two white-labeled products for hauling and recycling companies: an Ops Dashboard (dispatch, jobs, equipment, GPS/ELD, invoicing, AI) with a companion Mobile Crew App, and an optional E-commerce Storefront with its own admin panel. A shared Platform layer handles tenant provisioning, registry, and this admin UI.

The entire stack runs on Cloudflare's edge network -- Pages for static hosting + serverless functions, D1 for SQL, KV for session/cache, R2 for file storage, and Workers for scheduled tasks. There is no traditional origin server.

graph TB
  subgraph mono["haulx-platform monorepo"]
    direction TB
    subgraph apps["apps/ (npm workspaces)"]
      ops["apps/ops<br/>Ops Dashboard + Mobile"]
      store["apps/store<br/>Storefront + Store Admin"]
    end
    subgraph plat["platform/"]
      admin["platform/admin<br/>Platform Admin UI"]
      landing["platform/landing<br/>Marketing Landing"]
      provision["platform/provision<br/>Provisioning Worker"]
    end
    subgraph infra["scripts/ + .github/"]
      deploy["scripts/deploy.js<br/>Root deploy orchestrator"]
      deployAll["scripts/deploy-all.js<br/>Multi-tenant deploy"]
      gha[".github/workflows/<br/>CI/CD pipelines"]
    end
  end
  ops -->|"Cloudflare Pages"| cfOps["Per-Tenant<br/>Ops Dashboard"]
  store -->|"Cloudflare Pages"| cfStore["Per-Tenant<br/>Storefront + Admin"]
  admin -->|"Cloudflare Pages"| cfAdmin["Platform Admin"]
  provision -->|"Cloudflare Worker"| cfProv["Provisioning API"]
  landing -->|"Cloudflare Pages"| cfLand["Landing Page"]

Monorepo Structure

haulx-platform/
  apps/
    ops/                        -- @haulx-platform/ops (npm workspace)
      templates/                -- Dashboard_*.html, Mobile_*.html source files
      functions/                -- Pages Functions (api.js, _d1.js, _r2.js, etc.)
      d1/                       -- Schema, seed, migrations
      workers/cron/             -- Scheduled worker (ELD, Ecwid, archival)
      scripts/                  -- build.js, deploy.js, migrate.js
      dist/                     -- Build output (DO NOT EDIT)
      wrangler.toml             -- Pages config: D1, R2, KV bindings
    store/                      -- @haulx-platform/store (npm workspace)
      templates/                -- Storefront_*.html, Admin_*.html source files
      workers/                  -- storefront-api.js, admin-api.js, d1-backend.js
      d1/                       -- Store schema and seed
      scripts/                  -- build.js, deploy.js, setup-functions.js
      dist-storefront/          -- Storefront build output
      dist-admin/               -- Store admin build output
      wrangler-storefront.toml
      wrangler-admin.toml
  platform/
    admin/                      -- This admin UI (index.html + functions/api.js)
    landing/                    -- Static marketing site
    provision/                  -- Provisioning Worker (worker.js)
  scripts/
    deploy.js                   -- Orchestrates ops + store + cron deploy
    deploy-all.js               -- Multi-tenant deploy from registry
    migrate-all.js              -- Multi-tenant D1 migrations
  .github/workflows/
    deploy-ops.yml              -- CI: canary + deploy-all for ops
    deploy-store.yml            -- CI: canary + deploy-all for store
  package.json                  -- Root: workspaces, build/deploy scripts

2. Cloudflare Infrastructure

Every tenant runs entirely on the Cloudflare edge. There is no centralized origin server, no EC2, no container orchestration. Each Cloudflare product serves a specific role in the stack:

| Service | Role | Binding | Details |
| --- | --- | --- | --- |
| Pages | Static hosting + serverless functions | N/A | Serves HTML/CSS/JS. The functions/ directory auto-deploys as Pages Functions (like Workers, but scoped to the Pages project). Each tenant gets separate Pages projects for ops and, optionally, storefront + admin. |
| Pages Functions | Serverless API layer | functions/api.js | Handles POST /api with action-based routing. Has access to D1, KV, and R2 via context.env. Cold starts are sub-50ms at the edge. |
| D1 | SQLite database at the edge | DB | Per-tenant isolated database. Stores jobs, equipment, tickets, invoices, customers, etc. Replicated globally. Schema managed via d1/schema.sql. |
| KV | Key-value store for sessions/cache | KV | Auth sessions, magic-link tokens, feature flags, cached API responses. Eventually consistent with low-latency reads. |
| R2 | Object storage (S3-compatible) | BUCKET | Stores uploaded images (load tickets, fuel receipts, equipment photos). Zero egress fees. Accessible from Pages Functions. |
| Workers | Scheduled/background tasks | N/A | The haulx-platform-cron worker runs on cron triggers: log archival, ELD data sync, Ecwid order sync, RouteMate updates. |
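
The action-based routing mentioned above can be sketched as follows. This is an illustrative reconstruction, not the actual functions/api.js: the action names, SQL, and response shapes are assumptions, and the real file exports the handler as `export async function onRequestPost(context)`.

```javascript
// functions/api.js (sketch) -- action-based routing in a Pages Function.
// In the real file this is `export async function onRequestPost(context)`;
// the actions and SQL below are illustrative, not the actual API surface.
async function onRequestPost(context) {
  const { request, env } = context; // env carries the D1 (DB), KV, and R2 (BUCKET) bindings
  const { action, ...params } = await request.json();

  switch (action) {
    case "health":
      // Used by the CI canary health check.
      return Response.json({ ok: true });

    case "listJobs": {
      // Hypothetical query against the per-tenant D1 database.
      const { results } = await env.DB
        .prepare("SELECT id, status FROM jobs WHERE status = ?")
        .bind(params.status ?? "open")
        .all();
      return Response.json(results);
    }

    default:
      return Response.json({ error: `unknown action: ${action}` }, { status: 400 });
  }
}
```
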
graph LR
  subgraph edge["Cloudflare Edge (per tenant)"]
    pages["Pages Project<br/>(static assets)"]
    funcs["Pages Functions<br/>(serverless API)"]
    d1["D1 Database<br/>(SQLite)"]
    kv["KV Namespace<br/>(sessions/cache)"]
    r2["R2 Bucket<br/>(images/files)"]
    cron["Cron Worker<br/>(scheduled tasks)"]
  end
  browser["Browser"] -->|"GET /"| pages
  browser -->|"POST /api"| funcs
  funcs -->|"SQL queries"| d1
  funcs -->|"get/put sessions"| kv
  funcs -->|"upload/serve images"| r2
  cron -->|"scheduled jobs"| d1
  cron -->|"sync data"| kv
  subgraph shared["Shared Platform Resources"]
    registry["platform-registry D1"]
    provWorker["Provisioning Worker"]
    adminUI["Platform Admin"]
  end
  adminUI -->|"manage tenants"| registry
  provWorker -->|"create resources"| edge
  provWorker -->|"register tenant"| registry

Wrangler Configuration

Each app has its own wrangler.toml that declares resource bindings. The key files:

| Config File | Pages Project / Worker | Bindings |
| --- | --- | --- |
| apps/ops/wrangler.toml | haulx-dashboard | D1 haulx-dashboard-db, R2 haulx-images, KV |
| apps/ops/workers/cron/wrangler.toml | Worker haulx-platform-cron | Same D1 + KV, cron triggers |
| apps/store/wrangler-storefront.toml | haulx-online | D1 haulx-online-db, KV CACHE |
| apps/store/wrangler-admin.toml | haulx-admin | Same D1, R2 haulx-images, KV CACHE |
| platform/admin/wrangler.toml | haulx-platform-admin | D1 platform-registry |
| platform/provision/wrangler.toml | Worker haulx-platform-provisioning | D1 platform-registry, env vars |
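
For reference, an ops wrangler.toml following the bindings above would look roughly like this. The IDs are placeholders and the real file may declare additional settings:

```toml
# apps/ops/wrangler.toml -- illustrative shape only; IDs are placeholders.
name = "haulx-dashboard"
pages_build_output_dir = "dist"

[[d1_databases]]
binding = "DB"
database_name = "haulx-dashboard-db"
database_id = "00000000-0000-0000-0000-000000000000"

[[kv_namespaces]]
binding = "KV"
id = "00000000000000000000000000000000"

[[r2_buckets]]
binding = "BUCKET"
bucket_name = "haulx-images"
```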

3. Multi-Tenancy Model

Haulx uses a fully isolated, per-tenant resource model. Each tenant gets its own Cloudflare Pages project, D1 database, KV namespace, and R2 bucket. There is no shared application database (the platform-registry D1 holds only tenant metadata) -- tenants are physically isolated at the infrastructure level. This provides strong security boundaries and allows per-tenant scaling, monitoring, and teardown.

graph TB
  subgraph platform["Platform Layer"]
    registryDB["platform-registry D1<br/>(tenant metadata)"]
    provisionW["Provisioning Worker"]
    adminPanel["Platform Admin"]
    deployScript["deploy-all.js"]
  end
  subgraph tenantA["Tenant: Haulx Recycling"]
    pagesA["Pages: haulx-dashboard"]
    d1A["D1: haulx-dashboard-db"]
    kvA["KV: haulx-ops-kv"]
    r2A["R2: haulx-ops-assets"]
    storeA["Pages: haulx-online"]
    storeAdminA["Pages: haulx-admin"]
    storeD1A["D1: haulx-online-db"]
  end
  subgraph tenantB["Tenant: Acme Hauling"]
    pagesB["Pages: acme-dashboard"]
    d1B["D1: acme-ops-db"]
    kvB["KV: acme-ops-kv"]
    r2B["R2: acme-ops-assets"]
  end
  registryDB -->|"tracks"| tenantA
  registryDB -->|"tracks"| tenantB
  deployScript -->|"reads registry"| registryDB
  deployScript -->|"deploys to each"| tenantA
  deployScript -->|"deploys to each"| tenantB

Tenant Registry Schema

The platform-registry D1 database stores all tenant metadata. The provisioning worker writes to this on tenant creation, and deploy-all.js reads from it to deploy across all tenants.

| Column | Type | Purpose |
| --- | --- | --- |
| id | TEXT PK | UUID for the tenant |
| slug | TEXT UNIQUE | URL-safe identifier (derived from company name) |
| company_name | TEXT | Display name |
| owner_email | TEXT | Primary contact |
| industry | TEXT | Used for tagline/branding |
| status | TEXT | active or suspended |
| ops_domain | TEXT | Custom domain or {slug}.haulxplatform.com |
| ops_pages_project | TEXT | Cloudflare Pages project name for ops |
| ops_d1_database_id | TEXT | D1 database UUID |
| ops_kv_namespace_id | TEXT | KV namespace UUID |
| ops_r2_bucket | TEXT | R2 bucket name |
| store_enabled | BOOLEAN | Whether storefront is provisioned |
| store_domain | TEXT | Store custom domain (if enabled) |
| store_pages_project | TEXT | Storefront Pages project name |
| store_admin_pages_project | TEXT | Store admin Pages project name |
| store_d1_database_id | TEXT | Store D1 database UUID |
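
Based on the columns above, the tenants table would look roughly like this. This is an illustrative reconstruction; the actual schema may differ in constraints and defaults:

```sql
-- Sketch of the tenants table in platform-registry, from the column list above.
CREATE TABLE IF NOT EXISTS tenants (
  id                        TEXT PRIMARY KEY,       -- UUID
  slug                      TEXT UNIQUE NOT NULL,   -- URL-safe identifier
  company_name              TEXT NOT NULL,
  owner_email               TEXT NOT NULL,
  industry                  TEXT,
  status                    TEXT DEFAULT 'active',  -- 'active' or 'suspended'
  ops_domain                TEXT,
  ops_pages_project         TEXT,
  ops_d1_database_id        TEXT,
  ops_kv_namespace_id       TEXT,
  ops_r2_bucket             TEXT,
  store_enabled             INTEGER DEFAULT 0,      -- boolean (0/1) in SQLite
  store_domain              TEXT,
  store_pages_project       TEXT,
  store_admin_pages_project TEXT,
  store_d1_database_id      TEXT
);
```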

Build-Time Tenant Branding

Tenant identity is injected at build time via environment variables, not at runtime. Each tenant's code is built separately with its own values, producing a unique static bundle per tenant.
| Environment Variable | Injected Into | Example |
| --- | --- | --- |
| TENANT_NAME | Page titles, headers, branding | Haulx Recycling |
| TENANT_DOMAIN | API base URLs, links | dashboard.haulxrecycling.com |
| TENANT_TAGLINE | Mobile app subtitle | Haulage & Recycling Operations |
| PLATFORM_VERSION | Footer version badge | 2.1.0 |
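
A build-time substitution step consistent with this table might look like the following sketch. The {{PLACEHOLDER}} convention is an assumption; build.js may mark injection points differently:

```javascript
// Sketch of build-time branding substitution in a build script.
// The {{TENANT_NAME}}-style placeholder syntax is hypothetical.
function applyTenantBranding(html, env) {
  const vars = ["TENANT_NAME", "TENANT_DOMAIN", "TENANT_TAGLINE", "PLATFORM_VERSION"];
  // Replace every occurrence of each placeholder with the value from the
  // build environment; missing values become empty strings.
  return vars.reduce(
    (out, key) => out.replaceAll(`{{${key}}}`, env[key] ?? ""),
    html
  );
}
```
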

4. CI/CD Pipeline (GitHub Actions)

Deployment is fully automated via GitHub Actions. Pushing to main triggers path-based workflows that follow a canary-first deployment strategy: deploy to the Haulx tenant first, run a health check, and only then deploy to all other tenants.

flowchart TD
  push["Push to main branch"] --> pathCheck{"Path-based trigger"}
  pathCheck -->|"apps/ops/** changed"| opsWf["deploy-ops.yml"]
  pathCheck -->|"apps/store/** changed"| storeWf["deploy-store.yml"]

  opsWf --> canaryOps["Job: canary"]
  canaryOps --> buildOps["Build ops for Haulx tenant<br/>TENANT_NAME=Haulx<br/>TENANT_DOMAIN=dashboard.haulxrecycling.com"]
  buildOps --> deployCanary["wrangler pages deploy<br/>--project-name=haulx-dashboard"]
  deployCanary --> healthCheck["Health Check<br/>POST /api action=health"]
  healthCheck -->|"200 OK"| deployAllOps["Job: deploy-all<br/>node deploy-all.js --app ops --exclude haulx"]
  healthCheck -->|"Not 200"| failPipeline["Pipeline FAILS<br/>Other tenants NOT deployed"]
  deployAllOps --> readReg["Read tenant list from<br/>platform-registry D1"]
  readReg --> batchDeploy["Deploy in batches of 3<br/>Build per tenant, wrangler pages deploy"]

  storeWf --> canaryStore["Job: canary"]
  canaryStore --> buildStore["Build store for Haulx tenant"]
  buildStore --> deploySf["Deploy storefront + admin"]
  deploySf --> deployAllStore["Job: deploy-all<br/>node deploy-all.js --app store --exclude haulx"]
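
Put together, the canary-first shape of deploy-ops.yml might look roughly like this. The job and step details are illustrative, not the actual workflow; the secret names follow the Required Secrets table:

```yaml
# Illustrative sketch of the canary-first pipeline shape (not the real file).
name: deploy-ops
on:
  push:
    branches: [main]
    paths: ["apps/ops/**", "scripts/deploy-all.js"]

jobs:
  canary:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the ops app branded for the Haulx canary tenant.
      - run: npm ci && npm run build --workspace=@haulx-platform/ops
        env:
          TENANT_NAME: Haulx
          TENANT_DOMAIN: dashboard.haulxrecycling.com
      - run: npx wrangler pages deploy apps/ops/dist --project-name=haulx-dashboard
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
      # Fail the pipeline (and skip all other tenants) if the canary is unhealthy.
      - run: |
          curl -fsS -X POST https://dashboard.haulxrecycling.com/api \
            -H 'content-type: application/json' -d '{"action":"health"}'

  deploy-all:
    needs: canary
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: node scripts/deploy-all.js --app ops --exclude haulx
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          CLOUDFLARE_ACCOUNT_ID: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
          REGISTRY_D1_ID: ${{ secrets.REGISTRY_D1_ID }}
```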

Workflow Details

| Workflow | Trigger Path | Canary Project | Health Check |
| --- | --- | --- | --- |
| deploy-ops.yml | apps/ops/**, scripts/deploy-all.js | haulx-dashboard | POST https://dashboard.haulxrecycling.com/api with {"action":"health"} |
| deploy-store.yml | apps/store/**, scripts/deploy-all.js | haulx-storefront + haulx-admin | None (deploy-only) |

Required Secrets

| Secret | Used By | Purpose |
| --- | --- | --- |
| CLOUDFLARE_API_TOKEN | Both workflows | Wrangler authentication for Pages deploy |
| CLOUDFLARE_ACCOUNT_ID | Both workflows | Target Cloudflare account |
| REGISTRY_D1_ID | deploy-all job | D1 database ID for reading tenant list |

Smart Deploys (Local)

For local development, each app has a hash-based smart deploy system. The deploy scripts (apps/ops/scripts/deploy.js, apps/store/scripts/deploy.js) maintain a .deploy-cache.json file that stores SHA-256 hashes of all tracked source files. On each run:

flowchart LR
  run["npm run deploy"] --> loadCache["Load .deploy-cache.json"]
  loadCache --> hashFiles["SHA-256 hash<br/>all tracked source files"]
  hashFiles --> compare{"Any hash<br/>changed?"}
  compare -->|"No"| skip["Skip deploy<br/>(nothing changed)"]
  compare -->|"Yes"| identify["Identify changed targets<br/>(dashboard, mobile, etc.)"]
  identify --> incBuild["Incremental build<br/>--only=dashboard,mobile"]
  incBuild --> wrangler["wrangler pages deploy"]
  wrangler --> updateCache["Update .deploy-cache.json<br/>with new hashes"]

Use npm run deploy:force or pass --force to bypass hash checking and deploy everything regardless of changes.

5. Build Process

Each app has a custom scripts/build.js that assembles source templates into deployable HTML. The build system uses a server-side include pattern borrowed from Google Apps Script, where templates reference other templates via <?!= include('TemplateName'); ?> directives.
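
A recursive resolver for this directive can be sketched as follows. The real build.js is not shown here; the in-memory template map stands in for reading apps/ops/templates/*.html from disk:

```javascript
// Sketch of server-side include resolution for <?!= include('Name'); ?>.
// `templates` maps template names to their HTML source.
function resolveIncludes(name, templates, seen = new Set()) {
  if (seen.has(name)) throw new Error(`circular include: ${name}`);
  seen.add(name);
  const html = templates[name];
  if (html === undefined) throw new Error(`missing template: ${name}`);
  // Replace each include directive with that template's resolved content.
  return html.replace(
    /<\?!=\s*include\('([^']+)'\);\s*\?>/g,
    (_, child) => resolveIncludes(child, templates, new Set(seen))
  );
}
```
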

flowchart TD
  subgraph source["Source Templates (apps/ops/templates/)"]
    main["Dashboard.html<br/>(entry point)"]
    css["Dashboard_CSS.html"]
    header["Dashboard_Header.html"]
    jobs["Dashboard_Jobs.html"]
    dispatch["Dashboard_Dispatch.html"]
    jsFile["Dashboard_JS.html"]
    more["...18 more templates"]
  end
  main -->|"include"| css
  main -->|"include"| header
  main -->|"include"| jobs
  main -->|"include"| dispatch
  main -->|"include"| jsFile
  main -->|"include"| more
  subgraph buildStep["build.js processing"]
    resolve["Resolve all includes<br/>recursively"]
    inject["Inject google.script.run shim<br/>into head"]
    overlay["Inject crew-login overlay<br/>after body"]
    tenant["Replace TENANT_NAME,<br/>TENANT_DOMAIN, etc."]
  end
  main --> resolve
  resolve --> inject
  inject --> overlay
  overlay --> tenant
  subgraph output["Build Output (apps/ops/dist/)"]
    indexHtml["index.html"]
    mobileHtml["mobile.html"]
    estimateHtml["estimate.html"]
    manifest["manifest.json"]
    funcsDir["functions/ (copied)"]
  end
  tenant --> indexHtml
  tenant --> mobileHtml
  tenant --> estimateHtml
  tenant --> manifest
  tenant --> funcsDir

Ops Build Targets

| Target | Entry Template | Output File | Includes |
| --- | --- | --- | --- |
| dashboard | Dashboard.html | dist/index.html | Dashboard_CSS, Dashboard_Header, Dashboard_Equipment, Dashboard_Dumpsters, Dashboard_Dispatch, Dashboard_Jobs, Dashboard_JobForm, Dashboard_JS, Dashboard_AdminJS, Dashboard_AI, Dashboard_ReviewQueue, Dashboard_Invoicing, Dashboard_Fuel, Dashboard_Tracker, Dashboard_Admin, Dashboard_Estimates, Dashboard_Orders, Dashboard_Profitability, Dashboard_Messages |
| mobile | Mobile.html | dist/mobile.html | Mobile_CSS, Mobile_JS, Mobile_MyDay, Mobile_ActiveJob, Mobile_LoadTicket, Mobile_DumpTruckTicket, Mobile_Dumpsters, Mobile_DumpsterActions, Mobile_FuelReceipt, Mobile_ClosedTickets, Mobile_Tracker, Mobile_Estimate |
| estimate | Estimate_Public.html | dist/estimate.html | Standalone |
| manifest | Generated | dist/manifest.json | N/A |

Store Build Targets

| Target | Output Directory | Extra Processing |
| --- | --- | --- |
| storefront | dist-storefront/ | CSS/JS extraction + minification via clean-css and terser. Outputs hashed asset files (assets/storefront.&lt;hash&gt;.css, storefront.&lt;hash&gt;.js). Storefront API shim injected. |
| admin | dist-admin/ | Admin shell + admin partials assembled. Admin API shim injected. |

Injected Content (Ops)

The ops build.js injects two pieces of content that live in the build script itself, not in any template:

| Injection | Location | Purpose |
| --- | --- | --- |
| google.script.run shim | Inside &lt;head&gt; | Replaces Apps Script's server-call API with fetch('/api') calls. Allows the same codebase to run on both Apps Script and Cloudflare Pages. |
| Crew-login overlay | After &lt;body&gt; | Phone-number based crew authentication for the mobile app. Injected as an HTML overlay with its own styles and logic. |
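
A minimal shim of this kind might be built around a Proxy, sketched below. The real injected shim lives in build.js and its surface may differ; the handler chaining mirrors the google.script.run API:

```javascript
// Sketch of a google.script.run-style shim backed by fetch('/api').
// Payload shape ({action, args}) and chaining behavior are illustrative.
function makeScriptRunShim(fetchImpl = fetch) {
  let onSuccess = () => {};
  let onFailure = (e) => { throw e; };
  const base = {
    withSuccessHandler(fn) { onSuccess = fn; return proxy; },
    withFailureHandler(fn) { onFailure = fn; return proxy; },
  };
  // Any other property access becomes a server call:
  // shim.getJobs("open") -> POST /api {action:"getJobs", args:["open"]}.
  const proxy = new Proxy(base, {
    get(target, prop) {
      if (prop in target) return target[prop].bind(target);
      return (...args) =>
        fetchImpl("/api", {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify({ action: prop, args }),
        })
          .then((r) => r.json())
          .then(onSuccess, onFailure);
    },
  });
  return proxy;
}
```
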

6. Provisioning Flow

New tenants are created via the provisioning worker (platform/provision/worker.js). It creates all required Cloudflare resources via the Cloudflare API, runs the database schema and seed, and registers the tenant in the platform registry. If any step fails, it rolls back all previously created resources.

sequenceDiagram
  participant Client as Admin UI / API
  participant PW as Provisioning Worker
  participant CF as Cloudflare API
  participant Reg as platform-registry D1

  Client->>PW: POST /provision<br/>{company_name, owner_email, industry}
  PW->>PW: Validate input, generate slug
  PW->>Reg: Check slug uniqueness
  Reg-->>PW: OK (no conflict)
  rect rgb(30, 40, 30)
    Note over PW,CF: Resource Creation (with rollback)
    PW->>CF: Create D1 database ({slug}-ops-db)
    CF-->>PW: database_id
    PW->>CF: Run schema SQL
    PW->>CF: Run seed SQL (admin user, defaults)
    PW->>CF: Create KV namespace ({slug}-ops-kv)
    CF-->>PW: namespace_id
    PW->>CF: Create R2 bucket ({slug}-ops-assets)
    CF-->>PW: bucket created
    PW->>CF: Create Pages project ({slug}-dashboard)
    CF-->>PW: project created
  end
  PW->>Reg: INSERT INTO tenants<br/>(slug, ids, domain, status=active)
  PW-->>Client: 201 {tenant_id, slug, domain}

Provisioning Worker Config

| Binding / Variable | Source | Purpose |
| --- | --- | --- |
| REGISTRY_DB | wrangler.toml D1 binding | Read/write tenant records |
| CF_ACCOUNT_ID | wrangler.toml var | Target Cloudflare account for resource creation |
| CF_API_TOKEN | Wrangler secret | Auth token with D1/KV/R2/Pages create permissions |

Rollback Behavior

If any resource creation step fails, the worker calls cleanupResources(), which deletes the resources created so far in reverse creation order (so the Pages project goes first, then the R2 bucket, KV namespace, and D1 database). This prevents orphaned resources from accumulating in the Cloudflare account.
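
The create-then-rollback control flow can be sketched generically as below. The step objects are illustrative; the real worker's steps call the Cloudflare REST API:

```javascript
// Sketch of create-with-rollback as described for the provisioning worker.
// Each step is {create, destroy}; real steps would hit the Cloudflare API.
async function provisionWithRollback(steps) {
  const created = [];
  try {
    for (const step of steps) {
      created.push({ step, result: await step.create() });
    }
    return created.map((c) => c.result);
  } catch (err) {
    // Tear down in reverse creation order; a failed destroy should be
    // logged but must not stop the rest of the cleanup.
    for (const { step } of created.reverse()) {
      try { await step.destroy(); } catch { /* log and continue */ }
    }
    throw err; // surface the original failure to the caller
  }
}
```
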

Default Tenant Domain

Newly provisioned tenants get a default domain of {slug}.haulxplatform.com. Custom domains can be configured later via the Cloudflare dashboard or API.
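
The registry describes the slug as a URL-safe identifier derived from the company name. One plausible derivation is sketched below; the worker's exact rules may differ:

```javascript
// Hypothetical slug derivation: lowercase, collapse runs of anything that
// is not a-z/0-9 into single hyphens, and trim edge hyphens.
function slugify(companyName) {
  return companyName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}
```
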

7. Store Enablement

The storefront is an optional add-on for tenants. It is not provisioned by default -- it must be explicitly enabled via the Platform Admin UI or API. Enabling a store creates additional Cloudflare resources (separate Pages projects and D1 database) for the storefront and store admin.

flowchart TD
  trigger["Admin clicks 'Enable Storefront'"] --> apiCall["POST /api<br/>action: enableStorefront, slug"]
  apiCall --> provCall["Platform Admin calls<br/>Provisioning Worker<br/>POST /provision-store"]
  provCall --> createD1["Create D1: {slug}-store-db<br/>Run store schema + seed"]
  provCall --> createKV["Create KV: {slug}-store-kv"]
  createD1 --> createSfPages["Create Pages: {slug}-storefront"]
  createKV --> createAdminPages["Create Pages: {slug}-store-admin"]
  createSfPages --> updateReg["Update registry:<br/>store_enabled = true<br/>store_pages_project, store_d1_database_id, etc."]
  createAdminPages --> updateReg
  updateReg --> done["Tenant now has ops + store"]

Store Deploy Architecture

The store uses an unusual deployment pattern: the storefront and store admin share worker modules but need different Pages Functions. The deploy script therefore temporarily creates a functions/ directory, deploys, then removes it:

flowchart LR
  subgraph sfDeploy["Storefront Deploy"]
    sf1["setupStorefrontFunctions()"]
    sf2["Create temp functions/<br/>- storefront-api.js<br/>- _shared/ (d1-backend, d1-router, kv-cache)"]
    sf3["wrangler pages deploy<br/>dist-storefront/<br/>--project-name=haulx-online"]
    sf4["cleanFunctions()<br/>Remove temp functions/"]
    sf1 --> sf2 --> sf3 --> sf4
  end
  subgraph adminDeploy["Store Admin Deploy"]
    ad1["setupAdminFunctions()"]
    ad2["Create temp functions/<br/>- api/[[path]].js (admin-api)<br/>- _shared/"]
    ad3["wrangler pages deploy<br/>dist-admin/<br/>--project-name=haulx-admin"]
    ad4["cleanFunctions()"]
    ad1 --> ad2 --> ad3 --> ad4
  end

Store Worker Files

| Worker File | Used By | Purpose |
| --- | --- | --- |
| workers/storefront-api.js | Storefront | Product catalog, cart, checkout, blog APIs |
| workers/admin-api.js | Store Admin | Product CRUD, order management, analytics |
| workers/d1-backend.js | Both | Shared D1 query helpers |
| workers/d1-router.js | Both | Route-based D1 query dispatch |
| workers/kv-cache.js | Both | KV-based response caching layer |
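
A KV-backed read-through cache of the kind workers/kv-cache.js provides can be sketched as below. The function name and TTL handling are assumptions, not the module's actual API:

```javascript
// Sketch of a KV read-through cache: return the cached value if present,
// otherwise compute it, store it with a TTL, and return it.
async function cached(kv, key, ttlSeconds, compute) {
  const hit = await kv.get(key, { type: "json" });
  if (hit !== null) return hit;
  const value = await compute();
  await kv.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  return value;
}
```
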