| Slug | Company | Status | Health | Domain | Store | Created |
|---|---|---|---|---|---|---|
Click a row to view details and resource stats.
| Company | Phone | Industry | Slug | Requested | Actions |
|---|---|---|---|---|---|
No pending requests.
Calls the provisioning worker POST /provision.
Haulx Platform is a multi-tenant SaaS monorepo that produces two white-labeled products for hauling and recycling companies: an Ops Dashboard (dispatch, jobs, equipment, GPS/ELD, invoicing, AI) with a companion Mobile Crew App, and an optional E-commerce Storefront with its own admin panel. A shared Platform layer handles tenant provisioning, registry, and this admin UI.
The entire stack runs on Cloudflare's edge network -- Pages for static hosting + serverless functions, D1 for SQL, KV for session/cache, R2 for file storage, and Workers for scheduled tasks. There is no traditional origin server.
```mermaid
graph TB
    subgraph mono["haulx-platform monorepo"]
        direction TB
        subgraph apps["apps/ (npm workspaces)"]
            ops["apps/ops<br/>Ops Dashboard + Mobile"]
            store["apps/store<br/>Storefront + Store Admin"]
        end
        subgraph plat["platform/"]
            admin["platform/admin<br/>Platform Admin UI"]
            landing["platform/landing<br/>Marketing Landing"]
            provision["platform/provision<br/>Provisioning Worker"]
        end
        subgraph infra["scripts/ + .github/"]
            deploy["scripts/deploy.js<br/>Root deploy orchestrator"]
            deployAll["scripts/deploy-all.js<br/>Multi-tenant deploy"]
            gha[".github/workflows/<br/>CI/CD pipelines"]
        end
    end
    ops -->|"Cloudflare Pages"| cfOps["Per-Tenant<br/>Ops Dashboard"]
    store -->|"Cloudflare Pages"| cfStore["Per-Tenant<br/>Storefront + Admin"]
    admin -->|"Cloudflare Pages"| cfAdmin["Platform Admin"]
    provision -->|"Cloudflare Worker"| cfProv["Provisioning API"]
    landing -->|"Cloudflare Pages"| cfLand["Landing Page"]
```
Every tenant runs entirely on the Cloudflare edge. There is no centralized origin server, no EC2, no container orchestration. Each Cloudflare product serves a specific role in the stack:
| Service | Role | Binding | Details |
|---|---|---|---|
| Pages | Static hosting + serverless functions | N/A | Serves HTML/CSS/JS. functions/ directory auto-deploys as Pages Functions (like Workers but scoped to the Pages project). Each tenant gets separate Pages projects for ops and optionally storefront + admin. |
| Pages Functions | Serverless API layer | functions/api.js | Handles POST /api with action-based routing. Has access to D1, KV, R2 via context.env. Cold starts are sub-50ms at the edge. |
| D1 | SQLite database at the edge | DB | Per-tenant isolated database. Stores jobs, equipment, tickets, invoices, customers, etc. Replicated globally. Schema managed via d1/schema.sql. |
| KV | Key-value store for sessions/cache | KV | Auth sessions, magic-link tokens, feature flags, cached API responses. Eventually-consistent with low-latency reads. |
| R2 | Object storage (S3-compatible) | BUCKET | Stores uploaded images (load tickets, fuel receipts, equipment photos). Zero egress fees. Accessible from Pages Functions. |
| Workers | Scheduled/background tasks | N/A | haulx-platform-cron worker runs on cron triggers: log archival, ELD data sync, Ecwid order sync, RouteMate updates. |
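As an illustration of the action-based routing described above, here is a minimal sketch of a Pages Function in the style of functions/api.js. The handler names (health, listJobs) and the dispatch-table shape are illustrative assumptions, not the app's actual API surface; in the real file the entry point would be exported (export async function onRequestPost).

```javascript
// Minimal sketch of action-based routing for a Pages Function.
// Handler names below are illustrative, not the real API surface.
const handlers = {
  health: async () => ({ ok: true }),
  // Example of a D1-backed action using the DB binding from context.env:
  listJobs: async (env) => (await env.DB.prepare('SELECT * FROM jobs').all()).results,
};

async function onRequestPost(context) {
  // Every client call is POST /api with an `action` field in the JSON body.
  const { action, ...payload } = await context.request.json();
  const handler = handlers[action];
  if (!handler) {
    return new Response(JSON.stringify({ error: `unknown action: ${action}` }), {
      status: 400,
      headers: { 'Content-Type': 'application/json' },
    });
  }
  const result = await handler(context.env, payload);
  return new Response(JSON.stringify(result), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```

With this shape, adding an endpoint means adding one entry to the dispatch table rather than a new route.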
```mermaid
graph LR
    subgraph edge["Cloudflare Edge (per tenant)"]
        pages["Pages Project<br/>(static assets)"]
        funcs["Pages Functions<br/>(serverless API)"]
        d1["D1 Database<br/>(SQLite)"]
        kv["KV Namespace<br/>(sessions/cache)"]
        r2["R2 Bucket<br/>(images/files)"]
        cron["Cron Worker<br/>(scheduled tasks)"]
    end
    browser["Browser"] -->|"GET /"| pages
    browser -->|"POST /api"| funcs
    funcs -->|"SQL queries"| d1
    funcs -->|"get/put sessions"| kv
    funcs -->|"upload/serve images"| r2
    cron -->|"scheduled jobs"| d1
    cron -->|"sync data"| kv
    subgraph shared["Shared Platform Resources"]
        registry["platform-registry D1"]
        provWorker["Provisioning Worker"]
        adminUI["Platform Admin"]
    end
    adminUI -->|"manage tenants"| registry
    provWorker -->|"create resources"| edge
    provWorker -->|"register tenant"| registry
```
Each app has its own wrangler.toml that declares resource bindings. The key files:
| Config File | Pages Project | Bindings |
|---|---|---|
| apps/ops/wrangler.toml | haulx-dashboard | D1 haulx-dashboard-db, R2 haulx-images, KV |
| apps/ops/workers/cron/wrangler.toml | Worker haulx-platform-cron | Same D1 + KV, cron triggers |
| apps/store/wrangler-storefront.toml | haulx-online | D1 haulx-online-db, KV CACHE |
| apps/store/wrangler-admin.toml | haulx-admin | Same D1, R2 haulx-images, KV CACHE |
| platform/admin/wrangler.toml | haulx-platform-admin | D1 platform-registry |
| platform/provision/wrangler.toml | Worker haulx-platform-provisioning | D1 platform-registry, env vars |
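For illustration, a per-tenant ops config might look like the following sketch. The binding and resource names follow the table above; the IDs are placeholders, and the exact file contents are an assumption:

```toml
# Hypothetical apps/ops/wrangler.toml sketch -- IDs are placeholders.
name = "haulx-dashboard"
pages_build_output_dir = "dist"

[[d1_databases]]
binding = "DB"                      # exposed as context.env.DB
database_name = "haulx-dashboard-db"
database_id = "00000000-0000-0000-0000-000000000000"

[[kv_namespaces]]
binding = "KV"                      # sessions, magic-link tokens, cache
id = "00000000000000000000000000000000"

[[r2_buckets]]
binding = "BUCKET"                  # uploaded images
bucket_name = "haulx-images"
```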
Haulx uses a fully isolated, per-tenant resource model. Each tenant gets their own Cloudflare Pages project, D1 database, KV namespace, and R2 bucket. There is no shared database -- tenants are physically isolated at the infrastructure level. This provides strong security boundaries and allows per-tenant scaling, monitoring, and teardown.
```mermaid
graph TB
    subgraph platform["Platform Layer"]
        registryDB["platform-registry D1<br/>(tenant metadata)"]
        provisionW["Provisioning Worker"]
        adminPanel["Platform Admin"]
        deployScript["deploy-all.js"]
    end
    subgraph tenantA["Tenant: Haulx Recycling"]
        pagesA["Pages: haulx-dashboard"]
        d1A["D1: haulx-dashboard-db"]
        kvA["KV: haulx-ops-kv"]
        r2A["R2: haulx-ops-assets"]
        storeA["Pages: haulx-online"]
        storeAdminA["Pages: haulx-admin"]
        storeD1A["D1: haulx-online-db"]
    end
    subgraph tenantB["Tenant: Acme Hauling"]
        pagesB["Pages: acme-dashboard"]
        d1B["D1: acme-ops-db"]
        kvB["KV: acme-ops-kv"]
        r2B["R2: acme-ops-assets"]
    end
    registryDB -->|"tracks"| tenantA
    registryDB -->|"tracks"| tenantB
    deployScript -->|"reads registry"| registryDB
    deployScript -->|"deploys to each"| tenantA
    deployScript -->|"deploys to each"| tenantB
```
The platform-registry D1 database stores all tenant metadata. The provisioning worker writes to this on tenant creation, and deploy-all.js reads from it to deploy across all tenants.
| Column | Type | Purpose |
|---|---|---|
| id | TEXT PK | UUID for the tenant |
| slug | TEXT UNIQUE | URL-safe identifier (derived from company name) |
| company_name | TEXT | Display name |
| owner_email | TEXT | Primary contact |
| industry | TEXT | Used for tagline/branding |
| status | TEXT | active or suspended |
| ops_domain | TEXT | Custom domain or {slug}.haulxplatform.com |
| ops_pages_project | TEXT | Cloudflare Pages project name for ops |
| ops_d1_database_id | TEXT | D1 database UUID |
| ops_kv_namespace_id | TEXT | KV namespace UUID |
| ops_r2_bucket | TEXT | R2 bucket name |
| store_enabled | BOOLEAN | Whether storefront is provisioned |
| store_domain | TEXT | Store custom domain (if enabled) |
| store_pages_project | TEXT | Storefront Pages project name |
| store_admin_pages_project | TEXT | Store admin Pages project name |
| store_d1_database_id | TEXT | Store D1 database UUID |
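From the column list above, the tenants table DDL can be sketched roughly as follows. This is a reconstruction for illustration; the actual schema's constraints and defaults may differ:

```sql
-- Hypothetical sketch of the platform-registry tenants table,
-- reconstructed from the column list above; constraints are assumptions.
CREATE TABLE IF NOT EXISTS tenants (
  id                        TEXT PRIMARY KEY,      -- UUID
  slug                      TEXT UNIQUE NOT NULL,  -- URL-safe identifier
  company_name              TEXT NOT NULL,
  owner_email               TEXT NOT NULL,
  industry                  TEXT,
  status                    TEXT NOT NULL DEFAULT 'active',  -- active | suspended
  ops_domain                TEXT,
  ops_pages_project         TEXT,
  ops_d1_database_id        TEXT,
  ops_kv_namespace_id       TEXT,
  ops_r2_bucket             TEXT,
  store_enabled             BOOLEAN NOT NULL DEFAULT 0,
  store_domain              TEXT,
  store_pages_project       TEXT,
  store_admin_pages_project TEXT,
  store_d1_database_id      TEXT
);
```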
| Environment Variable | Injected Into | Example |
|---|---|---|
| TENANT_NAME | Page titles, headers, branding | Haulx Recycling |
| TENANT_DOMAIN | API base URLs, links | dashboard.haulxrecycling.com |
| TENANT_TAGLINE | Mobile app subtitle | Haulage & Recycling Operations |
| PLATFORM_VERSION | Footer version badge | 2.1.0 |
Deployment is fully automated via GitHub Actions. Pushing to main triggers path-based workflows that follow a canary-first deployment strategy: deploy to the Haulx tenant first, run a health check, and only then deploy to all other tenants.
```mermaid
flowchart TD
    push["Push to main branch"] --> pathCheck{"Path-based trigger"}
    pathCheck -->|"apps/ops/** changed"| opsWf["deploy-ops.yml"]
    pathCheck -->|"apps/store/** changed"| storeWf["deploy-store.yml"]
    opsWf --> canaryOps["Job: canary"]
    canaryOps --> buildOps["Build ops for Haulx tenant<br/>TENANT_NAME=Haulx<br/>TENANT_DOMAIN=dashboard.haulxrecycling.com"]
    buildOps --> deployCanary["wrangler pages deploy<br/>--project-name=haulx-dashboard"]
    deployCanary --> healthCheck["Health Check<br/>POST /api action=health"]
    healthCheck -->|"200 OK"| deployAllOps["Job: deploy-all<br/>node deploy-all.js --app ops --exclude haulx"]
    healthCheck -->|"Not 200"| failPipeline["Pipeline FAILS<br/>Other tenants NOT deployed"]
    deployAllOps --> readReg["Read tenant list from<br/>platform-registry D1"]
    readReg --> batchDeploy["Deploy in batches of 3<br/>Build per tenant, wrangler pages deploy"]
    storeWf --> canaryStore["Job: canary"]
    canaryStore --> buildStore["Build store for Haulx tenant"]
    buildStore --> deploySf["Deploy storefront + admin"]
    deploySf --> deployAllStore["Job: deploy-all<br/>node deploy-all.js --app store --exclude haulx"]
```
| Workflow | Trigger Path | Canary Project | Health Check |
|---|---|---|---|
| deploy-ops.yml | apps/ops/**, scripts/deploy-all.js | haulx-dashboard | POST https://dashboard.haulxrecycling.com/api with {"action":"health"} |
| deploy-store.yml | apps/store/**, scripts/deploy-all.js | haulx-storefront + haulx-admin | None (deploy-only) |
| Secret | Used By | Purpose |
|---|---|---|
| CLOUDFLARE_API_TOKEN | Both workflows | Wrangler authentication for Pages deploy |
| CLOUDFLARE_ACCOUNT_ID | Both workflows | Target Cloudflare account |
| REGISTRY_D1_ID | deploy-all job | D1 database ID for reading tenant list |
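The canary health check that gates the deploy-all job can be sketched as follows. The endpoint and the {"action":"health"} payload come from the workflow table above; the function name and the injectable fetch parameter are illustrative:

```javascript
// Hypothetical sketch of the canary health-check gate. A non-200 response
// fails the pipeline, so the remaining tenants are never deployed.
async function checkHealth(url, fetchImpl = fetch) {
  const res = await fetchImpl(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'health' }),
  });
  if (res.status !== 200) {
    throw new Error(`canary health check failed with status ${res.status}`);
  }
  return res.json();
}
```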
For local development, each app has a hash-based smart deploy system. The deploy scripts (apps/ops/scripts/deploy.js, apps/store/scripts/deploy.js) maintain a .deploy-cache.json file that stores SHA-256 hashes of all tracked source files. On each run:
```mermaid
flowchart LR
    run["npm run deploy"] --> loadCache["Load .deploy-cache.json"]
    loadCache --> hashFiles["SHA-256 hash<br/>all tracked source files"]
    hashFiles --> compare{"Any hash<br/>changed?"}
    compare -->|"No"| skip["Skip deploy<br/>(nothing changed)"]
    compare -->|"Yes"| identify["Identify changed targets<br/>(dashboard, mobile, etc.)"]
    identify --> incBuild["Incremental build<br/>--only=dashboard,mobile"]
    incBuild --> wrangler["wrangler pages deploy"]
    wrangler --> updateCache["Update .deploy-cache.json<br/>with new hashes"]
```
Run npm run deploy:force (or pass --force) to bypass hash checking and deploy everything regardless of changes.
Each app has a custom scripts/build.js that assembles source templates into deployable HTML. The build system uses a server-side include pattern borrowed from Google Apps Script, where templates reference other templates via <?!= include('TemplateName'); ?> directives.
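A minimal sketch of how such include directives can be resolved recursively (the function name and the template map are illustrative, not build.js's actual internals):

```javascript
// Hypothetical sketch of recursive include resolution. Templates are keyed
// by name; each <?!= include('X'); ?> directive is replaced by template X,
// resolved recursively, with cycle detection along each include path.
function resolveIncludes(name, templates, seen = new Set()) {
  if (seen.has(name)) throw new Error(`circular include: ${name}`);
  seen.add(name);
  const source = templates[name];
  if (source === undefined) throw new Error(`missing template: ${name}`);
  return source.replace(
    /<\?!=\s*include\('([^']+)'\);\s*\?>/g,
    (_, inc) => resolveIncludes(inc, templates, new Set(seen))
  );
}
```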
```mermaid
flowchart TD
    subgraph source["Source Templates (apps/ops/templates/)"]
        main["Dashboard.html<br/>(entry point)"]
        css["Dashboard_CSS.html"]
        header["Dashboard_Header.html"]
        jobs["Dashboard_Jobs.html"]
        dispatch["Dashboard_Dispatch.html"]
        jsFile["Dashboard_JS.html"]
        more["...18 more templates"]
    end
    main -->|"include"| css
    main -->|"include"| header
    main -->|"include"| jobs
    main -->|"include"| dispatch
    main -->|"include"| jsFile
    main -->|"include"| more
    subgraph buildStep["build.js processing"]
        resolve["Resolve all includes<br/>recursively"]
        inject["Inject google.script.run shim<br/>into head"]
        overlay["Inject crew-login overlay<br/>after body"]
        tenant["Replace TENANT_NAME,<br/>TENANT_DOMAIN, etc."]
    end
    main --> resolve
    resolve --> inject
    inject --> overlay
    overlay --> tenant
    subgraph output["Build Output (apps/ops/dist/)"]
        indexHtml["index.html"]
        mobileHtml["mobile.html"]
        estimateHtml["estimate.html"]
        manifest["manifest.json"]
        funcsDir["functions/ (copied)"]
    end
    tenant --> indexHtml
    tenant --> mobileHtml
    tenant --> estimateHtml
    tenant --> manifest
    tenant --> funcsDir
```
| Target | Entry Template | Output File | Includes |
|---|---|---|---|
| dashboard | Dashboard.html | dist/index.html | Dashboard_CSS, Dashboard_Header, Dashboard_Equipment, Dashboard_Dumpsters, Dashboard_Dispatch, Dashboard_Jobs, Dashboard_JobForm, Dashboard_JS, Dashboard_AdminJS, Dashboard_AI, Dashboard_ReviewQueue, Dashboard_Invoicing, Dashboard_Fuel, Dashboard_Tracker, Dashboard_Admin, Dashboard_Estimates, Dashboard_Orders, Dashboard_Profitability, Dashboard_Messages |
| mobile | Mobile.html | dist/mobile.html | Mobile_CSS, Mobile_JS, Mobile_MyDay, Mobile_ActiveJob, Mobile_LoadTicket, Mobile_DumpTruckTicket, Mobile_Dumpsters, Mobile_DumpsterActions, Mobile_FuelReceipt, Mobile_ClosedTickets, Mobile_Tracker, Mobile_Estimate |
| estimate | Estimate_Public.html | dist/estimate.html | Standalone |
| manifest | Generated | dist/manifest.json | N/A |
| Target | Output Directory | Extra Processing |
|---|---|---|
| storefront | dist-storefront/ | CSS/JS extraction + minification via clean-css and terser. Outputs hashed asset files (assets/storefront.<hash>.css, storefront.<hash>.js). Storefront API shim injected. |
| admin | dist-admin/ | Admin shell + admin partials assembled. Admin API shim injected. |
The ops build.js injects two pieces of content that live in the build script itself, not in any template:
| Injection | Location | Purpose |
|---|---|---|
| google.script.run shim | Inside <head> | Replaces Apps Script's server-call API with fetch('/api') calls. Allows the same codebase to run on both Apps Script and Cloudflare Pages. |
| Crew-login overlay | After <body> | Phone-number based crew authentication for the mobile app. Injected as an HTML overlay with its own styles and logic. |
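The shim's shape can be sketched as follows. It mimics Apps Script's google.script.run chaining (withSuccessHandler / withFailureHandler) but routes each server call through fetch('/api') with an action-based payload. The factory name and internals here are illustrative assumptions, not the actual injected code:

```javascript
// Hypothetical sketch of the google.script.run shim. Any property access
// that is not a handler-setter becomes a server call: POST /api {action, args}.
function makeScriptRunShim(fetchImpl = fetch) {
  function builder(onSuccess, onFailure) {
    return new Proxy({}, {
      get(_, prop) {
        if (typeof prop !== 'string') return undefined;
        if (prop === 'withSuccessHandler') return (fn) => builder(fn, onFailure);
        if (prop === 'withFailureHandler') return (fn) => builder(onSuccess, fn);
        return (...args) =>
          fetchImpl('/api', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ action: prop, args }),
          })
            .then((r) => r.json())
            .then((data) => onSuccess && onSuccess(data))
            .catch((err) => onFailure && onFailure(err));
      },
    });
  }
  return builder(null, null);
}
```

Wiring this up as window.google = { script: { run: makeScriptRunShim() } } (or similar) would let existing google.script.run.serverFn(...) call sites run unchanged on Cloudflare Pages.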
New tenants are created via the provisioning worker (platform/provision/worker.js). It creates all required Cloudflare resources via the Cloudflare API, runs the database schema and seed, and registers the tenant in the platform registry. If any step fails, it rolls back all previously created resources.
```mermaid
sequenceDiagram
    participant Client as Admin UI / API
    participant PW as Provisioning Worker
    participant CF as Cloudflare API
    participant Reg as platform-registry D1
    Client->>PW: POST /provision<br/>{company_name, owner_email, industry}
    PW->>PW: Validate input, generate slug
    PW->>Reg: Check slug uniqueness
    Reg-->>PW: OK (no conflict)
    rect rgb(30, 40, 30)
        Note over PW,CF: Resource Creation (with rollback)
        PW->>CF: Create D1 database ({slug}-ops-db)
        CF-->>PW: database_id
        PW->>CF: Run schema SQL
        PW->>CF: Run seed SQL (admin user, defaults)
        PW->>CF: Create KV namespace ({slug}-ops-kv)
        CF-->>PW: namespace_id
        PW->>CF: Create R2 bucket ({slug}-ops-assets)
        CF-->>PW: bucket created
        PW->>CF: Create Pages project ({slug}-dashboard)
        CF-->>PW: project created
    end
    PW->>Reg: INSERT INTO tenants<br/>(slug, ids, domain, status=active)
    PW-->>Client: 201 {tenant_id, slug, domain}
```
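The "generate slug" step can be sketched as a simple normalization of company_name; the exact rules the worker applies are an assumption here:

```javascript
// Hypothetical sketch of slug derivation from company_name
// (e.g. "Acme Hauling" -> "acme-hauling").
function toSlug(companyName) {
  return companyName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics to '-'
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}
```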
| Binding / Variable | Source | Purpose |
|---|---|---|
| REGISTRY_DB | wrangler.toml D1 binding | Read/write tenant records |
| CF_ACCOUNT_ID | wrangler.toml var | Target Cloudflare account for resource creation |
| CF_API_TOKEN | Wrangler secret | Auth token with D1/KV/R2/Pages create permissions |
If any resource creation step fails, the worker calls cleanupResources(), which deletes the resources created so far in reverse order of creation. This prevents orphaned resources from accumulating in the Cloudflare account.
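This rollback pattern can be sketched as an undo stack: each successful creation step records how to delete itself, and a failure unwinds the stack in reverse. provisionWithRollback and the step objects below are illustrative, not the worker's actual code:

```javascript
// Hypothetical sketch of create-with-rollback. Each step.create() performs a
// Cloudflare API call and returns an async undo function for that resource.
async function provisionWithRollback(steps) {
  const undos = [];
  try {
    for (const step of steps) {
      const undo = await step.create();
      undos.push({ name: step.name, undo });
    }
  } catch (err) {
    // Delete previously created resources in reverse order of creation.
    for (const { undo } of undos.reverse()) await undo();
    throw err;
  }
  return undos.map((u) => u.name);
}
```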
By default, each new tenant is served at {slug}.haulxplatform.com. Custom domains can be configured later via the Cloudflare dashboard or API.
The storefront is an optional add-on for tenants. It is not provisioned by default -- it must be explicitly enabled via the Platform Admin UI or API. Enabling a store creates additional Cloudflare resources (separate Pages projects and D1 database) for the storefront and store admin.
```mermaid
flowchart TD
    trigger["Admin clicks 'Enable Storefront'"] --> apiCall["POST /api<br/>action: enableStorefront, slug"]
    apiCall --> provCall["Platform Admin calls<br/>Provisioning Worker<br/>POST /provision-store"]
    provCall --> createD1["Create D1: {slug}-store-db<br/>Run store schema + seed"]
    provCall --> createKV["Create KV: {slug}-store-kv"]
    createD1 --> createSfPages["Create Pages: {slug}-storefront"]
    createKV --> createAdminPages["Create Pages: {slug}-store-admin"]
    createSfPages --> updateReg["Update registry:<br/>store_enabled = true<br/>store_pages_project, store_d1_database_id, etc."]
    createAdminPages --> updateReg
    updateReg --> done["Tenant now has ops + store"]
```
The store uses a unique deployment pattern because the storefront and store admin share workers but need different Pages Functions. The deploy script temporarily creates a functions/ directory, deploys, then removes it:
```mermaid
flowchart LR
    subgraph sfDeploy["Storefront Deploy"]
        sf1["setupStorefrontFunctions()"]
        sf2["Create temp functions/<br/>- storefront-api.js<br/>- _shared/ (d1-backend, d1-router, kv-cache)"]
        sf3["wrangler pages deploy<br/>dist-storefront/<br/>--project-name=haulx-online"]
        sf4["cleanFunctions()<br/>Remove temp functions/"]
        sf1 --> sf2 --> sf3 --> sf4
    end
    subgraph adminDeploy["Store Admin Deploy"]
        ad1["setupAdminFunctions()"]
        ad2["Create temp functions/<br/>- api/[[path]].js (admin-api)<br/>- _shared/"]
        ad3["wrangler pages deploy<br/>dist-admin/<br/>--project-name=haulx-admin"]
        ad4["cleanFunctions()"]
        ad1 --> ad2 --> ad3 --> ad4
    end
```
| Worker File | Used By | Purpose |
|---|---|---|
| workers/storefront-api.js | Storefront | Product catalog, cart, checkout, blog APIs |
| workers/admin-api.js | Store Admin | Product CRUD, order management, analytics |
| workers/d1-backend.js | Both | Shared D1 query helpers |
| workers/d1-router.js | Both | Route-based D1 query dispatch |
| workers/kv-cache.js | Both | KV-based response caching layer |
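The kv-cache.js layer can be sketched as a standard read-through cache over a KV binding: check KV for a cached value, otherwise compute it and store it with a TTL. The function name, key scheme, and TTL below are illustrative assumptions:

```javascript
// Hypothetical sketch of a KV read-through response cache.
// kv is a Workers KV binding; compute() produces the fresh value on a miss.
async function cached(kv, key, ttlSeconds, compute) {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit); // cache hit: skip compute entirely
  const value = await compute();
  await kv.put(key, JSON.stringify(value), { expirationTtl: ttlSeconds });
  return value;
}
```

Because KV reads are eventually consistent, a short TTL keeps storefront responses fast without serving stale catalog data for long.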