# Earthbond Deployment Repository

Earthbond is now the primary deployment project in this repository, which is the canonical GitHub source for the Earthbond app; the running server remains the local Docker-based runtime environment. The repository keeps a workspace-level `projects/` split so that Earthbond and WIICCO site assets are no longer conceptually mixed.

Deployment source of truth:

- Remote repository: `git@github.com:robby1312/earthbond.git`
- Active deployment branch: `main`

Earthbond is a metadata-first geodetic data platform POC with strict CRS rules, audit lineage, and tenant schema isolation.
## Implemented in this phase

- Monorepo structure for backend APIs, workers, admin/client web UIs, contracts, and DB assets.
- PostgreSQL + PostGIS migration chain (`0001`-`0008`) including:
  - shared schemas (`core`, `crs`, `audit`, `ops`)
  - tenant provisioning function (`core.provision_tenant_schema`)
  - immutable audit ledger and evidence tables
  - strict CRS policy seed and default retention (365 days)
  - upload sessions, job results, and scraper schedule persistence
- Docker Compose full stack and local macOS developer workflow.
- LAZ/LAS processing hardening:
  - explicit LAZ backend support via `lazrs`
  - strict CRS + epoch + vertical validation with quarantine path
  - exact CRS->ECEF conversion with fixed ECEF anchor + local ENU coordinates
- Export workflows:
  - queue Autodesk-friendly `PTS` output (`/jobs/export/autodesk`)
  - filtered `CSV`/`JSONL` exports for ML training pipelines
  - classification summary artifact generation per export
- Open-data auto-search workflow:
  - canonical `queries`/`results_normalized`/`shortlist`/`download_queue` artifact generation
  - weighted ranking (overlap, resolution, recency, coverage, license, format readiness)
  - auto AOI derivation from processed LiDAR centroid metadata
  - optional background downloads + `summary.json` and `risk_report.json`
- Readable metadata + visualization:
  - metadata history list (`/projects/{project_id}/metadata`)
  - metadata detail (`/projects/{project_id}/metadata/{job_id}`)
  - client-side WebGL viewer (Three.js) with:
    - multi-file rendering from selected metadata rows
    - class/elevation/intensity filters + color modes
    - point-budget presets up to 1M points
    - point inspection and dense-cluster camera focus
- Admin storage management:
  - paginated storage listing (`/storage/objects`)
  - object deletion (`/storage/object`)
  - prefix delete with dry-run and safety controls (`/storage/prefix`)
  - optional DB cleanup marks related `upload_sessions`/`pointcloud_tiles` rows as `deleted`
- Multi-file grid indexing (vector DB style):
  - assign uploads to a `grid_id` at upload/ingest time
  - automatic tile indexing into PostGIS (`ops.pointcloud_tiles`)
  - grid definitions and aggregation in `ops.pointcloud_grids`
  - endpoints: `/projects/{project_id}/grids`, `/projects/{project_id}/grids/{grid_id}/tiles`, `/projects/{project_id}/grids/{grid_id}/summary`
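The weighted ranking used by the open-data auto-search can be sketched as a simple weighted sum over normalized factor scores. The factor names come from this README; the weights, 0..1 normalization, and function name below are illustrative assumptions, not the shipped values.

```python
# Hypothetical sketch of the open-data result ranking. Factor names are from
# the README above; these weights are illustrative, not the actual config.
WEIGHTS = {
    "overlap": 0.30,
    "resolution": 0.20,
    "recency": 0.15,
    "coverage": 0.15,
    "license": 0.10,
    "format_readiness": 0.10,
}

def rank_score(factors):
    """Combine per-factor scores (each normalized to 0..1) into one score."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

candidate = {
    "overlap": 0.9, "resolution": 0.7, "recency": 0.5,
    "coverage": 0.8, "license": 1.0, "format_readiness": 1.0,
}
print(round(rank_score(candidate), 3))  # → 0.805
```

Because the weights sum to 1, the combined score stays in 0..1 and candidates can be shortlisted by a single threshold.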
## Key directories

- `apps/control-plane-api`: tenant provisioning and control-plane endpoints
- `apps/data-plane-api`: ingest/catalog/CRS/analysis/evidence endpoints
- `apps/api-gateway`: proxy gateway to the control/data APIs
- `apps/worker-ingest`, `apps/worker-crs`, `apps/worker-audit`: asynchronous job processors
- `apps/admin-web`, `apps/client-web`: React/Vite POC frontends
- `db/migrations`: Alembic migrations
- `contracts/openapi`, `contracts/events`: API/event contracts
## Prerequisites
- Python 3.11+
- Node.js 20+
- Docker + Docker Compose
## Local macOS setup

- Bootstrap:

  ```sh
  scripts/dev/bootstrap_mac.sh
  ```

- Start infra:

  ```sh
  docker compose up -d postgres minio minio-init
  ```

- Activate the virtual environment:

  ```sh
  source .venv/bin/activate
  ```

- Apply migrations:

  ```sh
  DATABASE_URL=postgresql+psycopg://earthbond:earthbond@localhost:5433/earthbond alembic -c db/alembic.ini upgrade head
  ```

- Run APIs and workers (separate terminals):

  ```sh
  uvicorn --app-dir apps/control-plane-api/src control_plane_api.main:app --reload --port 8081
  uvicorn --app-dir apps/data-plane-api/src data_plane_api.main:app --reload --port 8082
  uvicorn --app-dir apps/api-gateway/src api_gateway.main:app --reload --port 8080
  DATABASE_URL=postgresql+psycopg://earthbond:earthbond@localhost:5433/earthbond python3 apps/worker-ingest/src/worker_ingest/main.py
  DATABASE_URL=postgresql+psycopg://earthbond:earthbond@localhost:5433/earthbond python3 apps/worker-crs/src/worker_crs/main.py
  DATABASE_URL=postgresql+psycopg://earthbond:earthbond@localhost:5433/earthbond python3 apps/worker-audit/src/worker_audit/main.py
  ```

- Run the web apps:

  ```sh
  npm run dev --prefix apps/admin-web
  npm run dev --prefix apps/client-web
  ```
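Migrations and all three workers above share the same `DATABASE_URL`. As a quick sanity check (a sketch, not project tooling), the DSN can be parsed with the standard library to confirm it targets the Compose-mapped Postgres on `localhost:5433`:

```python
from urllib.parse import urlsplit

# urlsplit treats "postgresql+psycopg" as an opaque scheme but still
# splits the netloc into username/host/port.
url = "postgresql+psycopg://earthbond:earthbond@localhost:5433/earthbond"
parts = urlsplit(url)
print(parts.hostname, parts.port, parts.path.lstrip("/"))
# → localhost 5433 earthbond
```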
## Docker full stack

```sh
cp .env.example .env
docker compose up --build
```

Services:

- Gateway: http://localhost:8080
- Control Plane API: http://localhost:8081
- Data Plane API: http://localhost:8082
- Admin Web: http://localhost:3100
- Client Web: http://localhost:3101
- WAN Edge Web: http://localhost
- Public HTTPS Proxy: https://app.wiicco.com (after DNS + `443` forwarding)
- Validators Dashboard: http://localhost:3101/dashboard.html
- Toquis Viewer: http://localhost:3101/viewer.html
- Manuals HTML (served by client web): http://localhost:3101/manuals/index.html
- MinIO API: http://localhost:19000
- MinIO Console: http://localhost:19001
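After `docker compose up`, a quick way to see which of the services above are actually listening is a plain TCP probe. This is an illustrative sketch using only the standard library; the service names and ports are taken from the list above.

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from the services list above.
for name, port in [("gateway", 8080), ("control-plane", 8081),
                   ("data-plane", 8082), ("minio-api", 19000)]:
    print(name, "up" if port_open("127.0.0.1", port) else "down")
```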
## WAN deployment

Point a DNS A record from your chosen subdomain to the public IP assigned by your ISP/router.

Recommended exposure model:

- `earthbond.yourdomain.com` -> this machine's public IP -> forward port `443` to host port `443`
- app entrypoint served by `public-proxy` -> `edge-web`
- client UI at `/`
- admin UI at `/admin/`
- same-origin APIs at `/data/*` and `/control/*`

Do not expose these directly to the WAN unless you are deliberately securing them separately:

- PostgreSQL: `5433`
- MinIO API: `19000`
- MinIO Console: `19001`
- internal API ports: `8080`, `8081`, `8082`

Current hardened compose behavior:

- only `443` binds on all interfaces
- `3100`, `3101`, `8080`, `8081`, `8082`, `19000`, `19001`, and `5433` bind to `127.0.0.1` only
- `/admin/` is protected by an extra HTTP basic-auth gate at the public proxy
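The hardened bindings above correspond to Compose port mappings along these lines (a sketch only, not the repository's actual compose file; service names are illustrative):

```yaml
services:
  public-proxy:
    ports:
      - "443:443"              # the only listener on all interfaces
  postgres:
    ports:
      - "127.0.0.1:5433:5432"  # loopback-only: unreachable from the WAN
  minio:
    ports:
      - "127.0.0.1:19000:9000"
      - "127.0.0.1:19001:9001"
```

Prefixing a mapping with a host IP restricts the published port to that interface, which is what keeps the database and MinIO off the WAN even though they are published to the host.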
If you are using a public hostname or WAN IP, update `.env`:

- `S3_PUBLIC_ENDPOINT`: your reachable object endpoint, if you intend to download raw artifacts externally
- `CORS_ALLOW_ORIGINS`: include your public hostname, if you ever call the APIs directly from another origin

Example for your current static IP:

- `earthbond.example.com` -> `47.180.2.49`
- open http://earthbond.example.com/
- admin at http://earthbond.example.com/admin/
- for TLS, set `PUBLIC_HOSTNAME` and `ACME_EMAIL` in `.env`, then browse https://earthbond.example.com/
## Validation

```sh
make validate-contracts
make validate-migrations
make validate-docs
make build-ui-manuals
make test
make smoke
make smoke-extended
make smoke-pointcloud
make smoke-pointcloud-quarantine
make fetch-ca-well-pack
make build-ca-stage2-queue
make run-ca-pack-workflow
make run-welllog-known-result-audit
python3 -m compileall apps libs scripts tests db/migrations/versions
```
## Point Cloud Output Notes

- Processing results are persisted in `ops.job_results.result` (JSONB).
- Upload metadata and reference links (e.g. sidecar role + related upload) are stored in `ops.upload_sessions.metadata`.
- Export jobs return artifact object keys under `<project_id>/derived/<upload_id or job_id>/autodesk/*.pts|*.asc|*.csv|*.jsonl`.
- The Autodesk import path for this POC supports:
  - `PTS` for point clouds
  - `ASC` (ESRI ASCII DEM) for 2D terrain in Civil 3D workflows
- 2D terrain preview endpoint: `GET /maps/terrain/preview`
  - returns a hillshade raster + contour GeoJSON + WGS84 bounds for map overlays
  - if the source CRS cannot be inferred from job metadata, pass `source_crs` (for example `EPSG:26910`)
- To combine multiple LAZ files into one logical area, use the same `grid_id` for each upload.
- The grid/tile vector index is stored in PostGIS and supports class/bbox tile filtering via the API.
## Notes

- Proxy-auth headers are required (`x-proxy-subject`; `x-proxy-tenant` for tenant-scoped endpoints).
- Raw access is modeled via analysis token issuance in `POST /analysis/jobs`.
- Evidence pack generation is performed by `worker-audit` from queued jobs.
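For direct API calls, the proxy-auth headers above can be assembled like this. The header names are from this README; the helper itself is an illustrative sketch, not project code.

```python
# Illustrative sketch: build the proxy-auth headers named in the notes above.
def proxy_auth_headers(subject, tenant=None):
    headers = {"x-proxy-subject": subject}
    if tenant is not None:
        # Required only for tenant-scoped endpoints.
        headers["x-proxy-tenant"] = tenant
    return headers

print(proxy_auth_headers("ops-user", tenant="acme"))
# → {'x-proxy-subject': 'ops-user', 'x-proxy-tenant': 'acme'}
```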
## Additional design docs

- `docs/INDEX.md`: master documentation index and stage-folder map
- `docs/architecture/SYSTEM_INTERACTION_MAP.md`: current connected-vs-disconnected runtime interaction map
- `docs/architecture/WELLLOG_POV_V0_2_INTEGRATION.md`: integration delta from the client `Earthbond_POV_v0.2` into this POC, with the required DB/API/worker/UI additions for forgotten well-log recovery + bypassed-pay triage
- `docs/operations/WELLLOG_POC_OPERATOR_MANUAL.md`: step-by-step API workflow to run normalization, interpretation, and bypassed-pay classification
- `docs/operations/WELLLOG_TRACEABILITY_AND_AUDIT.md`: trace signatures, drift-localization logic, and deterministic known-result audit workflow
- `docs/architecture/VISION_PHASE_STATUS.md`: current implementation status against the modular phase vision
- `docs/architecture/POV_TECHNICAL_QA_EVALUATION_IMPLEMENTATION_V1.md`: Q1-Q12 implementation blueprint with formulas, risk controls, and phased value strategy
- `docs/operations/CA_WELL_TEST_PACK_GUIDE.md`: real California oil/drilling/well-log test-pack fetch + validation + audit workflow