// chapter 01 — start here

Welcome to Zenska.ph backend development

This handbook covers everything from local setup to AWS migration. Read it once, refer to it daily. If something isn't here, ask a senior before guessing.

What is Zenska?

Zenska.ph is a trust-first multi-vendor beauty marketplace in the Philippines. Only verified, authorized sellers are allowed. Customers get AI-powered skin analysis to find the right products. We support COD (cash on delivery), GCash, and card payments with pickup logistics via J&T and Ninja Van. Our moat is trust — Shopee and Lazada cannot become curated marketplaces. We can, and we are.

  • Commission: 8% flat per successful order
  • COD limit per cart: ₱2,000 — hard cap, server-enforced
  • Vendor approval: 2–5 working days
The one rule every developer must memorize
WordPress / WooCommerce is read-only. No frontend code, no mobile app, no script ever writes directly to WooCommerce. All writes — orders, accounts, skin profiles, settlements — go through our Node.js API. This single rule is what makes the gradual WordPress migration possible without breaking anything.
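This rule can be enforced in code rather than by convention alone. Below is a minimal sketch of lib/woocommerce.js, the read-only wrapper used later in this handbook: it simply never exposes a write verb. The exact shape and the assertReadOnly guard are our illustration, not the final implementation; it assumes Node 20's built-in fetch and the WC_* variables from the .env template.

```javascript
// lib/woocommerce.js (sketch): the wrapper only knows how to read.
const WC_BASE_URL = process.env.WC_BASE_URL || 'https://zenska.ph/wp-json/wc/v3'

// WooCommerce REST auth is HTTP Basic with the consumer key/secret.
function wcAuthHeader() {
  const key = process.env.WC_CONSUMER_KEY || ''
  const secret = process.env.WC_CONSUMER_SECRET || ''
  return 'Basic ' + Buffer.from(`${key}:${secret}`).toString('base64')
}

// Guard: reject anything that is not a read. Illustrative helper (our name).
export function assertReadOnly(method) {
  if (String(method).toUpperCase() !== 'GET') {
    throw new Error(`WooCommerce is read-only: ${method} is not allowed`)
  }
}

// The only exported verb. There is no wcPost/wcPut/wcDelete, by design.
export async function wcGet(path) {
  assertReadOnly('GET')
  const res = await fetch(`${WC_BASE_URL}${path}`, {
    headers: { Authorization: wcAuthHeader() }
  })
  if (!res.ok) throw new Error(`WooCommerce GET ${path} failed: ${res.status}`)
  return res.json()
}
```

Any module that needs catalog data imports wcGet; there is nothing to import for writes, so the rule cannot be broken by accident.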
Current tech landscape
Already live — don't touch
  • WordPress + WooCommerce on Hostinger — product catalog, vendor listings
  • Dokan plugin — vendor dashboards, COD tagging per product
  • Vendor onboarding already active
  • AWS S3 — product images and videos already stored here
  • Cloudflare — DNS and CDN in front of Hostinger
We're building now
  • Node.js API on Railway — shared backend for web + future mobile
  • PostgreSQL on Railway — our own database, replaces WC DB eventually
  • Redis on Railway — sessions, cart state, rate limiting
  • Typesense Cloud — fast product search replacing WordPress search
  • AI skin service — separate Railway service, quiz + ingredient matching
// chapter 02 — planning

AI-assisted development timeline

Used aggressively, Claude and GitHub Copilot cut development time by 40–60%. Here's exactly what that looks like for a 4-person team building the Zenska backend.

Without AI vs. with AI — the real numbers
  • Without AI tools: 16–18 weeks to full Phase 1
  • With Claude + Copilot: 8–10 weeks to full Phase 1
  • Time saved: ~45% on boilerplate & integrations
Feature-by-feature timeline estimate
Feature / module | Without AI | With Claude + Copilot | What AI handles
Project setup + Railway deploy | 3 days | 4 hours | Boilerplate, Railway config, GitHub Actions CI/CD setup
Prisma schema design | 4 days | 1 day | Claude generates full schema from description, you review and tweak
Auth (JWT, OTP, refresh) | 5 days | 1.5 days | Copilot writes route handlers, Claude debugs edge cases
Typesense setup + WC sync | 4 days | 1 day | Full sync script generated by Claude in one prompt
Order service + COD logic | 6 days | 2.5 days | Claude writes multi-vendor split logic, Copilot fills repetitive handlers
PayMongo integration | 5 days | 1.5 days | Claude reads PayMongo docs and writes integration code on request
Logistics APIs (J&T, Ninja Van) | 5 days | 2 days | Copilot generates API client wrappers, Claude handles webhook parsing
AI skin service (quiz + matching) | 8 days | 3 days | Claude designs the ingredient matrix, writes matching algorithm
SMS + email notifications | 3 days | 0.5 days | Copy-paste from Claude output, replace credentials
Postman collection (all endpoints) | 3 days | 4 hours | Claude generates full Postman JSON collection on request
Unit tests (Jest) | 5 days | 1.5 days | Copilot writes test cases from function signatures automatically
S3 pre-signed URL uploads | 2 days | 3 hours | Standard Claude output, well-documented pattern
Sprint-by-sprint plan (8 weeks total)
Sprint | Senior 1 — Backend | Senior 2 — Frontend/Mobile | Junior 1 | Junior 2
Week 1–2 | Railway setup, Prisma schema, auth endpoints | WordPress JS integration layer, search UI | WC product setup, Dokan COD tagging | AI service skeleton, ingredient table seed
Week 3–4 | Order service, COD validation, vendor split logic | Typesense search UI, skin quiz frontend | Product CSV import, Postman collection start | J&T + Ninja Van API integration
Week 5–6 | PayMongo integration, webhook handling | Checkout UI, order tracking pages | QA flows 1–5, Postman complete | SMS/email notifications, QA flows 6–10
Week 7–8 | Integration testing, Railway performance tuning | Frontend integration testing, bug fixes | Regression testing, documentation | Unit tests for AI service, load testing
How to use Claude effectively — specific prompts that work
Use these exact prompt patterns with Claude
  • For boilerplate: "Write a Fastify route handler for POST /api/orders that validates with Zod, checks COD eligibility, creates a Prisma record, and returns our standard {success, data} response format."
  • For integrations: "Write a Node.js function that calls the PayMongo API to create a GCash payment intent. Use axios. Include error handling and return the checkout URL."
  • For debugging: Paste the exact error + the function code + "What is wrong and how do I fix it?"
  • For schema design: "Design a PostgreSQL schema for a multi-vendor marketplace with users, vendors, products, orders, order_items, and settlements. Use Prisma schema syntax."
  • For testing: "Write Jest unit tests for this COD eligibility function. Test all edge cases: total over limit, non-COD items, new account limit, and happy path."
How to use GitHub Copilot effectively
  • Write the function name and JSDoc comment first — Copilot reads your comment and generates the body. The more descriptive your comment, the better the output.
  • Tab-complete repetitive patterns — once you write one Fastify route, Copilot generates the next one from the pattern. Perfect for bulk endpoint creation.
  • Use Copilot Chat for inline explanations — highlight unfamiliar code, ask "explain this". Faster than Google for library-specific questions.
  • Never accept without reading — Copilot gets Prisma queries wrong about 30% of the time. Always read before pressing Tab, especially for database operations.
  • Don't use Copilot for security-critical code — JWT verification, payment webhook signature checking, and COD fraud logic must be written manually and reviewed by a senior.
// chapter 03 — planning

Development phases

Three phases. The website stays live throughout. We build alongside it and migrate gradually. Customers never experience downtime.

✓ Done
Website live
WP + WooCommerce. Vendors onboarding. Products listed. S3 for media.
→ Phase 1 — Now
Build backend
Node.js on Railway. PostgreSQL. Redis. Typesense. Auth. AI skin service.
→ Phase 2 — Month 4–9
Migrate web
Move checkout, orders off WooCommerce. Web hits our Node.js API.
→ Phase 3 — Month 10+
Mobile app
React Native. Same Node.js API. No new backend needed.
Week-by-week detail
Week 1–2
Project skeleton + Railway deploy
Node.js project pushed to GitHub, connected to Railway auto-deploy. PostgreSQL and Redis plugins added. Environment variables set. Prisma migrations running. Hello-world endpoint live on the Railway domain.
Week 3–4
Auth + Typesense search live
Customer registration, login, JWT, OTP. Typesense Cloud provisioned, product sync from WooCommerce running every 30 minutes. Search endpoint live. WordPress search box replaced by Typesense in week 4.
Week 5–6
Orders, AI skin, logistics
Separate Railway service for AI skin. Quiz flow, ingredient table, recommendations endpoint. COD validation logic. Order creation API. J&T and Ninja Van pickup integration. SMS notifications via Semaphore.
Week 7–8
Payments + integration testing
PayMongo — GCash, Mastercard, Visa. Email notifications via Resend. Full end-to-end test of all 10 flows from the flow document. Load test Railway. Postman collection complete. Phase 1 done.
// chapter 04 — phase 1

Project setup

Follow these steps exactly on day one. Don't skip steps or reorder them.

Tools to install before anything else
  • Node.js 20 LTS
  • npm 10+
  • Postman desktop app
  • Railway CLI
  • VS Code
  • Prisma VS Code extension
  • ESLint + Prettier
  • pgAdmin or TablePlus
  • Git
  • GitHub Desktop (juniors)
Clone and run locally
terminal
# Clone the repo
git clone https://github.com/zenska/backend.git
cd backend

# Install all dependencies
npm install

# Copy env template — NEVER commit the real .env
cp .env.example .env
# Now open .env and fill in your local values

# Run Prisma migrations (creates tables in local Postgres)
npx prisma migrate dev

# Seed ingredient compatibility data
npx prisma db seed

# Start development server with hot reload
npm run dev
# → Server running at http://localhost:3000
Git branch rules
Never push directly to main. Not juniors, not seniors. Every change goes through a Pull Request. Main branch = production.
git workflow
# Create your feature branch — always from staging
git checkout staging
git pull origin staging
git checkout -b feature/yourname-what-you-built

# Good branch name examples:
#   feature/jay-typesense-sync
#   feature/maria-auth-endpoints
#   feature/carlo-cod-validation

# Branch hierarchy:
#   main       ← production · auto-deploys to Railway · seniors merge here only
#   staging    ← test here first · all PRs target this branch
#   feature/*  ← your daily work
Complete .env file
Never commit .env to GitHub. It is already in .gitignore. Add production secrets only in Railway's Variables tab, never anywhere else.
.env.example — copy this, fill in real values
# ── Server ──
NODE_ENV=development
PORT=3000

# ── Database (Railway auto-fills these in production) ──
DATABASE_URL="postgresql://postgres:password@localhost:5432/zenska_dev"
REDIS_URL="redis://localhost:6379"

# ── Auth ──
JWT_SECRET="run: openssl rand -hex 32"
JWT_EXPIRES_IN="7d"

# ── WooCommerce (read-only REST API) ──
WC_BASE_URL="https://zenska.ph/wp-json/wc/v3"
WC_CONSUMER_KEY="ck_..."
WC_CONSUMER_SECRET="cs_..."

# ── Typesense ──
TYPESENSE_HOST="xxx.a1.typesense.net"
TYPESENSE_PORT="443"
TYPESENSE_PROTOCOL="https"
TYPESENSE_API_KEY="your-admin-api-key"
TYPESENSE_SEARCH_KEY="your-search-only-key"

# ── PayMongo ──
PAYMONGO_SECRET_KEY="sk_test_..."
PAYMONGO_PUBLIC_KEY="pk_test_..."
PAYMONGO_WEBHOOK_SECRET="whsk_..."

# ── SMS (Semaphore PH) ──
SEMAPHORE_API_KEY="your-key"
SEMAPHORE_SENDER_NAME="ZENSKA"

# ── Email (Resend) ──
RESEND_API_KEY="re_..."
EMAIL_FROM="orders@zenska.ph"

# ── AWS S3 ──
AWS_BUCKET_NAME="zenska-media"
AWS_REGION="ap-southeast-1"
AWS_ACCESS_KEY_ID="AKIA..."
AWS_SECRET_ACCESS_KEY="..."
// chapter 05 — phase 1

Railway &
databases

Railway hosts our Node.js backend, PostgreSQL, and Redis. Setup takes 30 minutes. Here's the complete walkthrough.

Step 1 — Create the project on Railway
  1. Go to railway.app → click New Project → Deploy from GitHub repo → select zenska/backend
  2. Click + New → Database → Add PostgreSQL. Railway provisions Postgres 15 in ~10 seconds. The DATABASE_URL variable is auto-injected into your service.
  3. Click + New → Database → Add Redis. Same process — REDIS_URL auto-injected.
  4. Go to your Node.js service → Variables tab → add all remaining env vars (JWT_SECRET, TYPESENSE keys, PayMongo keys, etc.)
  5. Every push to main branch auto-deploys in ~60 seconds. Watch the Deploy tab for build logs.
Railway CLI — useful commands
terminal
# Install Railway CLI
npm install -g @railway/cli

# Login
railway login

# Link your local project to Railway
railway link

# Run Prisma migrations on production Railway database
railway run npx prisma migrate deploy

# Open a shell into your production container
railway shell

# View live production logs
railway logs

# Get a specific environment variable value
railway variables get DATABASE_URL
Prisma schema — core tables
prisma/schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id          String       @id @default(cuid())
  email       String       @unique
  phone       String?      @unique
  password    String
  name        String
  createdAt   DateTime     @default(now())
  orders      Order[]
  skinProfile SkinProfile?
}

model Order {
  id            String      @id @default(cuid())
  userId        String
  user          User        @relation(fields: [userId], references: [id])
  wcOrderRef    String?     // WC order ID — bridge field
  status        OrderStatus @default(PENDING)
  paymentMethod String      // COD | GCASH | CARD
  totalAmount   Float
  codEligible   Boolean     @default(false)
  createdAt     DateTime    @default(now())
  items         OrderItem[]
  settlement    Settlement?
}

model OrderItem {
  id          String @id @default(cuid())
  orderId     String
  order       Order  @relation(fields: [orderId], references: [id])
  wcProductId Int
  vendorId    String
  name        String
  price       Float
  quantity    Int
}

model SkinProfile {
  id          String   @id @default(cuid())
  userId      String   @unique
  user        User     @relation(fields: [userId], references: [id])
  skinType    String
  concerns    String[]
  lastScanned DateTime @default(now())
}

model Settlement {
  id          String    @id @default(cuid())
  orderId     String    @unique
  order       Order     @relation(fields: [orderId], references: [id])
  vendorId    String
  grossAmount Float
  commission  Float     // 8% of grossAmount
  netAmount   Float
  settledAt   DateTime?
  status      String    @default("pending")
}

enum OrderStatus {
  PENDING
  CONFIRMED
  SHIPPED
  DELIVERED
  CANCELLED
  REFUNDED
}
Running migrations
terminal
# Development — creates migration file + applies it to local DB
npx prisma migrate dev --name add-settlements

# Production on Railway — applies pending migrations only
railway run npx prisma migrate deploy

# View all data in browser GUI
npx prisma studio
# → Opens at http://localhost:5555

# Reset local database (NEVER on production)
npx prisma migrate reset
// chapter 06 — phase 1 · new

Connecting services
from your codebase

This chapter shows exactly how PostgreSQL, Redis, and Typesense connect from your Node.js code to Railway and Typesense Cloud. Every connection pattern, every config file, every gotcha.

How connections work on Railway
Railway services in the same project talk to each other over a private internal network. Your Node.js service connects to Postgres and Redis using the DATABASE_URL and REDIS_URL environment variables that Railway auto-injects. You never hardcode these. You never need to know the IP address. Railway handles it.
PostgreSQL — via Prisma ORM lib/prisma.js
1. Install the dependencies
terminal
npm install @prisma/client
npm install -D prisma
2. Create the singleton client — import this in every module that needs the database. Never create a new PrismaClient() inside a module.
src/lib/prisma.js
import { PrismaClient } from '@prisma/client'

// Reuse the same instance across hot-reloads in development
const globalForPrisma = globalThis

const prisma =
  globalForPrisma.prisma ??
  new PrismaClient({
    log: process.env.NODE_ENV === 'development'
      ? ['query', 'error', 'warn']
      : ['error']
  })

if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma

export default prisma
3. Use it in any module — the DATABASE_URL env var does the connection work
src/modules/auth/auth.service.js
import prisma from '../../lib/prisma.js'

export async function createUser(email, hashedPassword, name) {
  return prisma.user.create({
    data: { email, password: hashedPassword, name }
  })
}

export async function findUserByEmail(email) {
  return prisma.user.findUnique({ where: { email } })
}

export async function getUserOrders(userId) {
  return prisma.order.findMany({
    where: { userId },
    include: { items: true },
    orderBy: { createdAt: 'desc' }
  })
}
4. On Railway, the DATABASE_URL is already set. Locally, set it in your .env pointing to your local Postgres instance.
.env — local development only
# Local Postgres (install Postgres locally or use Docker)
DATABASE_URL="postgresql://postgres:yourpassword@localhost:5432/zenska_dev"

# Railway auto-provides this in production — looks like:
# DATABASE_URL="postgresql://postgres:xK9mP...@containers-us-west.railway.app:5894/railway"
Redis — via ioredis lib/redis.js
1. Install ioredis
terminal
npm install ioredis
2. Create the singleton Redis client
src/lib/redis.js
import Redis from 'ioredis'

const redis = new Redis(process.env.REDIS_URL, {
  maxRetriesPerRequest: 3,
  enableReadyCheck: false,
  retryStrategy(times) {
    if (times > 3) return null // stop retrying
    return Math.min(times * 200, 1000)
  }
})

redis.on('error', (err) => console.error('Redis error:', err.message))
redis.on('connect', () => console.log('Redis connected'))

export default redis
3. Use it for sessions, caching, rate limiting, and OTP codes
examples — how to use Redis in modules
import redis from '../../lib/redis.js'

// Save a JWT refresh token (expires in 7 days)
await redis.set(`refresh:${userId}`, refreshToken, 'EX', 60 * 60 * 24 * 7)

// Get it back
const token = await redis.get(`refresh:${userId}`)

// Save OTP code (expires in 5 minutes)
await redis.set(`otp:${phone}`, otpCode, 'EX', 300)

// Rate limiting — increment a counter
const calls = await redis.incr(`ratelimit:${ip}`)
if (calls === 1) await redis.expire(`ratelimit:${ip}`, 60)
if (calls > 100) throw new Error('Rate limit exceeded')

// Cache search results for 5 minutes
const cacheKey = `search:${query}:${filters}`
const cached = await redis.get(cacheKey)
if (cached) return JSON.parse(cached)
// ... run search ...
await redis.set(cacheKey, JSON.stringify(results), 'EX', 300)

// Delete a key (logout — invalidate refresh token)
await redis.del(`refresh:${userId}`)
4. Local Redis setup — install Redis locally or use Docker
terminal — local Redis options
# Option A: Install Redis directly (macOS)
brew install redis
brew services start redis

# Option A: Install Redis (Ubuntu/WSL)
sudo apt install redis-server
sudo service redis-server start

# Option B: Docker (works on all systems)
docker run -d -p 6379:6379 redis:alpine

# Test your connection
redis-cli ping
# → PONG (it's working)

# Your .env for local:
# REDIS_URL="redis://localhost:6379"
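The INCR + EXPIRE calls shown in the usage examples form a fixed-window rate limiter. As an illustration, here is the same logic as a pure in-memory function (the factory name makeRateLimiter is ours), handy for unit-testing rateLimit.middleware.js without a live Redis; in production the Map becomes the two Redis calls shown above.

```javascript
// Fixed-window rate limiter sketch. In-memory stand-in for redis.incr + redis.expire.
export function makeRateLimiter({ limit = 100, windowMs = 60_000, now = Date.now } = {}) {
  const windows = new Map() // key → { count, resetAt }

  return function hit(key) {
    const t = now()
    let w = windows.get(key)
    // Start a fresh window when none exists or the old one expired
    if (!w || t >= w.resetAt) {
      w = { count: 0, resetAt: t + windowMs }
      windows.set(key, w)
    }
    w.count++
    return { allowed: w.count <= limit, remaining: Math.max(0, limit - w.count) }
  }
}
```

Injecting `now` makes window expiry trivially testable, which is the point of the pure version.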
Typesense — via typesense Node.js client lib/typesense.js
1. Sign up at cloud.typesense.org → create a cluster in Singapore region → copy your credentials
2. Install the client
terminal
npm install typesense
3. Create the client — uses your Typesense Cloud credentials from env vars
src/lib/typesense.js
import Typesense from 'typesense'

export const tsClient = new Typesense.Client({
  nodes: [{
    host: process.env.TYPESENSE_HOST,
    port: Number(process.env.TYPESENSE_PORT),
    protocol: process.env.TYPESENSE_PROTOCOL
  }],
  apiKey: process.env.TYPESENSE_API_KEY,
  connectionTimeoutSeconds: 5,
  retryIntervalSeconds: 0.1,
  numRetries: 3
})

// Product collection schema — run once to create the index
export const PRODUCTS_SCHEMA = {
  name: 'products',
  fields: [
    { name: 'id', type: 'string' },
    { name: 'name', type: 'string' },
    { name: 'vendor_name', type: 'string', facet: true },
    { name: 'category', type: 'string', facet: true },
    { name: 'price', type: 'float', facet: true },
    { name: 'cod_eligible', type: 'bool', facet: true },
    { name: 'skin_type', type: 'string[]', facet: true },
    { name: 'rating', type: 'float' },
    { name: 'image_url', type: 'string', index: false },
  ],
  default_sorting_field: 'rating'
}

// Helper to create the collection if it doesn't exist
export async function ensureCollection() {
  try {
    await tsClient.collections('products').retrieve()
    console.log('Typesense collection exists')
  } catch (e) {
    await tsClient.collections().create(PRODUCTS_SCHEMA)
    console.log('Typesense collection created')
  }
}
4. Initialize the collection when your app starts
src/app.js
import Fastify from 'fastify'
import { ensureCollection } from './lib/typesense.js'

const app = Fastify({ logger: true })

const start = async () => {
  await ensureCollection() // creates Typesense index if needed
  await app.listen({ port: process.env.PORT || 3000, host: '0.0.0.0' })
}

start()
5. Searching from a route — the full search handler
src/modules/search/search.routes.js
import { tsClient } from '../../lib/typesense.js'
import redis from '../../lib/redis.js'

export async function searchHandler(req, reply) {
  const { q = '*', cod, category, max_price, skin_type, page = '1' } = req.query

  // Build cache key from query params
  const cacheKey = `search:${q}:${cod}:${category}:${max_price}:${skin_type}:${page}`
  const cached = await redis.get(cacheKey)
  if (cached) return reply.send(JSON.parse(cached))

  // Build filter string
  const filters = [
    cod === 'true' ? 'cod_eligible:true' : null,
    category ? `category:=${category}` : null,
    max_price ? `price:<${max_price}` : null,
    skin_type ? `skin_type:=[${skin_type}]` : null,
  ].filter(Boolean).join(' && ')

  const results = await tsClient.collections('products').documents().search({
    q,
    query_by: 'name,vendor_name,category',
    filter_by: filters || undefined,
    sort_by: 'rating:desc',
    per_page: 24,
    page: Number(page),
    typo_tokens_threshold: 1,
    highlight_full_fields: 'name'
  })

  const response = {
    hits: results.hits.map(h => h.document),
    total: results.found,
    page: results.page
  }

  // Cache for 5 minutes
  await redis.set(cacheKey, JSON.stringify(response), 'EX', 300)
  return reply.send(response)
}
6. Syncing products from WooCommerce into Typesense — runs every 30 minutes
src/modules/search/sync.job.js
import { tsClient } from '../../lib/typesense.js'
import { wcGet } from '../../lib/woocommerce.js'

export async function syncProductsToTypesense() {
  console.log('Starting WC → Typesense sync...')
  let page = 1
  let totalSynced = 0

  while (true) {
    const products = await wcGet(`/products?per_page=100&page=${page}&status=publish`)
    if (!products.length) break

    const docs = products.map(p => ({
      id: String(p.id),
      name: p.name,
      vendor_name: getMeta(p, '_dokan_vendor_name') || '',
      category: p.categories[0]?.name || 'General',
      price: parseFloat(p.price) || 0,
      cod_eligible: getMeta(p, '_cod_eligible') === 'yes',
      skin_type: p.tags
        .filter(t => ['oily', 'dry', 'combination', 'sensitive', 'normal'].includes(t.slug))
        .map(t => t.slug),
      rating: parseFloat(p.average_rating) || 0,
      image_url: p.images[0]?.src || '',
    }))

    await tsClient.collections('products').documents().import(docs, { action: 'upsert' })
    totalSynced += docs.length
    page++
  }

  console.log(`Sync complete: ${totalSynced} products`)
}

function getMeta(product, key) {
  return product.meta_data?.find(m => m.key === key)?.value
}

// Schedule in app.js using node-cron:
// import cron from 'node-cron'
// cron.schedule('*/30 * * * *', syncProductsToTypesense)
Verify all connections are working
src/routes/health.js — add this endpoint first
import prisma from '../lib/prisma.js'
import redis from '../lib/redis.js'
import { tsClient } from '../lib/typesense.js'

// Note: assumes the Fastify instance `app` from app.js is in scope
// (or register this file as a Fastify plugin).
app.get('/health', async (req, reply) => {
  const checks = {}

  // Check Postgres
  try {
    await prisma.$queryRaw`SELECT 1`
    checks.postgres = 'ok'
  } catch {
    checks.postgres = 'error'
  }

  // Check Redis
  try {
    await redis.ping()
    checks.redis = 'ok'
  } catch {
    checks.redis = 'error'
  }

  // Check Typesense
  try {
    await tsClient.health.retrieve()
    checks.typesense = 'ok'
  } catch {
    checks.typesense = 'error'
  }

  const allOk = Object.values(checks).every(v => v === 'ok')
  reply.status(allOk ? 200 : 503).send({
    status: allOk ? 'healthy' : 'degraded',
    checks
  })
})

// Test it: GET http://localhost:3000/health
// Expected: { "status": "healthy", "checks": { "postgres": "ok", "redis": "ok", "typesense": "ok" } }
Build this health endpoint in Week 1 before anything else. It's the fastest way to confirm all three services are reachable from your Node.js code. Check it after every Railway deploy.
// chapter 07 — phase 1

Folder structure

One codebase, clean domain separation. Each folder owns one business area. Nothing crosses boundaries without going through the API layer.

Complete folder map
zenska-backend/
├── src/
│   ├── modules/                     ← one folder per business domain
│   │   ├── auth/
│   │   │   ├── auth.routes.js       ← POST /auth/login, /register, /refresh, /logout
│   │   │   ├── auth.service.js      ← hash password, compare, issue JWT
│   │   │   ├── auth.schema.js       ← Zod validation for all auth inputs
│   │   │   └── auth.test.js         ← Jest unit tests
│   │   │
│   │   ├── catalog/
│   │   │   ├── catalog.routes.js    ← GET /products, /products/:id, /categories
│   │   │   ├── catalog.service.js   ← reads from WooCommerce REST API (read-only)
│   │   │   └── catalog.schema.js
│   │   │
│   │   ├── orders/
│   │   │   ├── orders.routes.js     ← POST /orders, GET /orders, GET /orders/:id
│   │   │   ├── orders.service.js    ← COD check, multi-vendor split, commission calc
│   │   │   ├── orders.schema.js
│   │   │   └── orders.test.js       ← tests for COD logic especially
│   │   │
│   │   ├── payments/
│   │   │   ├── payments.routes.js   ← POST /payments/intent, POST /payments/webhook
│   │   │   ├── payments.service.js  ← PayMongo API, webhook signature verify
│   │   │   └── payments.schema.js
│   │   │
│   │   ├── search/
│   │   │   ├── search.routes.js     ← GET /search?q=&cod=&category=&skin_type=
│   │   │   ├── search.service.js    ← Typesense queries + Redis cache
│   │   │   └── sync.job.js          ← WC → Typesense cron, runs every 30 min
│   │   │
│   │   ├── skin/
│   │   │   ├── skin.routes.js       ← POST /skin/analyze, GET /skin/routine
│   │   │   ├── skin.service.js      ← quiz logic, ingredient compatibility matching
│   │   │   └── skin.seed.js         ← 200+ ingredient pairs: safe / conflict / caution
│   │   │
│   │   ├── vendors/
│   │   │   ├── vendors.routes.js    ← GET /vendors/:id, GET /vendors/:id/products
│   │   │   └── vendors.service.js   ← reads WC, writes settlements to Postgres
│   │   │
│   │   └── notifications/
│   │       ├── sms.service.js       ← Semaphore PH wrapper: sendSMS(phone, message)
│   │       └── email.service.js     ← Resend wrapper: sendEmail(to, subject, html)
│   │
│   ├── middleware/
│   │   ├── auth.middleware.js       ← verify JWT, attach req.user = { id, email }
│   │   ├── validate.middleware.js   ← run Zod schema, return 400 on invalid input
│   │   ├── rateLimit.middleware.js  ← Redis-based: 100 req/min per IP
│   │   └── errorHandler.js          ← catch all errors, return { success:false, error:{} }
│   │
│   ├── lib/                         ← singleton clients — import from here, never re-create
│   │   ├── prisma.js                ← single PrismaClient instance
│   │   ├── redis.js                 ← single ioredis instance
│   │   ├── typesense.js             ← Typesense client + schema + ensureCollection()
│   │   ├── s3.js                    ← AWS S3 client + getPresignedUploadUrl(key)
│   │   └── woocommerce.js           ← WC REST API axios wrapper (READ ONLY)
│   │
│   └── app.js                       ← Fastify app, register plugins + routes + hooks
│
├── prisma/
│   ├── schema.prisma                ← THE database schema, never edit migrations manually
│   ├── seed.js                      ← ingredient compatibility data + test users
│   └── migrations/                  ← auto-generated by Prisma, commit these to git
│
├── ai-service/                      ← SEPARATE Railway service deployment
│   ├── app.js
│   └── src/
│       └── skin-analyzer.js
│
├── .env.example                     ← template — safe to commit, no real values
├── .env                             ← NEVER commit. Already in .gitignore.
├── .gitignore
├── package.json
└── railway.toml                     ← start command: "node src/app.js"
Critical rule about lib/ files
Never create a new PrismaClient(), Redis(), or Typesense.Client() inside a module file. Always import from lib/. Creating multiple instances causes connection pool exhaustion — a production crash that is very hard to debug.
// chapter 08 — phase 1

Typesense search

Replaces WordPress search with instant, typo-tolerant, filterable product search. Customers filter by COD, skin type, price, and brand — all in under 50ms.

Full connection code for Typesense is in the Connecting services chapter. This chapter covers the cloud setup and sync logic only.
Typesense Cloud setup (one-time)
  1. Go to cloud.typesense.org → create account → New Cluster → choose Singapore region
  2. When cluster is ready, go to API Keys → copy the Admin API Key and the Search-Only API Key. Use Admin key server-side only. Use Search-Only key if ever exposing search to browser JS.
  3. Copy Host (looks like xxx.a1.typesense.net), Port (443), Protocol (https) → paste all into Railway Variables
  4. Your collection is created automatically on first app start via the ensureCollection() function in lib/typesense.js
WooCommerce → Typesense sync job

The full sync code is in the Connecting services chapter. Schedule it in app.js:

terminal — install cron scheduler
npm install node-cron
src/app.js — add the sync schedule
import cron from 'node-cron'
import { syncProductsToTypesense } from './modules/search/sync.job.js'

// Run sync every 30 minutes
cron.schedule('*/30 * * * *', () => {
  syncProductsToTypesense().catch(console.error)
})

// Also run once on startup to populate on fresh deploy
syncProductsToTypesense().catch(console.error)
Connecting WordPress search to Typesense

Add this JavaScript to a file enqueued from your theme's functions.php (or a small custom plugin). No third-party search plugin is needed — it's a pure JS override of the search form.

WordPress theme — search override
// Add to your theme's JS file (enqueued via functions.php)
document.addEventListener('DOMContentLoaded', () => {
  const searchInput = document.querySelector('.search-field')
  const resultsContainer = document.getElementById('search-results-dropdown')
  if (!searchInput) return

  let debounceTimer
  searchInput.addEventListener('input', (e) => {
    clearTimeout(debounceTimer)
    debounceTimer = setTimeout(async () => {
      const q = e.target.value.trim()
      if (q.length < 2) { resultsContainer.innerHTML = ''; return }

      const res = await fetch(`https://api.zenska.ph/api/search?q=${encodeURIComponent(q)}&per_page=5`)
      const { hits } = await res.json()

      resultsContainer.innerHTML = hits.map(p => `
        <a href="/product/${p.id}" class="search-result-item">
          <img src="${p.image_url}" width="40" height="40">
          <div>
            <div class="sr-name">${p.name}</div>
            <div class="sr-price">₱${p.price.toLocaleString()}</div>
          </div>
        </a>`).join('')
    }, 250) // 250ms debounce
  })
})
// chapter 09 — phase 1

Postman & API documentation

Postman is the contract between backend and everyone else. An endpoint is not done until it has a Postman entry with a working example and documented response.

Team workspace setup
  1. Go to postman.com → New Team Workspace → name it "Zenska API"
  2. Invite all 4 developers. Seniors can edit. Juniors can view and run.
  3. Create a Collection "Zenska Backend v1" with folders: Auth / Products / Search / Orders / Payments / Skin / Vendors / Admin
  4. Create 2 Environments: LOCAL (base_url=http://localhost:3000) and RAILWAY (base_url=https://yourapp.up.railway.app)
Auto-save JWT token after login
Postman → POST /auth/login → Tests tab
const res = pm.response.json()

if (res.data?.token) {
  pm.environment.set("jwt_token", res.data.token)
  console.log("✓ JWT saved to environment")
}

pm.test("Status 200", () => pm.response.to.have.status(200))
pm.test("Has token", () => pm.expect(res.data).to.have.property('token'))

In all other requests, set Authorization header to: Bearer {{jwt_token}}
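Server-side, the counterpart of that Bearer header is auth.middleware.js. The sketch below shows the HS256 check the middleware performs using only node:crypto; in the real codebase you would verify with a maintained library such as jsonwebtoken or @fastify/jwt rather than hand-rolling, so treat this as an illustration of what "verify JWT, attach req.user" means.

```javascript
import crypto from 'node:crypto'

const b64url = (buf) => buf.toString('base64url')

// Illustrative HS256 verification: recompute the signature, compare in
// constant time, then check expiry. A real middleware uses a JWT library.
export function verifyJwt(token, secret) {
  const [header, payload, signature] = String(token).split('.')
  if (!header || !payload || !signature) throw new Error('Malformed token')

  const expected = b64url(
    crypto.createHmac('sha256', secret).update(`${header}.${payload}`).digest()
  )
  const a = Buffer.from(signature)
  const b = Buffer.from(expected)
  if (a.length !== b.length || !crypto.timingSafeEqual(a, b)) {
    throw new Error('Bad signature')
  }

  const claims = JSON.parse(Buffer.from(payload, 'base64url').toString())
  if (claims.exp && claims.exp < Date.now() / 1000) throw new Error('Token expired')
  return claims
}

// Fastify preHandler sketch: read the Bearer token, attach req.user
export function authMiddleware(req, reply, done) {
  const header = req.headers.authorization || ''
  const token = header.startsWith('Bearer ') ? header.slice(7) : null
  try {
    if (!token) throw new Error('Missing token')
    const claims = verifyJwt(token, process.env.JWT_SECRET)
    req.user = { id: claims.sub, email: claims.email }
    done()
  } catch (err) {
    reply.status(401).send({
      success: false,
      error: { code: 'UNAUTHORIZED', message: err.message }
    })
  }
}
```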

Standard response format — every endpoint must follow this
Success (200)
{ "success": true, "data": { // payload here }, "message": "Order created" }
Error (4xx / 5xx)
{ "success": false, "error": { "code": "VALIDATION_ERROR", "message": "Phone is required" } }
All API endpoints
Method | Endpoint | Auth | Who calls it
POST | /api/auth/register | None | Customer signup
POST | /api/auth/login | None | Customer login, returns JWT
POST | /api/auth/refresh | Refresh token | Renew expired JWT
POST | /api/auth/logout | JWT | Invalidate Redis session
GET | /api/search?q=&cod=&category=&skin_type=&page= | None | Product search
GET | /api/products/:id | None | Single product from WC
POST | /api/orders | JWT | Create order, COD check runs here
GET | /api/orders | JWT | Customer's order history
GET | /api/orders/:id | JWT | Single order + tracking status
POST | /api/payments/intent | JWT | Create PayMongo payment intent
POST | /api/payments/webhook | Signature | PayMongo webhook — no JWT
POST | /api/skin/analyze | Optional JWT | Submit quiz, get recommendations
GET | /api/skin/routine | JWT | Get saved skin routine
POST | /api/upload/presign | JWT | Get S3 pre-signed upload URL
GET | /health | None | Check Postgres + Redis + Typesense status
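One endpoint in this table, POST /api/upload/presign, has no code elsewhere in the handbook. Here is a sketch of the lib/s3.js helper behind it, assuming AWS SDK v3 (@aws-sdk/client-s3 and @aws-sdk/s3-request-presigner are installed); the uploads/ key layout is our assumption:

```javascript
// lib/s3.js sketch: getPresignedUploadUrl is the name used in the folder map.
import { randomUUID } from 'node:crypto'

// Pure helper: build a collision-free S3 key, keeping only safe characters.
export function makeObjectKey(userId, filename) {
  const safe = String(filename).toLowerCase().replace(/[^a-z0-9._-]/g, '-')
  return `uploads/${userId}/${randomUUID()}-${safe}`
}

// Presign sketch: the client PUTs the file straight to S3 with this URL,
// so large uploads never pass through the Node.js service.
export async function getPresignedUploadUrl(userId, filename, contentType) {
  const { S3Client, PutObjectCommand } = await import('@aws-sdk/client-s3')
  const { getSignedUrl } = await import('@aws-sdk/s3-request-presigner')

  const s3 = new S3Client({ region: process.env.AWS_REGION })
  const key = makeObjectKey(userId, filename)
  const command = new PutObjectCommand({
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: key,
    ContentType: contentType
  })

  // URL is valid for 5 minutes
  const url = await getSignedUrl(s3, command, { expiresIn: 300 })
  return { url, key }
}
```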
// chapter 10 — phase 2

Moving off WordPress

Phase 2 migrates the web frontend from WooCommerce to our Node.js API — one section at a time. The website never goes down.

Migration order
Month 4
AI skin page — first to move
The skin analyzer page has zero WooCommerce dependency. It calls our Node.js API directly. First full page running on our backend with WP still live everywhere else.
Month 5
Search results page
Replace WordPress search with JavaScript calling /api/search. Search box looks identical. Results come from Typesense via Node.js. WooCommerce product pages still linked.
Month 6
Checkout and orders
Replace WooCommerce checkout with our checkout flow calling /api/orders and /api/payments. Most complex migration. Requires full QA before switching.
Month 7–8
Product catalog to Postgres
Migrate products from WooCommerce DB into Postgres. Node.js stops calling WC REST API — reads from its own database. Vendor management moves to our custom module.
Month 9–10
Decommission WordPress
Blog to headless CMS (Contentful or Sanity). WordPress subscription cancelled. All traffic from Cloudflare → Node.js on Railway or AWS.
// chapter 11 — phase 2

COD &
payments

COD will be 50–70% of Zenska orders. The validation logic must be bulletproof and server-enforced from day one.

COD validation must run server-side in Node.js. Never trust the frontend to enforce the ₱2,000 limit or the COD eligibility tag. Anyone can modify frontend code.
COD eligibility function
src/modules/orders/orders.service.js
export function checkCODEligibility({ cartItems, cartTotal, user }) {
  // Rule 1: Total must not exceed ₱2,000
  if (cartTotal > 2000) {
    return {
      eligible: false,
      reason: 'COD_LIMIT_EXCEEDED',
      message: `Cart total ₱${cartTotal} exceeds ₱2,000 COD limit`
    }
  }

  // Rule 2: Every item must be tagged COD eligible by vendor
  const blocked = cartItems.filter(item => !item.cod_eligible)
  if (blocked.length > 0) {
    return {
      eligible: false,
      reason: 'ITEMS_NOT_COD_ELIGIBLE',
      items: blocked.map(i => i.name)
    }
  }

  // Rule 3: New accounts (less than 7 days old) limited to ₱1,000
  const accountAgeDays = (Date.now() - new Date(user.createdAt)) / 86400000
  if (accountAgeDays < 7 && cartTotal > 1000) {
    return {
      eligible: false,
      reason: 'NEW_ACCOUNT_COD_LIMIT',
      message: 'New accounts are limited to ₱1,000 for COD orders'
    }
  }

  return { eligible: true }
}
Payment flows
COD — Cash on Delivery
  • Run COD eligibility check server-side
  • Create order in Postgres, status = PENDING
  • Call J&T / Ninja Van API for pickup booking
  • Send SMS to customer via Semaphore
  • Logistics partner collects cash on delivery
  • Settlement runs after 3PL confirms delivery
GCash / Card — Online
  • Call PayMongo API to create payment intent
  • Return checkout URL to frontend
  • Customer completes on PayMongo hosted page
  • PayMongo calls our /api/payments/webhook
  • Webhook verifies signature → updates order to CONFIRMED
  • Send SMS + email confirmation to customer
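The "webhook verifies signature" step above is the part that must never be skipped, because the webhook endpoint has no JWT. A sketch of the check; the comma-separated `t`/`li`/`te` header format follows PayMongo's published webhook scheme, but verify the details against their current documentation before relying on this:

```javascript
import crypto from 'node:crypto'

// Verify the Paymongo-Signature header: HMAC-SHA256 of "<t>.<rawBody>"
// with the webhook secret, compared against the li (live) or te (test) value.
function verifyPaymongoSignature(rawBody, signatureHeader, secret, live = true) {
  const parts = Object.fromEntries(
    signatureHeader.split(',').map(pair => pair.split('=').map(s => s.trim()))
  )
  const expected = live ? parts.li : parts.te
  if (!parts.t || !expected) return false

  const computed = crypto
    .createHmac('sha256', secret)
    .update(`${parts.t}.${rawBody}`)
    .digest('hex')

  // Constant-time compare; lengths must match first
  if (expected.length !== computed.length) return false
  return crypto.timingSafeEqual(Buffer.from(computed), Buffer.from(expected))
}

// In the webhook route: reject with 400 before touching any order state
// if the signature does not verify. Never parse the body first and trust it.
```

Note the check must run against the raw request body bytes, not a re-serialized JSON object, or the HMAC will not match.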
// chapter 12 — infrastructure

Why Railway,
not AWS yet

A deliberate decision based on your team size and current stage. Here's the complete reasoning.

Railway monthly cost
$20–50
All services, predictable
AWS minimum viable
$200–400
RDS + ECS + ElastiCache + NAT
Dev time lost to AWS ops
30–40%
Of a senior dev, weekly
Comparison
Area | Railway (now) | AWS (later)
Setup time | 30 minutes | 3–5 days
Monthly cost | $20–50 | $200–400+
DevOps skill needed | Near zero | High, dedicated person
PostgreSQL | Full Postgres 15, daily backups | RDS: read replicas, PITR, Multi-AZ
Redis | Single node, fine for sessions/cache | ElastiCache: cluster mode, Sentinel
Uptime SLA | 99.5% (~44 hrs downtime/year) | 99.99% (~52 min downtime/year)
Right for Zenska when | Now → Month 9 | Month 10+ / $150+ bill
// chapter 13 — infrastructure

AWS migration
when you're ready

Two focused weekends. One senior developer. Zero app code changes. Only environment variables change.

Trigger conditions — migrate when you hit these
Hard triggers — migrate now
  • Railway bill consistently above $150/month
  • Database exceeds 8GB
  • API p95 response time above 400ms
  • You need read replicas for analytics
Soft triggers — plan the migration
  • GMV consistently above ₱5M/month
  • You hire a DevOps or senior infra engineer
  • You need compliance or data residency in PH
The migration is 2 connection string changes
Because you use Prisma and ioredis, neither library cares where the database lives. Change the URL, redeploy. Your application code is identical.
Railway Variables → AWS Parameter Store
# Before (Railway): these 2 lines change
DATABASE_URL="postgresql://postgres:pass@containers.railway.app:5894/railway"
REDIS_URL="redis://default:pass@containers.railway.app:6380"

# After (AWS): only these 2 lines are different
DATABASE_URL="postgresql://zenska:pass@zenska.abc123.ap-southeast-1.rds.amazonaws.com:5432/zenska"
REDIS_URL="redis://zenska-cache.abc123.0001.apse1.cache.amazonaws.com:6379"

# Everything else (TYPESENSE, PAYMONGO, JWT_SECRET, S3) stays identical
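A cutover like this is safest with a boot-time guard that fails fast when a variable is missing, so a bad deploy dies at startup rather than at the first request. A sketch; the exact list of required names is an assumption and should match your Railway Variables tab:

```javascript
// Names listed here are assumptions; keep this in sync with the env vars
// the app actually reads.
const REQUIRED_VARS = ['DATABASE_URL', 'REDIS_URL', 'JWT_SECRET']

// Throw with a clear message if any required variable is missing or empty.
function assertEnv(env, required = REQUIRED_VARS) {
  const missing = required.filter(key => !env[key])
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`)
  }
}

// First line of src/app.js: assertEnv(process.env)
```

On migration weekend, a typo in the new RDS URL then shows up as an immediate crash in the deploy logs instead of a slow trickle of 500s.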
Database migration — one command
terminal
# Dump Railway Postgres and restore into RDS in one pipe
# Takes under 5 minutes for Zenska's early data size
pg_dump $(railway variables get DATABASE_URL) \
  | psql "postgresql://zenska:pass@your-rds-endpoint:5432/zenska"

# Run Prisma migrations on RDS to ensure schema is current
DATABASE_URL="postgresql://zenska:pass@your-rds:5432/zenska" \
  npx prisma migrate deploy
Migration weekend plan
Weekend 1 · Sat
Provision AWS
Create VPC, RDS PostgreSQL (db.t3.medium), ElastiCache Redis (cache.t3.micro), security groups. Use AWS Console. Railway still live as production.
Weekend 1 · Sun
Migrate data
Run pg_dump → pg_restore. Verify schema. Run Prisma migrations on RDS. Test database reads and writes. Railway still live.
Weekend 2 · Sat
Node.js to ECS Fargate
Docker image to ECR, ECS cluster, task definition, Application Load Balancer. Update env vars to RDS and ElastiCache. Run both Railway and ECS in parallel — test ECS thoroughly.
Weekend 2 · Sun
DNS cutover — 60 seconds
Update Cloudflare: point API domain to AWS ALB. Monitor 2 hours. Instant rollback = flip DNS back to Railway. After 7 stable days, cancel Railway.
// chapter 14 — team

Roles &
non-negotiable rules

Four developers, clear ownership. No ambiguity about who builds what.

Team ownership
Senior Dev 1 — Backend lead
  • Node.js architecture, Fastify setup, Railway CI/CD
  • PostgreSQL schema — owns all Prisma migrations
  • Auth system (JWT, OTP, refresh tokens)
  • Order service, COD validation, multi-vendor split
  • PayMongo integration, webhook handling
  • Reviews all backend PRs before merge
Senior Dev 2 — Frontend + mobile lead
  • WordPress JS integration layer (search, skin, checkout)
  • React Native architecture and component library (Phase 3)
  • Typesense search UI integration
  • Skin analyzer quiz flow UI
  • Reviews all frontend / mobile PRs before merge
Junior Dev 1 — WooCommerce + QA
  • WooCommerce product setup, Dokan COD tagging
  • Vendor onboarding flow, product CSV import
  • Owns and maintains Postman collection
  • QA testing — all flows 1–5 from flow document
  • S3 upload integration (supervised by S1)
Junior Dev 2 — AI service + integrations
  • AI skin service — separate Railway deployment
  • Ingredient compatibility table seeding
  • J&T and Ninja Van logistics API integration
  • Twilio/Semaphore SMS, Resend email notifications
  • QA testing — flows 6–10, unit tests for AI logic
Non-negotiable rules
!No junior deploys to production alone. All production deployments require a senior PR review and merge.
!Never commit secrets, passwords, or API keys to GitHub. Railway Variables tab only.
!Every new API endpoint needs a Postman entry before the PR is ready for review.
!WordPress / WooCommerce is read-only. No code ever writes to WooCommerce directly.
!All API inputs must be validated with Zod before any business logic runs.
!Never create a new PrismaClient, Redis client, or Typesense client in a module. Import from lib/ only.
Seniors run a 30-minute daily sync with their paired junior. Not optional.
Check Railway spend dashboard every Monday. Alert team if projected bill exceeds $100/month.
Build and check the /health endpoint after every deploy. All three lights must be green.
// security & reliability — new chapter

Error tracking &
observability

Right now your production monitoring plan is "wait for customers to complain." That is not a plan. This chapter fixes it. Total setup time: one day. Cost: free.

!
This must be done before the first real order goes live. A bug in your PayMongo webhook could silently fail for 12 hours. A COD order not confirmed by SMS could mean a missed delivery. In a trust-first marketplace, silent failures are existential.
The three tools you need — all free at Zenska's stage
Sentry
Free
5k errors/month · error tracking
UptimeRobot
Free
50 monitors · 5min checks
Railway metrics
Built-in
CPU, memory, request logs
Sentry — error tracking setup
Sentry — catches every unhandled error in production sentry.io · free tier
1
Go to sentry.io → create free account → New Project → Node.js → copy your DSN key
2
Add to Railway Variables: SENTRY_DSN=https://abc123@o123.ingest.sentry.io/456
3
Install and initialize in your app
terminal
npm install @sentry/node @sentry/profiling-node
src/app.js — add at the very top, before anything else
import * as Sentry from '@sentry/node'
import { nodeProfilingIntegration } from '@sentry/profiling-node'

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  integrations: [nodeProfilingIntegration()],
  tracesSampleRate: 1.0,
  profilesSampleRate: 1.0,
})

// Caution: ESM hoists imports, so a bare `import Fastify from 'fastify'`
// here would actually run BEFORE Sentry.init(). Put the Sentry setup in its
// own file (e.g. src/instrument.js) and import that file first in app.js,
// as Sentry's Node ESM guidance recommends.
import Fastify from 'fastify'
4
Capture errors in your global error handler
src/middleware/errorHandler.js
import * as Sentry from '@sentry/node'

export function errorHandler(error, req, reply) {
  // Always log to Sentry in production
  if (process.env.NODE_ENV === 'production') {
    Sentry.captureException(error, {
      user: { id: req.user?.id, email: req.user?.email },
      extra: { url: req.url, method: req.method, body: req.body }
    })
  }

  // 4xx = client error (validation, auth): don't alert
  if (error.statusCode >= 400 && error.statusCode < 500) {
    return reply.status(error.statusCode).send({
      success: false,
      error: { code: error.code || 'CLIENT_ERROR', message: error.message }
    })
  }

  // 5xx = server error: Sentry already captured above
  console.error('[SERVER ERROR]', error)
  reply.status(500).send({
    success: false,
    error: {
      code: 'INTERNAL_ERROR',
      message: 'Something went wrong. Our team has been notified.'
    }
  })
}
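One judgment call in the handler above is which errors deserve a Sentry event at all: deliberate 4xx responses are noise, everything else is signal. That rule can be pinned down in a single predicate so the handler and any background jobs classify errors the same way. A sketch; the function name is illustrative:

```javascript
// Report anything that is not a deliberate 4xx client response.
// An error with no statusCode at all is treated as an unexpected crash.
function shouldReportToSentry(error) {
  const status = error.statusCode ?? 500
  return status < 400 || status >= 500
}

// In errorHandler:
// if (process.env.NODE_ENV === 'production' && shouldReportToSentry(error)) {
//   Sentry.captureException(error, { ... })
// }
```

This keeps the free 5k-errors/month Sentry quota for real failures instead of validation noise.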
5
Manually capture important business events — not just crashes
examples — capturing business-critical events in Sentry
import * as Sentry from '@sentry/node'

// PayMongo webhook received an unknown event type
Sentry.captureMessage(`Unknown PayMongo event: ${event.type}`, 'warning')

// Typesense sync failed
Sentry.captureException(new Error('Typesense sync failed'), {
  extra: { productCount: docs.length, error: err.message }
})

// COD order with suspicious signal.
// Note: captureMessage takes ONE second argument, either a level string or
// a context object, so level + extra must go together in a context object.
Sentry.captureMessage(`COD fraud signal: ${signal.reason}`, {
  level: 'warning',
  extra: { userId, cartTotal, address }
})

// Wrap a slow operation in a performance span.
// Note: startSpan returns the callback's result, not a span object.
await Sentry.startSpan({ name: 'wc-product-sync' }, async () => {
  await syncProductsToTypesense()
})
UptimeRobot — uptime monitoring
Setup in 10 minutes — monitors every 5 minutes, alerts via email + SMS
  • 1
    Go to uptimerobot.com → free account → Add New Monitor
  • 2
    Monitor type: HTTP(s) → URL: https://your-api.railway.app/health → interval: 5 minutes
  • 3
    Add alert contacts: email addresses for both seniors. Optional: add a Slack webhook for the dev channel.
  • 4
    Add a second monitor for the main website: https://zenska.ph
  • 5
    Add a third monitor for the AI service: https://your-ai-service.railway.app/health
When any monitor goes down, UptimeRobot emails both seniors within 5 minutes. When it recovers, you get a second email. Free tier covers 50 monitors with 5-minute intervals — more than enough for Zenska at launch.
What to alert on — the priority list
Event | Severity | Who gets alerted | Response time
/health returns non-200 | Critical | Both seniors + SMS | Immediate
PayMongo webhook fails to process | Critical | Senior 1 + Sentry alert | < 15 min
Typesense sync fails 2x in a row | High | Senior 1 via Sentry | < 1 hour
COD fraud signal triggered | High | Senior 1 via Sentry | < 2 hours
500 errors spike above 10/minute | High | Both seniors via Sentry | < 30 min
Redis connection lost | High | Senior 1 via Sentry | < 1 hour
Order created but no SMS sent | Medium | Sentry daily digest | Same day
Slow API response (> 2 seconds) | Low | Sentry weekly review | Next sprint
Structured logging — so Railway logs are searchable
src/lib/logger.js — add this, use instead of console.log
// Use Fastify's built-in pino logger (already included).
// Just configure it to output structured JSON in production.
import Fastify from 'fastify'

const app = Fastify({
  logger: {
    level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
    serializers: {
      req(req) {
        return { method: req.method, url: req.url, userId: req.user?.id }
      }
    }
  }
})

// In route handlers, use req.log instead of console.log
app.get('/api/orders/:id', async (req, reply) => {
  req.log.info({ orderId: req.params.id, userId: req.user.id }, 'Order requested')
  // Railway stores these as searchable JSON: filter by userId, orderId, etc.
})
// security & reliability — new chapter

COD fraud detection
& management

COD fraud will hit Zenska within the first two months of live orders. Failed deliveries, fake addresses, and repeat refusers cost real money. This system catches them before they accumulate.

Why this matters financially. At ₱1,500 average order value, a single COD order refused on delivery costs you the logistics pickup fee (~₱80), return logistics (~₱80), plus any settlement already advanced to the vendor. Ten fraud orders a week is ₱1,600+ in logistics fees alone before you notice. This system is not optional.
The fraud signals table — build this in Week 1
prisma/schema.prisma — add these models
model FraudSignal {
  id          String    @id @default(cuid())
  userId      String?
  phone       String?
  addressHash String?   // hashed delivery address for privacy
  ipAddress   String?
  signalType  String    // COD_REFUSED | FAKE_ADDRESS | VELOCITY | NEW_ACCOUNT_LIMIT
  orderId     String?
  riskScore   Int       @default(0)
  note        String?
  createdAt   DateTime  @default(now())
  resolvedAt  DateTime?
  resolvedBy  String?

  @@index([phone])
  @@index([addressHash])
  @@index([userId])
}

model BlockedEntity {
  id          String    @id @default(cuid())
  entityType  String    // PHONE | ADDRESS_HASH | USER_ID | IP
  entityValue String
  reason      String
  blockedBy   String    // admin user ID who blocked
  blockedAt   DateTime  @default(now())
  expiresAt   DateTime?
  isActive    Boolean   @default(true)

  @@unique([entityType, entityValue])
  @@index([entityType, entityValue, isActive])
}
Risk scoring engine — runs on every COD checkout
src/modules/orders/fraud.service.js
import crypto from 'crypto'
import * as Sentry from '@sentry/node'
import prisma from '../../lib/prisma.js'
import redis from '../../lib/redis.js'

// Hash address for privacy (we compare hashes, not raw addresses)
function hashAddress(address) {
  return crypto.createHash('sha256')
    .update(address.toLowerCase().replace(/\s+/g, ' ').trim())
    .digest('hex')
}

export async function calculateFraudRisk({ userId, phone, address, cartTotal, ip }) {
  let riskScore = 0
  const signals = []
  const addressHash = hashAddress(address)

  // CHECK 1: Is this entity already blocked?
  const blocked = await prisma.blockedEntity.findFirst({
    where: {
      isActive: true,
      OR: [
        { entityType: 'PHONE', entityValue: phone },
        { entityType: 'ADDRESS_HASH', entityValue: addressHash },
        { entityType: 'USER_ID', entityValue: userId },
        { entityType: 'IP', entityValue: ip },
      ]
    }
  })
  if (blocked) {
    return { blocked: true, reason: blocked.reason, riskScore: 100 }
  }

  // CHECK 2: COD refusal history on this phone (last 30 days)
  const phoneRefusals = await prisma.fraudSignal.count({
    where: {
      phone,
      signalType: 'COD_REFUSED',
      createdAt: { gte: new Date(Date.now() - 30 * 86400000) }
    }
  })
  if (phoneRefusals >= 3) {
    riskScore += 50; signals.push('3+ COD refusals this month')
  } else if (phoneRefusals >= 1) {
    riskScore += 20; signals.push(`${phoneRefusals} COD refusal(s)`)
  }

  // CHECK 3: COD refusal history on this address
  const addressRefusals = await prisma.fraudSignal.count({
    where: {
      addressHash,
      signalType: 'COD_REFUSED',
      createdAt: { gte: new Date(Date.now() - 30 * 86400000) }
    }
  })
  if (addressRefusals >= 2) {
    riskScore += 40; signals.push('Multiple refusals at this address')
  }

  // CHECK 4: Velocity: too many COD orders today from the same phone
  const velocityKey = `codvelocity:${phone}:${new Date().toDateString()}`
  const todayOrders = parseInt(await redis.get(velocityKey) || '0')
  if (todayOrders >= 3) {
    riskScore += 35; signals.push('3+ COD orders today same phone')
  } else if (todayOrders >= 2) {
    riskScore += 15; signals.push('2 COD orders today same phone')
  }

  // CHECK 5: New account + high cart value
  const user = await prisma.user.findUnique({ where: { id: userId } })
  const accountAgeDays = (Date.now() - new Date(user.createdAt)) / 86400000
  if (accountAgeDays < 3 && cartTotal > 800) {
    riskScore += 30; signals.push('Account under 3 days old, high cart value')
  }

  // CHECK 6: No prior successful orders
  const successfulOrders = await prisma.order.count({
    where: { userId, status: 'DELIVERED' }
  })
  if (successfulOrders === 0 && cartTotal > 1200) {
    riskScore += 15; signals.push('No prior successful orders, high value')
  }

  // DECISION
  const decision =
    riskScore >= 70 ? 'BLOCK' :
    riskScore >= 40 ? 'REVIEW' :
    riskScore >= 20 ? 'FLAG' : 'ALLOW'

  if (decision !== 'ALLOW') {
    Sentry.captureMessage(`COD fraud signal: ${decision}`, {
      level: 'warning',
      extra: { userId, phone, cartTotal, riskScore, signals }
    })
  }

  return { blocked: decision === 'BLOCK', decision, riskScore, signals }
}
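CHECK 4 reads a per-day velocity counter, but nothing shown above writes it. A sketch of the increment that would run right after a COD order is successfully created; the key format is copied from the check, while the 24-hour expiry is an assumption:

```javascript
// Increment the per-phone daily COD counter that the velocity check reads.
// `redis` is the shared ioredis client from lib/ (passed in here so the
// function is testable).
async function bumpCODVelocity(redis, phone) {
  const key = `codvelocity:${phone}:${new Date().toDateString()}`
  const count = await redis.incr(key)
  if (count === 1) {
    await redis.expire(key, 86400) // first order today: clean up after 24h
  }
  return count
}
```

Call it in the order service after the Postgres insert commits, so an order that fails to create never inflates the counter.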
Recording fraud events from logistics webhooks
src/modules/orders/fraud.service.js — record when 3PL reports refused delivery
// Called from the logistics webhook handler when status = 'delivery_failed'
export async function recordCODRefusal(orderId) {
  const order = await prisma.order.findUnique({
    where: { id: orderId },
    include: { user: true }
  })
  if (!order || order.paymentMethod !== 'COD') return

  // Log the fraud signal
  await prisma.fraudSignal.create({
    data: {
      userId: order.userId,
      phone: order.user.phone,
      addressHash: hashAddress(order.deliveryAddress),
      signalType: 'COD_REFUSED',
      orderId: orderId,
      riskScore: 25,
      note: 'COD delivery refused by recipient'
    }
  })

  // Update the rolling refusal counter (expires in 7 days)
  const countKey = `codrefusals:${order.user.phone}`
  await redis.incr(countKey)
  await redis.expire(countKey, 7 * 86400)

  // Auto-block if this phone has 3+ refusals this month
  const recentRefusals = await prisma.fraudSignal.count({
    where: {
      phone: order.user.phone,
      signalType: 'COD_REFUSED',
      createdAt: { gte: new Date(Date.now() - 30 * 86400000) }
    }
  })
  if (recentRefusals >= 3) {
    await prisma.blockedEntity.upsert({
      where: {
        entityType_entityValue: { entityType: 'PHONE', entityValue: order.user.phone }
      },
      create: {
        entityType: 'PHONE',
        entityValue: order.user.phone,
        reason: 'Auto-blocked: 3+ COD refusals in 30 days',
        blockedBy: 'SYSTEM'
      },
      update: { isActive: true, reason: 'Auto-blocked: 3+ COD refusals in 30 days' }
    })
    Sentry.captureMessage(`Auto-blocked phone ${order.user.phone} after 3 refusals`, 'warning')
  }
}
Plugging fraud check into the order flow
src/modules/orders/orders.routes.js — updated POST /api/orders
import { checkCODEligibility } from './orders.service.js'
import { calculateFraudRisk } from './fraud.service.js'

fastify.post('/api/orders', {
  preHandler: [authMiddleware, validateMiddleware(orderSchema)]
}, async (req, reply) => {
  const { cartItems, cartTotal, address, paymentMethod } = req.body

  if (paymentMethod === 'COD') {
    // Step 1: COD eligibility (existing logic)
    const eligibility = checkCODEligibility({ cartItems, cartTotal, user: req.user })
    if (!eligibility.eligible) {
      return reply.status(400).send({
        success: false,
        error: { code: eligibility.reason, message: eligibility.message }
      })
    }

    // Step 2: Fraud risk check (new)
    const fraud = await calculateFraudRisk({
      userId: req.user.id,
      phone: req.user.phone,
      address,
      cartTotal,
      ip: req.ip
    })
    if (fraud.blocked) {
      return reply.status(403).send({
        success: false,
        error: {
          code: 'COD_BLOCKED',
          message: 'COD is not available for this order. Please use online payment.'
        }
      })
    }
    if (fraud.decision === 'REVIEW') {
      // Allow the order but flag it for manual review; don't block the customer
      req.fraudFlag = { riskScore: fraud.riskScore, signals: fraud.signals }
    }
  }

  // Step 3: Create order (existing logic)
  // ...
})
Admin fraud dashboard — what to show
Essential views for your admin panel
  • Fraud signals feed — real-time list of new signals with risk score, entity, and signal type. Seniors review daily.
  • Blocked entities list — all currently blocked phones, addresses, users. With unblock button requiring reason.
  • Review queue — orders with decision=REVIEW awaiting manual approval or cancellation. Target: zero items in queue by end of each business day.
  • Refusal rate by address zone — barangay-level refusal heatmap. High-refusal zones get automatic lower COD limits.
  • Weekly fraud report — total blocked orders, estimated losses prevented, false positive rate (legitimate orders flagged).
COD fraud quick-reference rules
Risk score | Decision | What happens | Customer sees
0–19 | ALLOW | Order proceeds normally | Normal checkout
20–39 | FLAG | Order created, logged to Sentry | Normal checkout (not told)
40–69 | REVIEW | Order held, senior reviews within 2 hours | "Order is being verified"
70+ | BLOCK | COD blocked, online payment offered | "COD unavailable, use online payment"
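The table rows map directly onto the thresholds the risk engine already uses (70 / 40 / 20). Keeping the mapping as one pure function makes it trivially unit-testable and gives the admin dashboard the same labels:

```javascript
// Same cutoffs as the decision step in calculateFraudRisk.
function codDecision(riskScore) {
  if (riskScore >= 70) return 'BLOCK'
  if (riskScore >= 40) return 'REVIEW'
  if (riskScore >= 20) return 'FLAG'
  return 'ALLOW'
}
```

If the thresholds ever change, they change in exactly one place and the quick-reference table is regenerated from it.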
// vendor — new chapter

Vendor dashboard
essential features

The vendor dashboard is where Zenska's trust promise is either kept or broken. If vendors cannot see their orders clearly, get paid transparently, and manage their store without confusion — they leave. These are the features that matter.

The 6 screens every vendor needs — nothing more at launch
1. Dashboard home — the first screen vendors see
  • Today's stats: orders today, revenue today, pending pickups, items to ship
  • Pending actions: orders awaiting confirmation, low stock alerts, unread messages
  • Recent orders: last 5 orders with status pill (Pending / Confirmed / Shipped / Delivered)
  • Settlement due: amount expected in next payout cycle with date
GET /api/vendor/dashboard — the data this screen needs
const today = new Date()
today.setHours(0, 0, 0, 0)

const [todayOrders, pendingOrders, pendingSettlement] = await Promise.all([
  prisma.order.count({ where: { vendorId, createdAt: { gte: today } } }),
  prisma.order.count({ where: { vendorId, status: 'CONFIRMED' } }),
  prisma.settlement.aggregate({
    where: { vendorId, status: 'pending' },
    _sum: { netAmount: true }
  })
])

return {
  todayOrders,
  pendingOrders,
  pendingSettlement: pendingSettlement._sum.netAmount || 0
}
2. Orders — the most-used screen
  • Order list with filters: All / Pending / Confirmed / Shipped / Delivered / Cancelled
  • Order detail: customer name (not phone for privacy), items ordered, full delivery address, payment method (COD/GCash/Card), order total after commission
  • One-click actions: Confirm Order → triggers pickup booking with 3PL + SMS to customer. Mark as Packed. Request pickup.
  • Tracking number: shows 3PL tracking number once pickup is booked. Clickable link to courier tracking page.
  • Vendors must NOT see customer phone numbers directly — only Zenska support can share this if needed.
3. Products — manage listings
  • Product list with live/draft/out-of-stock status. Search and category filter.
  • COD toggle per product — vendor ticks whether each product is COD eligible. This feeds into our COD validation logic. Default: off. Vendor opts in per product.
  • Stock management — update stock count. Alert badge when stock < 5 units.
  • Image upload — drag-and-drop to S3 via pre-signed URL. Max 5 images per product. Auto-optimized via Cloudflare Images.
  • Bulk CSV upload — for vendors with large catalogs. Template downloadable. Junior Dev 1 builds this in Sprint 2.
4. Payouts & settlements — the trust-critical screen
  • Settlement summary: current cycle total, commission deducted, gateway fees (at actual cost, no markup), net payout amount
  • Settlement history: every past payout with date, amount, and breakdown. Downloadable as PDF/CSV for BIR.
  • Per-order breakdown: expandable row showing each order's gross → commission (8%) → net for that order
  • Next payout date prominently displayed. Settlement cycle: T+7 after delivery confirmed.
  • No payout is released until the delivery is confirmed by the 3PL webhook. This is automatic — no manual step.
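The settlement arithmetic behind this screen is small enough to pin down in one place: gross, minus the flat 8% commission from chapter 1, minus the gateway fee at actual cost. A sketch, assuming peso amounts and rounding to centavos (the rounding rule is an assumption, not spec):

```javascript
// gross -> 8% commission -> gateway fee at actual cost -> net payout.
// COD orders pass gatewayFee = 0; online orders pass PayMongo's actual fee.
function computeSettlement({ grossAmount, gatewayFee = 0 }) {
  const commission = Math.round(grossAmount * 0.08 * 100) / 100
  const netAmount = Math.round((grossAmount - commission - gatewayFee) * 100) / 100
  return { grossAmount, commission, gatewayFee, netAmount }
}

// computeSettlement({ grossAmount: 1500 })
// -> { grossAmount: 1500, commission: 120, gatewayFee: 0, netAmount: 1380 }
```

Having the per-order breakdown produced by one function guarantees the expandable row on the payouts screen and the PDF/CSV export can never disagree.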
GET /api/vendor/settlements — payout history endpoint
const settlements = await prisma.settlement.findMany({
  where: { vendorId },
  include: { order: { include: { items: true } } },
  orderBy: { createdAt: 'desc' }
})

return settlements.map(s => ({
  id: s.id,
  orderId: s.orderId,
  grossAmount: s.grossAmount,
  commission: s.commission,   // always 8%
  gatewayFee: s.gatewayFee,   // actual cost, no markup
  netAmount: s.netAmount,
  status: s.status,           // pending | paid
  settledAt: s.settledAt
}))
5. Analytics — vendor performance
  • Revenue chart: daily/weekly/monthly gross revenue. Line chart. 30 and 90 day views.
  • Top products: best-selling products by order count and by revenue. Helps vendors know what to restock.
  • Return rate: % of orders returned or refused. Colour-coded: green <5%, amber 5–10%, red >10%.
  • COD vs online split: pie chart of payment methods. Helps vendors decide COD eligibility strategy.
  • Keep it simple at launch. Vendors don't need 40 charts. They need 4 clear numbers.
6. Store settings — one-time setup
  • Store profile: store name, description, logo upload (S3), banner image
  • Bank account: bank name, account number, account name for settlement payout. Required before first payout. Verified by Zenska admin.
  • Business docs: view verification status of submitted documents. Upload new docs if requested by admin.
  • Notification preferences: which SMS/email alerts to receive (new order, low stock, settlement, etc.)
  • Vendors cannot change their own verification status or bank account without admin approval. Fraud prevention.
Vendor dashboard API endpoints
Method | Endpoint | What it returns
GET | /api/vendor/dashboard | Today's stats, pending actions, recent orders, next settlement
GET | /api/vendor/orders?status=&page= | Paginated order list with filters
GET | /api/vendor/orders/:id | Full order detail with items, address, tracking
PATCH | /api/vendor/orders/:id/confirm | Confirm order → triggers 3PL pickup booking + customer SMS
PATCH | /api/vendor/orders/:id/packed | Mark as packed and ready for pickup
GET | /api/vendor/products?page=&status= | Vendor's product listings from WC/Postgres
PATCH | /api/vendor/products/:id/cod | Toggle COD eligibility for a product
PATCH | /api/vendor/products/:id/stock | Update stock count
GET | /api/vendor/settlements | Full settlement history with per-order breakdown
GET | /api/vendor/analytics?period=30d | Revenue chart data, top products, return rate
PATCH | /api/vendor/settings | Update store profile, notification prefs
POST | /api/vendor/settings/bank | Submit bank account for admin verification
What vendors must NOT be able to do — the permission guardrails
!Vendors cannot see other vendors' orders, products, or analytics. Every query must filter by vendorId = req.vendor.id.
!Vendors cannot see customer phone numbers or full email. Show only first name + last initial and masked address.
!Vendors cannot change their own verification status, commission rate, or bank account without admin approval via a separate admin review flow.
!Vendors cannot upload files directly to S3. They must request a pre-signed URL from /api/upload/presign with their vendor ID in the key path.
The vendor auth middleware should attach req.vendor with verified status. Any vendor whose status is not active should get 403 on all write operations.
Settlement data is read-only for vendors. The settlement record is written only by the system when a 3PL delivery webhook fires, never by a vendor action.
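The PII guardrail above ("first name + last initial", no full email) is easiest to enforce through one serializer that every vendor-facing endpoint passes customer data through, rather than per-route. A sketch; the exact mask formats are a product decision, not spec:

```javascript
// Strip customer PII down to what vendors are allowed to see.
function maskCustomerForVendor({ firstName, lastName, email }) {
  const name = lastName ? `${firstName} ${lastName[0]}.` : firstName
  const maskedEmail = email
    ? `${email[0]}***@${email.split('@')[1]}`
    : null
  return { name, maskedEmail }
}

// maskCustomerForVendor({ firstName: 'Maria', lastName: 'Santos', email: 'maria@gmail.com' })
// -> { name: 'Maria S.', maskedEmail: 'm***@gmail.com' }
```

Because the order detail endpoint returns only this shape, a frontend bug can never leak a phone number or full email that the API never sent.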