Data Flow
This page describes how data moves through the SMACKZ platform, from user actions to analytics dashboards.
Primary Data Flow
Customer (browser/mobile)
    |
    v
Smackz-Websites / SMACKZ-MOBILE
    |
    | HTTPS (REST API calls)
    v
Yum Backend API (Fastify)
    |
    +-------> PostgreSQL (transactional data)
    |
    +-------> Redis (caching, sessions)
    |
    +-------> Cloudflare R2 (image uploads)
    |
    +-------> Redis Streams (event emission)
    |
    v
Lakehouse Writer
    |
    | PyArrow -> Parquet
    v
Cloudflare R2 (analytics bucket)
    |
    | DuckDB httpfs
    v
Query API / Metabase
    |
    v
Dashboards & Ad-hoc SQL
Request Flow: Placing an Order
- Customer browses the menu on the restaurant website or mobile app
- Frontend fetches menu data from GET /api/v1/restaurants/{id}/menus
- Customer adds items to cart via POST /api/v1/users/{userId}/restaurants/{id}/cart
- Customer places order via POST /api/v1/restaurants/{id}/orders
- Yum API validates the order, calculates totals, and applies offers/loyalty
- Yum API writes the order to PostgreSQL
- Yum API emits an order.created event to Redis Streams
- KDS-Web receives the order notification for kitchen display
- POS Adapter (if connected) syncs the order to Clover
- Lakehouse Writer consumes the event and writes order data as Parquet to R2
- Metabase dashboards reflect the new order in near-real-time (60-120s lag)
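The event-emission step above can be sketched in Python. The `build_xadd_fields` helper and its field names (`type`, `ts`, `payload`) are illustrative, not the platform's actual event schema; what is fixed is that Redis XADD takes a flat map of string fields, which is why the nested order payload has to be JSON-encoded:

```python
import json
import time

def build_xadd_fields(event_type: str, payload: dict) -> dict:
    """Flatten an event into the string field map Redis XADD expects.

    Hypothetical helper: the field names ("type", "ts", "payload") are
    illustrative, not the platform's actual event schema.
    """
    return {
        "type": event_type,
        # millisecond timestamp, stored as a string like every stream field
        "ts": str(int(time.time() * 1000)),
        # XADD fields are flat strings, so the nested payload is JSON-encoded
        "payload": json.dumps(payload, separators=(",", ":")),
    }

fields = build_xadd_fields(
    "order.created",
    {"orderId": "ord_123", "restaurantId": "rst_9", "total": 2450},
)
# A real producer would then call: redis_client.xadd("events", fields)
```

Keeping the payload as a single JSON field means the Lakehouse Writer can deserialize it without knowing every event type's shape up front.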
Authentication Flow
Mobile Customer                    Admin/Restaurant Owner
       |                                     |
   Phone OTP                          Email/Password
(Firebase Phone Auth)          (Firebase Identity Platform)
       |                                     |
Firebase ID Token                    Firebase ID Token
       |                                     |
       +------------------+------------------+
                          |
                          v
                   Yum Backend API
                          |
                Verify Firebase Token
     (platform or restaurant-specific Firebase)
                          |
                 Return user session
Key details:
- Mobile users authenticate via phone OTP through Firebase Phone Auth (project-level, no tenants)
- Admin users authenticate via email/password through Firebase Identity Platform
- The login_source: web header signals the backend to always use platform Firebase (critical for SuperAdmin)
- Each restaurant can have its own Firebase project for isolated user pools
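The token-verification routing rule above can be sketched as a small helper. `select_firebase_project` and the project identifiers are hypothetical names for illustration, and the header lookup is simplified:

```python
from typing import Optional

def select_firebase_project(headers: dict,
                            restaurant_project: Optional[str],
                            platform_project: str = "platform") -> str:
    """Pick which Firebase project should verify the incoming ID token.

    Illustrative sketch: "platform" stands in for the platform-level
    Firebase project ID, and header parsing is simplified.
    """
    # login_source: web forces platform Firebase -- SuperAdmin logins
    # must never be routed to a restaurant-specific project
    if headers.get("login_source") == "web":
        return platform_project
    # otherwise prefer the restaurant's isolated user pool when configured
    return restaurant_project or platform_project
```

For example, a mobile customer of a restaurant with its own Firebase project is verified against that project, while the same request with login_source: web always goes to platform Firebase.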
Image Upload Flow
Client                     Yum API                Cloudflare R2
  |                           |                         |
  |-- Request signed URL ---->|                         |
  |<-- Signed URL ------------|                         |
  |                           |                         |
  |-- PUT (direct upload) ----------------------------->|
  |<-- 200 OK ------------------------------------------|
  |                           |                         |
  |-- Confirm upload -------->|                         |
  |                           |-- Verify file exists -->|
  |                           |<-- OK ------------------|
  |                           |-- Update entity DB      |
  |<-- Public URL ------------|                         |
Images bypass the API server entirely during upload. The API only generates signed URLs and confirms uploads, reducing bandwidth costs and improving performance.
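The signed-URL step can be illustrated with a deliberately simplified signer. Real code would ask an S3-compatible SDK for a presigned PUT against R2's SigV4 auth; the HMAC scheme, signing key, and hostname below are stand-ins that only show the expiry-plus-signature idea:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SIGNING_KEY = b"demo-signing-key"  # stand-in; R2 actually uses SigV4 credentials

def make_signed_put_url(bucket: str, key: str,
                        expires_in: int = 300, now: float = None) -> str:
    """Toy signed-URL generator showing the expiry + signature idea.

    The hostname, scheme, and HMAC signature are illustrative only; real
    code would use an S3-compatible SDK's presigned PUT for R2.
    """
    now_s = int(now if now is not None else time.time())
    expires = now_s + expires_in
    # bind method, object location, and expiry into the signed message
    message = f"PUT:{bucket}/{key}:{expires}".encode()
    signature = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "signature": signature})
    return f"https://{bucket}.example-r2.dev/{key}?{query}"
```

Because the expiry is part of the signed message, a client cannot extend the URL's lifetime by editing the query string; any tampering invalidates the signature check on the storage side.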
Analytics Pipeline
Yum API
    |
    | XADD to Redis Streams
    v
Lakehouse Writer (1 replica)
    |
    | Buffers events (time/count thresholds)
    | Converts to PyArrow tables
    | Writes as Parquet files
    v
Cloudflare R2 (analytics bucket)
    |
    | DuckDB httpfs reads Parquet directly
    v
Query API (FastAPI + DuckDB)       Metabase (DuckDB driver)
    |                                      |
    v                                      v
Smackz-Admin (embedded charts)     Self-service dashboards
The analytics pipeline is append-only: events flow from Yum through Redis Streams to the Writer, which batches them into Parquet files on R2. Both the Query API and Metabase read these Parquet files via DuckDB's httpfs extension, with no intermediate database.
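The Writer's batching policy (flush on time or count thresholds) can be sketched as a small buffer. `EventBuffer` and its threshold defaults are hypothetical; the real service would hand each flushed batch to PyArrow for Parquet conversion before uploading to R2:

```python
import time

class EventBuffer:
    """Buffer stream events and flush on count or age thresholds.

    Sketch of the batching policy only; the threshold defaults are made
    up, and the real Writer converts each flushed batch to a PyArrow
    table and writes it as a Parquet file.
    """

    def __init__(self, flush_fn, max_events: int = 500,
                 max_age_s: float = 60.0, clock=time.monotonic):
        self.flush_fn = flush_fn      # receives the list of buffered events
        self.max_events = max_events
        self.max_age_s = max_age_s
        self.clock = clock
        self.events = []
        self.first_ts = None          # arrival time of the oldest buffered event

    def add(self, event: dict) -> None:
        if self.first_ts is None:
            self.first_ts = self.clock()
        self.events.append(event)
        too_many = len(self.events) >= self.max_events
        too_old = self.clock() - self.first_ts >= self.max_age_s
        if too_many or too_old:
            self.flush()

    def flush(self) -> None:
        if self.events:
            self.flush_fn(self.events)
            self.events = []
            self.first_ts = None
```

Batching this way trades a bounded delay (the source of the 60-120s dashboard lag) for far fewer, larger Parquet files, which is what makes direct DuckDB scans over R2 practical.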
Key Files
- yum/api/lib/ -- Redis client and stream utilities
- smackz-lakehouse/writer/ -- Event consumer and Parquet writer
- smackz-lakehouse/query/ -- FastAPI analytics endpoints
- yum/api/services/imageService.ts -- Image upload orchestration