BentPixels
Compass Creator — YouTube Revenue Forecasting SaaS
Built a full-stack SaaS platform for YouTube revenue forecasting using Next.js 15, TypeScript, PostgreSQL, Drizzle ORM, Redis, and BullMQ — featuring a 7-step async ML forecasting pipeline, two-layer Redis caching with timestamp-based invalidation, scenario-aware SQL analytics, and a modular NextAuth system with 3 OAuth providers and admin impersonation.
Project Overview
Compass Creator is a SaaS revenue forecasting platform where YouTube creators connect their channel via OAuth, go through a qualification flow, and receive 36-month revenue projections across optimistic, likely, and pessimistic scenarios — helping them make data-driven decisions about content strategy and monetization.
Architected a modular NextAuth.js setup with 3 separate OAuth providers: standard Google (creators), google-admin (admins with email allowlist), and a YouTube provider that requests yt-analytics.readonly, yt-analytics-monetary.readonly, and youtube.readonly scopes. The signIn callback routes auth logic per provider — YouTube checks channel existence via API, admin validates against DB allowlist, and the JWT callback syncs YouTube channels to DB on initial OAuth only.
Built a 7-step BullMQ forecasting pipeline (SUBMIT → POLL → DOWNLOAD → ARCHIVE → IMPORT → SEND_EMAIL → UPLOAD_S3) backed by two separate Redis queues: a forecasting queue (30s exponential backoff, 5 retries) and a licensing queue (60s backoff, concurrency of 5). The POLL step retries up to 100 times at 2-minute intervals (a ~3.3-hour maximum window) against an external Python ML API, with a 60-second fetch timeout per attempt.
Designed a two-layer Redis caching architecture: ChannelMetadataCache (10-minute TTL, batch mget support, per-user invalidation) wraps channel status lookups, and ProjectionCache (no TTL; entries live until invalidated) stores per-channel analytics and timeline results. The projection cache is invalidated by comparing a stored lastProjectionRan timestamp against the current DB value on every read — a mismatch deletes the stale entry and forces a fresh fetch.
Wrote a single optimized PostgreSQL query using Drizzle ORM raw SQL with CASE statements to serve all 3 projection scenarios (optimistic, likely, pessimistic) in one round-trip. The query merges actual (historical) and projected rows using DATE_TRUNC grouping, handles transition months where both actuals and projections overlap, and computes 12/24/36-month cumulative and average metrics server-side before caching.
Implemented an admin impersonation system using a server-side global Map (impersonation store) keyed by one-time tokens. When an admin triggers impersonation, a token is created with a userId, expiry, and admin metadata. The target user's JWT is minted via a special NextAuth sign-in flow and tagged with impersonatedBy and isImpersonating flags in both the JWT and session — keeping the impersonated creator's actual data completely untouched.
Built a multi-channel account system where creators can connect multiple YouTube channels. AuthContext fetches all channels using ChannelMetadataCache.getForUser() (batch DB + per-channel Redis caching), stores the selected channel in localStorage, and scopes all downstream API calls and cache keys to the active channelId.
Integrated HMAC-SHA1 signed API calls to an external BentPixels licensing service for channel monetization verification. The licensing worker (BullMQ, concurrency of 5) processes pending licensingMetaJobs, sends the stored account JSON payload to the licensing endpoint with a date-scoped signature, and handles failures via BullMQ's native retry mechanism.
Key Features
3-Provider OAuth Architecture (NextAuth.js)
Three separate Google OAuth providers registered in NextAuth: 'google' (standard creators), 'google-admin' (email-allowlist admins), and 'youtube' (requests yt-analytics.readonly + yt-analytics-monetary.readonly + youtube.readonly scopes). The signIn callback branches per provider ID — YouTube verifies channel existence via API token before allowing sign-in, admin checks the email against the admins DB table. JWT callback syncs YouTube channels to DB only on initial OAuth (account object present), skips on session refresh.
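The per-provider routing in the signIn callback can be sketched as a pure function; the names below and the injected gate checks (YouTube channel lookup, admin allowlist query) are illustrative assumptions, not the actual NextAuth config:

```typescript
// Hypothetical sketch of the per-provider signIn gate. The real checks
// (YouTube Data API call, admins-table lookup) are injected as functions.
type Provider = "google" | "google-admin" | "youtube";

interface SignInGates {
  channelExists: (accessToken: string) => boolean; // YouTube API check
  isAllowlistedAdmin: (email: string) => boolean;  // admins DB table lookup
}

function canSignIn(
  provider: Provider,
  user: { email: string; accessToken?: string },
  gates: SignInGates,
): boolean {
  switch (provider) {
    case "youtube":
      // Creators must own at least one channel before sign-in is allowed.
      return !!user.accessToken && gates.channelExists(user.accessToken);
    case "google-admin":
      // Admins are gated by a DB-backed email allowlist.
      return gates.isAllowlistedAdmin(user.email);
    case "google":
      // Standard creators sign in unconditionally.
      return true;
  }
}
```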
7-Step BullMQ Forecasting Pipeline
BullMQ worker processes forecasting jobs through 7 defined steps: SUBMIT (POST to external Python ML API), POLL (up to 100 retries × 2 min ≈ 3.3 hrs max, 60s fetch timeout per attempt), DOWNLOAD (stream CSV result), ARCHIVE (copy current projections to history with version number), IMPORT (bulk-insert new rows), SEND_EMAIL (AWS SES), UPLOAD_S3. The forecasting queue uses 30s exponential backoff with 5 retry attempts. Per-step errors are written to lastError and lastErrorAt fields, visible in the admin UI.
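The step sequence and stop-at-failing-step behavior can be sketched as below; the handler signatures and job shape are assumptions (the real worker is async and Redis-backed; this is synchronous for brevity):

```typescript
// Illustrative sketch of the 7-step pipeline: run each step in order, and on
// a throw, record which step failed so the admin UI can show where it stuck.
const STEPS = [
  "SUBMIT", "POLL", "DOWNLOAD", "ARCHIVE", "IMPORT", "SEND_EMAIL", "UPLOAD_S3",
] as const;
type Step = (typeof STEPS)[number];

interface ForecastJob {
  currentStep: Step;
  status: "running" | "completed" | "failed";
  lastError?: string;
}

function runPipeline(
  job: ForecastJob,
  handlers: Record<Step, () => void>,
): ForecastJob {
  for (const step of STEPS) {
    job.currentStep = step; // persisted so progress survives restarts
    try {
      handlers[step]();
    } catch (err) {
      job.status = "failed";
      job.lastError = `${step}: ${(err as Error).message}`;
      return job;
    }
  }
  job.status = "completed";
  return job;
}
```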
Two-Layer Redis Caching with Timestamp-Based Invalidation
ChannelMetadataCache: 10-minute TTL cache for channel status (isApproved, projectionsLanded, lastProjectionRan). Supports batch mget for multi-channel users and per-user invalidation. ProjectionCache: per-channel cache for analytics and timeline data with no TTL; entries live until invalidated. On every read, the cached lastProjectionRan is compared to the value from ChannelMetadataCache — a mismatch deletes the stale entry immediately and returns null to trigger a fresh DB query + re-cache.
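The read-path invalidation can be sketched with an in-memory Map standing in for Redis; key shapes and field names follow the description but are assumptions:

```typescript
// Minimal sketch of timestamp-based invalidation. A Map stands in for Redis.
interface ProjectionEntry {
  lastProjectionRan: string; // ISO timestamp embedded in the cached blob
  data: unknown;
}

const projectionCache = new Map<string, ProjectionEntry>();

// currentLastRan comes from ChannelMetadataCache (10-min TTL) or the DB.
function getProjection(channelId: string, currentLastRan: string): unknown | null {
  const entry = projectionCache.get(channelId);
  if (!entry) return null;
  if (entry.lastProjectionRan !== currentLastRan) {
    // A newer forecast landed: drop the stale entry and force a fresh fetch.
    projectionCache.delete(channelId);
    return null;
  }
  return entry.data;
}

function setProjection(channelId: string, lastRan: string, data: unknown): void {
  projectionCache.set(channelId, { lastProjectionRan: lastRan, data });
}
```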
Scenario-Aware Single SQL Query (Drizzle ORM)
One PostgreSQL query using DATE_TRUNC('month') grouping with SQL CASE statements selects the correct projection columns at query time based on a scenario parameter ('optimistic', 'likely', 'pessimistic'). Actual (historical) and projected rows are merged in a Map by month key server-side. Transition months where both actuals and projections exist are summed correctly. 12/24/36-month cumulative revenue, average monthly revenue, average monthly views, and weighted RPM are all computed in the route handler before the result is cached.
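The server-side merge of actuals and projections can be sketched as follows; row and field names are illustrative, not the actual schema:

```typescript
// Sketch of the route handler's Map-based merge. Transition months where
// both an actual and a projected row exist are summed into one entry.
interface Row {
  month: string;                 // e.g. "2025-06" (from DATE_TRUNC('month'))
  kind: "actual" | "projected";
  views: number;
  revenue: number;
}

function mergeByMonth(rows: Row[]): Map<string, { views: number; revenue: number }> {
  const out = new Map<string, { views: number; revenue: number }>();
  for (const r of rows) {
    const cur = out.get(r.month) ?? { views: 0, revenue: 0 };
    cur.views += r.views;
    cur.revenue += r.revenue;
    out.set(r.month, cur);
  }
  return out;
}

// Cumulative revenue over the first n months, sorted by month key
// (the 12/24/36-month metrics are computed this way before caching).
function cumulativeRevenue(
  merged: Map<string, { views: number; revenue: number }>,
  n: number,
): number {
  return [...merged.keys()].sort().slice(0, n)
    .reduce((sum, k) => sum + merged.get(k)!.revenue, 0);
}
```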
Admin Impersonation via One-Time Token + Global Map Store
Admin triggers impersonation → server creates a one-time token in a Node.js global Map (impersonationStore) containing target userId, adminId, expiry, and admin metadata. A parallel exitImpersonationStore holds the admin's return token. The NextAuth JWT callback reads the token, mints a full JWT for the target user, and sets isImpersonating: true and impersonatedBy: adminEmail. JWT callback skips all processing (YouTube sync, DB lookups) for impersonated sessions.
Multi-Channel Account Management with Batch Caching
AuthContext calls ChannelMetadataCache.getForUser() on mount — fetches all channels for the user from DB in one query, caches each individually in Redis (10-min TTL), and returns the full list. Batch lookups use Redis mget, so any number of channels resolves in a single round-trip. The selected channel is stored in localStorage and read back on hydration. All projection API calls, cache keys, and BullMQ jobs are scoped to the active channelId.
Channel Qualification Flow
After YouTube OAuth, creators complete a step-by-step verification: actively uploading (yes/no), long-form content (yes/no), and channel age (yes/no). Answers are written to the verificationResponses table, with a stoppedAtQuestion field recorded if they exit early. A qualified boolean gates dashboard access, and admins can view all responses per channel in the admin UI. Channel status moves from pending to qualified or disqualified.
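The qualification gate can be sketched as below; the question order and field names mirror the description, but the helper itself and its disqualification rule are assumptions:

```typescript
// Hypothetical sketch of the qualification evaluator. A missing answer means
// the creator exited early at that question; a "no" disqualifies outright.
const QUESTIONS = ["activelyUploading", "longFormContent", "channelAge"] as const;
type Question = (typeof QUESTIONS)[number];

interface VerificationResult {
  qualified: boolean;
  stoppedAtQuestion: Question | null; // set when the creator exits early
}

function evaluate(answers: Partial<Record<Question, boolean>>): VerificationResult {
  for (const q of QUESTIONS) {
    const a = answers[q];
    if (a === undefined) return { qualified: false, stoppedAtQuestion: q };
    if (a === false) return { qualified: false, stoppedAtQuestion: null };
  }
  return { qualified: true, stoppedAtQuestion: null };
}
```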
HMAC-SHA1 Signed Licensing API Integration
Licensing service calls BentPixels external API with HMAC-SHA1 request signing. Signature is computed as HMAC(secret, 'POST\n{YYYY-MM-DD}') and encoded as base64, sent as Authorization: BP {token}:{signature}. Three endpoints: channel_meta (send full OAuth account JSON for monetization validation), channel_revenue (fetch revenue by channelId), compass/verify_auth. Licensing worker (BullMQ, 5 concurrency, 60s backoff) processes licensingMetaJobs table entries.
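The date-scoped signature described above can be sketched with Node's built-in crypto module; the string-to-sign and header shape follow the text, while the function and parameter names are assumptions:

```typescript
import { createHmac } from "node:crypto";

// Sketch of the HMAC-SHA1 request signature: HMAC(secret, "POST\n{YYYY-MM-DD}"),
// base64-encoded, sent as "Authorization: BP {token}:{signature}".
function signLicensingRequest(secret: string, token: string, date: Date) {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD (UTC)
  const stringToSign = `POST\n${day}`;
  const signature = createHmac("sha1", secret)
    .update(stringToSign)
    .digest("base64");
  return { authorization: `BP ${token}:${signature}`, signature };
}
```

Scoping the signature to the calendar day means a captured signature is only replayable until the next UTC midnight.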
Challenges & Solutions
Managing a 7-Step Async Pipeline with External API Polling (3.3-Hour Window)
The external Python forecasting service processes channels asynchronously with no guaranteed completion time. The pipeline had to reliably poll for up to 3.3 hours across server restarts, handle partial failures at any step, and surface progress to admins without blocking the BullMQ queue.
Solution
Each step is a separate job state persisted in Redis. The POLL step retries up to 100 times at 2-minute intervals using BullMQ's delay mechanism, with a 60-second AbortController timeout per HTTP attempt. The forecastingJobs table tracks currentStep, status, attempts, and errorMessage so the admin UI can show exactly where a job is stuck. Failed steps write to youtubeChannels.lastError and lastErrorAt. The 30s exponential backoff on the queue retries transient network errors without blocking other jobs.
Projection Cache Invalidation Without TTL or Polling
Projection data for a channel can stay valid for weeks or months between ML forecasting runs, making short TTLs wasteful. But the cache must be invalidated the instant a new projection is generated; no delay is acceptable.
Solution
ProjectionCache stores each entry with the channel's lastProjectionRan timestamp embedded in the JSON blob. On every GET, the handler first calls ChannelMetadataCache.get() (10-min TTL, fast) to fetch the current lastProjectionRan from Redis or DB. The two timestamps are compared as ISO strings — any mismatch triggers redisConnection.del(key) and returns null, forcing a fresh DB query. The forecasting worker's IMPORT step calls ProjectionCache.clear(channelId) and ChannelMetadataCache.invalidate(channelId) after writing new projections, so the next request sees fresh data immediately.
Scenario-Aware Analytics from a Single DB Query
The projections table stores 3 sets of view/revenue columns per row (projectedViewsOptimistic, projectedViewsLikely, projectedViewsPessimistic, etc.). Returning analytics for a user-selected scenario while merging historical actuals and projections in a single round-trip was non-trivial.
Solution
Used Drizzle ORM's sql<> template to embed a runtime scenario parameter directly into SQL CASE statements. The query uses DATE_TRUNC('month') grouping and SUM aggregation with COALESCE fallbacks for rows missing revenue columns. After the query, the route handler merges 'Actual' and projected rows into a Map<monthKey, {views, revenue}>, sums transition months, then computes 12/24/36-month metrics before caching under a scenario-scoped key.
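The scenario-to-column selection can be sketched as a small CASE-expression builder; the column names echo the description, but the helper and the exact SQL shape are assumptions, not the actual Drizzle query:

```typescript
// Illustrative scenario-to-column mapping and the CASE expression a merged
// SELECT might use: actuals for historical rows, the scenario column otherwise.
type Scenario = "optimistic" | "likely" | "pessimistic";

const COLUMNS: Record<Scenario, { views: string; revenue: string }> = {
  optimistic:  { views: "projected_views_optimistic",  revenue: "projected_revenue_optimistic" },
  likely:      { views: "projected_views_likely",      revenue: "projected_revenue_likely" },
  pessimistic: { views: "projected_views_pessimistic", revenue: "projected_revenue_pessimistic" },
};

function scenarioRevenueCase(scenario: Scenario): string {
  const col = COLUMNS[scenario];
  return (
    `CASE WHEN is_actual THEN COALESCE(actual_revenue, 0) ` +
    `ELSE COALESCE(${col.revenue}, 0) END`
  );
}
```

In the real query this expression sits inside a SUM over a DATE_TRUNC('month') grouping, so one round-trip serves whichever scenario the user selected.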
Three-Provider OAuth with Per-Provider Business Logic in a Single NextAuth Config
Three distinct Google OAuth flows — regular users, admins, and YouTube-scoped creators — each required different validation, JWT fields, and DB operations, all routed through a single NextAuth callbacks chain.
Solution
Registered three GoogleProvider instances with distinct IDs ('google', 'google-admin', 'youtube'). The signIn callback switches on account.provider to apply the correct gate (YouTube: API channel check, admin: DB allowlist). The JWT callback switches on account.provider again: YouTube provider triggers syncYouTubeChannels() only on initial OAuth; admin provider fetches admin record and sets adminId, adminAccess, loginAccess, flow: 'admin'; regular users set flow: 'user' with DB role lookup.
Admin Impersonation Without Touching User Data or Re-Authenticating
Admins needed to step into a creator's exact session — same projections, same channel state, same access gates — without modifying any user records or forcing a real OAuth flow for the target user.
Solution
Built a two-store system using Node.js global Maps: impersonationStore (admin → user token, 5-min TTL) and exitImpersonationStore (user → admin return token). Admin hits an API route that mints a one-time token, stores user metadata in the Map, and redirects to /api/auth/signin with the token as a query param. NextAuth's JWT callback detects the token, reads from impersonationStore (deletes it — one-time use), and builds a full JWT for the target user with isImpersonating: true. All subsequent JWT refreshes short-circuit via early return when either impersonation flag is set.
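The one-time token store can be sketched as below; the field names follow the description, while the TTL constant and helper names are assumptions:

```typescript
// Sketch of the one-time impersonation token store: a plain Map keyed by
// token, with a 5-minute expiry and delete-on-first-read semantics.
interface ImpersonationEntry {
  userId: string;
  adminId: string;
  adminEmail: string;
  expiresAt: number; // epoch ms
}

const impersonationStore = new Map<string, ImpersonationEntry>();
const TOKEN_TTL_MS = 5 * 60 * 1000;

function issueToken(token: string, userId: string, adminId: string, adminEmail: string): void {
  impersonationStore.set(token, {
    userId, adminId, adminEmail,
    expiresAt: Date.now() + TOKEN_TTL_MS,
  });
}

// One-time use: the entry is deleted on first read, whether or not it expired.
function consumeToken(token: string): ImpersonationEntry | null {
  const entry = impersonationStore.get(token);
  impersonationStore.delete(token);
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry;
}
```

A process-global Map is simple but single-instance; a multi-node deployment would need Redis or another shared store for these tokens.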
Batch Redis Caching for Multi-Channel Users with Consistent Invalidation
Users can own multiple YouTube channels. Loading all channel metadata on every page load with individual DB queries would be slow. Invalidating stale entries across all channels after projection updates added complexity.
Solution
ChannelMetadataCache.getForUser() runs a single DB query for all channels belonging to a userId, then batch-sets each entry in Redis with setex (10-min TTL). ChannelMetadataCache.getBatch() uses Redis mget to fetch multiple channel keys in one round-trip and only falls back to DB for cache misses, then backfills them. ChannelMetadataCache.invalidateForUser() fetches all channelIds for a user from DB and calls redisConnection.del() with a variadic key list — one atomic call regardless of channel count.
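The getBatch read path can be sketched with in-memory Maps standing in for Redis and Postgres; the type and parameter names are illustrative:

```typescript
// In-memory sketch of the batch lookup: one mget-style bulk read, a single
// DB fallback for the misses, then a cache backfill for next time.
type Meta = { isApproved: boolean; lastProjectionRan: string };

function getBatch(
  channelIds: string[],
  redis: Map<string, Meta>, // stand-in for Redis mget/setex
  db: Map<string, Meta>,    // stand-in for the youtubeChannels table
): Map<string, Meta> {
  const result = new Map<string, Meta>();
  const misses: string[] = [];
  // One "round-trip": read every key at once and note the misses.
  for (const id of channelIds) {
    const hit = redis.get(id);
    if (hit) result.set(id, hit);
    else misses.push(id);
  }
  // Single DB query for all misses, then backfill the cache.
  for (const id of misses) {
    const row = db.get(id);
    if (row) {
      result.set(id, row);
      redis.set(id, row); // setex with 10-min TTL in the real code
    }
  }
  return result;
}
```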
Project Gallery
Dashboard — Revenue forecast overview (placeholder)
Analytics — 36-month projection scenarios (placeholder)
Channel management & qualification flow (placeholder)
Admin panel — pipeline monitoring & impersonation (placeholder)
Project Details
Timeline
Aug 2025 - Mar 2026
Company
BentPixels
My Role
Full-Stack Engineer
Tech Stack
Links
Visit Live Site