
Case Study · GovTech · AI

GovOpps AI Recon

Federal contract intelligence that replaced $18,000/year enterprise software for $15/month — built in 6 sprints.

Next.js 15 TypeScript Supabase Claude AI SAM.gov API Cloudflare Pages Row Level Security
View Live App →
Status Live Beta
Timeline 6 Sprints
Monthly Cost $15/mo vs $1,500/mo
Data Source SAM.gov (live)

Small GovCon firms were locked out.

Government contracting is a $700 billion/year market. But the tools to compete in it — Deltek's GovWin IQ, Bloomberg Government, and similar platforms — cost $1,000–$1,500/month. That pricing excludes every small business, veteran-owned firm, and solo contractor who could otherwise compete for federal work.

SAM.gov publishes every federal contract opportunity for free. The data is public. The problem isn't access — it's signal-to-noise. A contractor browsing SAM.gov directly faces thousands of irrelevant listings with no scoring, no filtering by capability, and no AI-assisted analysis.

The opportunity: build the intelligence layer on top of public data. Use Claude AI for scoring and summarization. Charge $15/month instead of $1,500.

How the system is built.

Five layers, each with a single responsibility. The data flows from SAM.gov through AI scoring to a user-specific filtered dashboard, with every user's data isolated at the database level via Row Level Security.

SAM.gov API → Opportunity Ingestion
Pulls live federal contract opportunities from the public SAM.gov API on demand. Deduplicates by NOTICE_ID before writing to Supabase.
Next.js 15 App Router (Cloudflare Pages)
Static-export Next.js app served from Cloudflare's global edge network, giving near-zero latency worldwide. Client-side Supabase calls handle data fetching — no server runtime required.
Supabase (PostgreSQL + Auth + RLS)
Stores opportunities, scores, and user profiles. Row Level Security (RLS) policies enforce strict user data isolation — no user can access another user's saved opportunities or scores. Supabase Auth handles JWTs.
Claude AI (Anthropic) — Scoring + Summarization
Each opportunity is scored across 6 dimensions: relevance, win probability, competition level, value alignment, past performance match, and set-aside eligibility. Claude also generates a plain-English opportunity summary and identifies teaming partners.
User Dashboard — Filtered, Scored, Actionable
Users see only opportunities relevant to their company profile. Sortable by AI score, value, deadline, and agency. One-click to view full scoring breakdown or save an opportunity for follow-up.
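The ingestion layer's "deduplicate by NOTICE_ID" step can be sketched as a small pure function. This is an illustrative sketch, not the app's actual code — the `SamNotice` interface and its field names are assumptions standing in for whatever subset of SAM.gov fields the pipeline keeps.

```typescript
// Illustrative subset of a SAM.gov opportunity record; field names
// here are assumptions, not the app's actual schema.
interface SamNotice {
  noticeId: string;   // SAM.gov NOTICE_ID, the dedup key
  title: string;
  postedDate: string; // ISO date string, so string comparison sorts correctly
}

// Keep one record per NOTICE_ID, preferring the most recently posted copy.
function dedupeByNoticeId(notices: SamNotice[]): SamNotice[] {
  const byId = new Map<string, SamNotice>();
  for (const n of notices) {
    const existing = byId.get(n.noticeId);
    if (!existing || n.postedDate > existing.postedDate) {
      byId.set(n.noticeId, n);
    }
  }
  return [...byId.values()];
}
```

Running the deduped batch through a single upsert keeps the Supabase write path idempotent: re-fetching the same window from SAM.gov never produces duplicate rows.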
Why these technologies?
Next.js + Cloudflare Pages: Static export with edge delivery means sub-100ms load times globally with zero infrastructure to manage.

Supabase: Postgres with built-in auth and RLS policies. The security model required per-user data isolation — RLS enforces this at the database level, not the application layer. You can't accidentally leak data with a bad query.
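A per-user isolation policy of the kind described above looks roughly like this in Postgres. The table and column names are illustrative, not the app's actual schema — the point is that `auth.uid()` (Supabase's helper for the JWT's user ID) is compared against an owner column inside the database itself.

```sql
-- Illustrative RLS setup; "saved_opportunities" and "user_id"
-- are assumed names, not the app's real schema.
alter table saved_opportunities enable row level security;

-- Reads: a user can select only rows they own.
create policy "read own saved opportunities"
  on saved_opportunities for select
  using (auth.uid() = user_id);

-- Writes: a user can insert only rows stamped with their own ID.
create policy "insert own saved opportunities"
  on saved_opportunities for insert
  with check (auth.uid() = user_id);
```

With policies like these, even a buggy client-side query silently returns zero rows for other users' data rather than leaking it.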

Claude AI: Anthropic's models handle nuanced government contracting language better than GPT alternatives in early testing. The 6-dimension scoring rubric required a model that could reason across multiple criteria simultaneously.
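The six per-dimension scores roll up into the composite score mentioned later in the sprint log. A minimal sketch of that rollup, assuming 0–100 dimension scores and a weighted average — the weights shown are illustrative, not the app's actual tuning:

```typescript
// The six rubric dimensions, scored 0-100 each by Claude.
interface DimensionScores {
  relevance: number;
  winProbability: number;
  competitionLevel: number;
  valueAlignment: number;
  pastPerformanceMatch: number;
  setAsideEligibility: number;
}

// Illustrative weights (must sum to 1.0); the real tuning is a product decision.
const WEIGHTS: Record<keyof DimensionScores, number> = {
  relevance: 0.25,
  winProbability: 0.2,
  competitionLevel: 0.15,
  valueAlignment: 0.15,
  pastPerformanceMatch: 0.15,
  setAsideEligibility: 0.1,
};

// Composite = weighted average of the six dimensions, one decimal place.
function compositeScore(s: DimensionScores): number {
  let total = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof DimensionScores)[]) {
    total += s[key] * WEIGHTS[key];
  }
  return Math.round(total * 10) / 10;
}
```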

Six sprints. One live product.

Each sprint had a clear deliverable and a hard cutoff. The goal was always working software — not perfect software.

Sprint 1
Foundation
Next.js 15 scaffold, Supabase schema design, initial SAM.gov API integration. Established the data model: opportunities, scores, user_profiles.
SAM.gov's API rate limits required a caching layer earlier than planned.
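The caching layer forced by those rate limits can be as simple as an in-memory map with a TTL. This is a minimal sketch, not the production implementation; the optional `now` parameter is an assumption added here to keep the example deterministic.

```typescript
// Minimal TTL cache of the kind SAM.gov's rate limits force early.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  // Returns the cached value, or undefined on a miss or expiry.
  get(key: string, now: number = Date.now()): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

A cache miss is the only path that hits the SAM.gov API, so the TTL directly caps request volume per search window.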
Sprint 2
Opportunity Pipeline
Built the ingestion pipeline — fetch, deduplicate by NOTICE_ID, normalize fields, store. First real data in the database.
Deduplication logic had to account for SAM.gov modifying existing notices, not just adding new ones.
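That lesson reduces to a three-way upsert decision: a matching NOTICE_ID isn't automatically a skip, because SAM.gov may have modified the notice since it was stored. A sketch of the rule, with illustrative field names:

```typescript
// Minimal record shape for the upsert decision; names are illustrative.
interface StoredNotice {
  noticeId: string;
  lastModified: string; // ISO timestamp reported by SAM.gov
}

type UpsertAction = "insert" | "update" | "skip";

// Compare modification timestamps instead of skipping on an ID match.
function decideUpsert(
  incoming: StoredNotice,
  existing: StoredNotice | undefined
): UpsertAction {
  if (!existing) return "insert"; // brand-new notice
  if (incoming.lastModified > existing.lastModified) return "update"; // modified notice
  return "skip"; // unchanged duplicate
}
```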
Sprint 3
AI Scoring
Claude integration, 6-dimension scoring rubric, prompt engineering for consistent JSON output. Each opportunity gets a composite score and per-dimension breakdown.
Getting Claude to return consistent JSON required explicit schema definition in the system prompt.
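Even with an explicit schema in the system prompt, model output still deserves defensive parsing — for instance, the JSON sometimes arrives wrapped in a markdown fence. A hedged sketch of that parse-and-validate step (the snake_case key names are assumptions, not the app's actual response schema):

```typescript
// Expected score keys per the 6-dimension rubric; names illustrative.
const REQUIRED_KEYS = [
  "relevance", "win_probability", "competition_level",
  "value_alignment", "past_performance_match", "set_aside_eligibility",
];

// Strip an optional ```json fence, parse, and verify every dimension
// is present and numeric before trusting the scores.
function parseScores(raw: string): Record<string, number> {
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "")
    .trim();
  const parsed = JSON.parse(cleaned) as Record<string, unknown>;
  for (const key of REQUIRED_KEYS) {
    if (typeof parsed[key] !== "number") {
      throw new Error(`Missing or non-numeric score: ${key}`);
    }
  }
  return parsed as Record<string, number>;
}
```

A failed parse can then trigger a single retry with the validation error fed back to the model, rather than storing a half-scored opportunity.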
Sprint 4
Auth + Dashboard
Supabase Auth, RLS policies, user profile setup, filtered opportunity views. Users now see only what's relevant to their company capabilities.
Writing correct RLS policies required understanding Supabase's auth.uid() function and testing with multiple accounts.
Sprint 5
Security Hardening
Discovered IDOR (Insecure Direct Object Reference) vulnerabilities — users could access other users' data by modifying URL parameters. Patched at the API route level and reinforced RLS policies.
IDOR is easy to miss in fast-moving builds. Systematic testing of every API endpoint with a second test account is mandatory before any public beta.
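That second-account sweep produces a list of (endpoint, status) pairs, and flagging leaks from it is mechanical. A hypothetical helper — the probing itself (authenticating as the test account and requesting another user's resources) happens elsewhere; this just classifies the results:

```typescript
// One probe: the second account requested another user's resource
// at `endpoint` and observed HTTP `status`. Names are illustrative.
interface ProbeResult {
  endpoint: string;
  status: number;
}

// 403 (denied) and 404 (hidden) both count as properly isolated;
// any 2xx means the other user's resource actually leaked.
function findIdorLeaks(results: ProbeResult[]): string[] {
  return results
    .filter((r) => r.status >= 200 && r.status < 300)
    .map((r) => r.endpoint);
}
```

Wiring this into a pre-deploy script turns "test every endpoint with a second account" from a checklist item into a gate that fails the build.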
Sprint 6
Beta Launch
Cloudflare Pages deployment, onboarded first beta user (Zach, Hodge Group Investments & Contracting LLC). Collecting feedback on SAM.gov refresh cadence and AI scoring accuracy.
Real users find edge cases in minutes that weeks of solo testing miss.

What it replaced. What it costs.

| Metric | Enterprise (GovWin IQ) | GovOpps AI Recon |
| --- | --- | --- |
| Monthly cost | $1,500/mo | $15/mo |
| Annual cost | $18,000/yr | $180/yr |
| AI-powered scoring | None | 6-dimension Claude AI |
| Time to find relevant opps | 2–3 hours manual | < 5 minutes |
| Data source | Proprietary database | SAM.gov (official, live) |
| Beta users | — | 1 (Hodge Group Investments) |

Sprint 7 and beyond.

The platform is live and being used. Here's what's on the roadmap:
