Supabase in Production: What I Wish I Knew Before 185 Tables
I've been running Supabase in production for over a year. 185 tables. 69 API endpoints. Stripe webhooks. Real-time subscriptions. Discord bot data. Trading analytics.
This isn't a "getting started" tutorial. This is an honest review after living with it at scale.
What's Genuinely Incredible
Row-Level Security Changes Everything
RLS is Supabase's killer feature, and most people underuse it. Instead of writing authorization checks in every API endpoint, the database enforces access:
```sql
-- This one line prevents every "user A sees user B's data" bug
CREATE POLICY "users_own_data" ON strategies
  FOR ALL USING (auth.uid() = user_id);
```
I have RLS on every table. In a year of development, I've had exactly zero data leak bugs. At my previous companies, data access bugs were a monthly occurrence.
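For a sense of the per-table pattern, here's a minimal sketch. The table and column names (`alerts`, `user_id`) are hypothetical; the key point is that RLS is opt-in and that read and write policies use different clauses:

```sql
-- RLS is opt-in: without this line, policies on the table are ignored
ALTER TABLE alerts ENABLE ROW LEVEL SECURITY;

-- Reads: users only see their own rows (USING filters visible rows)
CREATE POLICY "alerts_select_own" ON alerts
  FOR SELECT USING (auth.uid() = user_id);

-- Writes: users can only insert rows they own (WITH CHECK validates new rows)
CREATE POLICY "alerts_insert_own" ON alerts
  FOR INSERT WITH CHECK (auth.uid() = user_id);
```

Splitting policies per operation keeps the rules auditable: you can see at a glance which commands a role is allowed to run.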
The Dashboard Saves Hours
The Supabase dashboard lets me browse data, run SQL, check RLS policies, and manage auth users — without any custom admin tooling. For a solo engineer, this saves easily 10 hours per month of admin panel development.
Real-Time Works (With Caveats)
Real-time subscriptions for price alerts and dashboard updates work well. The caveats are below.
What's Frustrating
Migration Tooling Is Rough
Supabase's migration system (`supabase db diff`) generates migrations by diffing your local and remote schemas. In theory, this is clever. In practice:
- It sometimes generates incorrect migration order (tries to add a foreign key before the referenced table exists)
- It doesn't handle RLS policy changes cleanly
- Complex migrations (adding a column with a default value computed from existing data) need to be written by hand anyway
I now write all migrations by hand and just use the Supabase CLI for running them.
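As a sketch of the kind of migration I mean by "written by hand" (a hypothetical `slug` column on `strategies`, backfilled from an assumed existing `name` column), the safe pattern is three separate steps rather than one `ALTER`:

```sql
-- 1. Add the column nullable first, so the ALTER is fast and takes
--    only a brief lock on a large table
ALTER TABLE strategies ADD COLUMN slug text;

-- 2. Backfill from existing data (batch this on very large tables)
UPDATE strategies
SET slug = lower(regexp_replace(name, '[^a-zA-Z0-9]+', '-', 'g'))
WHERE slug IS NULL;

-- 3. Only then tighten the constraint
ALTER TABLE strategies ALTER COLUMN slug SET NOT NULL;
```

A schema diff tool can't generate step 2 because the backfill logic lives in your head, not in the schema.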
Connection Pooling Is Confusing
Supabase provides two connection strings: one direct and one through a connection pooler (PgBouncer). The pooler is required for serverless environments (Vercel, Lambda) because serverless functions open new connections on every invocation.
The confusing part: some PostgreSQL features don't work through the pooler. Prepared statements, LISTEN/NOTIFY, and long-running transactions all need the direct connection. I've wasted days debugging "this works locally but fails in production" issues that turned out to be pooler vs direct connection mismatches.
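LISTEN/NOTIFY is the clearest example of why. A transaction-mode pooler (which is how Supabase's pooler is typically configured) can route consecutive statements from your client to different backend connections, so session-level state doesn't survive. A minimal illustration, with a hypothetical channel name:

```sql
-- On the DIRECT connection this works: the session that runs LISTEN
-- is the same session that later receives the NOTIFY payload.
LISTEN price_alerts;
NOTIFY price_alerts, 'BTC crossed threshold';

-- Through a transaction-mode pooler, the LISTEN may register on one
-- backend while your next statement runs on another, so notifications
-- silently never reach your client. No error, just nothing arrives.
```

The failure mode is the worst kind: no error message, just missing behavior — which is exactly how the "works locally, fails in production" debugging sessions start.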
Real-Time Has a 10-Second Delay (Sometimes)
Real-time subscriptions have near-instant delivery for small payloads. But for larger changes or during high load, I've seen delays of 5-10 seconds. For a trading alert system where milliseconds matter, this was a dealbreaker.
I ended up using a separate WebSocket service for time-critical alerts and Supabase real-time only for non-urgent updates (dashboard refresh, notification counts).
Free Tier Limits Hit Fast
The free tier is generous for prototyping (500MB database, 1GB storage, 50K auth users). But once you hit the limits, the jump to Pro ($25/month) is the only option — there's no intermediate tier.
What Almost Made Me Switch
At table #120, I hit a Supabase Studio bug where the dashboard would time out trying to load my schema. The table list took 15 seconds to render, and schema diffs would crash the browser tab.
I seriously considered migrating to raw PostgreSQL on RDS. What kept me on Supabase:
- RLS would need to be reimplemented manually
- Auth would need to be replaced (probably with Auth.js)
- The migration effort for 120+ tables would take weeks
The Studio performance has improved since then, but it was a wake-up call about depending on a managed service's UI.
My Honest Recommendation
| Use Case | Recommendation |
|---|---|
| MVP / prototype | Absolutely use Supabase |
| SaaS with < 50 tables | Great fit |
| SaaS with 100+ tables | Works, but expect Studio performance issues |
| High-frequency trading data | Use Supabase for CRUD, separate system for real-time |
| Enterprise with compliance needs | Consider self-hosted Supabase or raw PostgreSQL |
The Bottom Line
Supabase is the best developer experience I've used for PostgreSQL-backed applications. RLS alone justifies the choice. But it's not magic — at scale, you'll hit edges that require workarounds.
Would I choose it again for Nexural? Yes. Would I also plan for the workarounds from day one? Absolutely yes.