Postgres is Too Good (And Why That's Actually a Problem)

by Shayan

We need to talk about something that's been bothering me for months. I've been watching indie hackers and startup founders frantically cobbling together tech stacks with Redis for caching, RabbitMQ for queues, Elasticsearch for search, and MongoDB for... reasons?

I'm guilty of this too. When I started building UserJot (my feedback and roadmap tool), my first instinct was to plan out a "proper" architecture with separate services for everything. Then I stopped and asked myself: what if I just used Postgres for everything?

Turns out, there's this elephant in the room that nobody wants to acknowledge:

Postgres can do literally all of this.

And it does it better than you think.

The "Postgres Can't Scale" Myth That's Costing You Money

Let me guess - you've been told that Postgres is "just a relational database" and you need specialized tools for specialized jobs. That's what I thought too, until I discovered that Instagram scaled to 14 million users on Postgres and Notion built their entire product on it.

But here's the kicker: they're not using Postgres like it's 2005.

Queue Systems

Stop paying for Redis and RabbitMQ. Between LISTEN/NOTIFY and FOR UPDATE SKIP LOCKED, Postgres handles job queues well enough that most apps never need a dedicated broker:

-- Simple job queue in pure Postgres
CREATE TABLE job_queue (
    id SERIAL PRIMARY KEY,
    job_type VARCHAR(50),
    payload JSONB,
    status VARCHAR(20) DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT NOW(),
    processed_at TIMESTAMP
);

-- ACID-compliant job processing
BEGIN;
UPDATE job_queue
SET status = 'processing', processed_at = NOW()
WHERE id = (
    SELECT id FROM job_queue
    WHERE status = 'pending'
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING *;
COMMIT;

This gives you safe concurrent workers with zero additional infrastructure: SKIP LOCKED guarantees no two workers can ever claim the same job. Try doing that with Redis without pulling your hair out.

In UserJot, I use this exact pattern for processing feedback submissions, sending notifications, and updating roadmap items. One transaction, guaranteed consistency, no message broker complexity.
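If you don't want workers polling on a timer, you can pair the queue table with LISTEN/NOTIFY so they only wake up when work arrives. Here's a minimal sketch - the trigger and the job_new channel name are my own additions, not part of the table above:

-- Wake up workers whenever a job is inserted
CREATE OR REPLACE FUNCTION notify_new_job()
RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('job_new', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER job_queue_notify
AFTER INSERT ON job_queue
FOR EACH ROW EXECUTE FUNCTION notify_new_job();

-- In each worker's connection:
LISTEN job_new;

Workers still run the SKIP LOCKED query to actually claim a job; the notification is just a cheap wake-up call.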

Key-Value Storage

Managed Redis typically starts around $20/month. Postgres JSONB is included in the database you already run and covers most of what you need:

-- Your Redis alternative
CREATE TABLE kv_store (
    key VARCHAR(255) PRIMARY KEY,
    value JSONB,
    expires_at TIMESTAMP
);

-- GIN index for blazing fast JSON queries
CREATE INDEX idx_kv_value ON kv_store USING GIN (value);

-- Containment query on nested JSON, answered straight from the GIN index
SELECT * FROM kv_store
WHERE value @> '{"user_id": 12345}';

The @> containment operator is Postgres's secret weapon: backed by that GIN index, nested JSON lookups become index scans, and your data stays transactionally consistent.
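Two things Redis gives you that the table above doesn't show are SET-style upserts and key expiry. Both are easy to add in plain SQL; the session key below is just an example, and the cleanup statement needs to run on a schedule (pg_cron, or a loop in your app):

-- SET key -> value with a one-hour TTL (upsert semantics)
INSERT INTO kv_store (key, value, expires_at)
VALUES ('session:abc123', '{"user_id": 12345}', NOW() + INTERVAL '1 hour')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value,
    expires_at = EXCLUDED.expires_at;

-- GET, ignoring expired keys
SELECT value FROM kv_store
WHERE key = 'session:abc123'
  AND (expires_at IS NULL OR expires_at > NOW());

-- Periodic cleanup of expired keys
DELETE FROM kv_store WHERE expires_at < NOW();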

Full-Text Search

Elasticsearch clusters are expensive and complex. Postgres has built-in full-text search that's shockingly good:

-- Add search to any table
ALTER TABLE posts ADD COLUMN search_vector tsvector;

-- GIN index so searches stay fast as the table grows
CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);

-- Keep the search vector up to date on every write
CREATE OR REPLACE FUNCTION update_search_vector()
RETURNS trigger AS $$
BEGIN
    NEW.search_vector := to_tsvector('english',
        COALESCE(NEW.title, '') || ' ' ||
        COALESCE(NEW.content, '')
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER posts_search_vector_update
BEFORE INSERT OR UPDATE ON posts
FOR EACH ROW EXECUTE FUNCTION update_search_vector();

-- Ranked search results
SELECT title, ts_rank(search_vector, query) AS rank
FROM posts, to_tsquery('english', 'startup & postgres') query
WHERE search_vector @@ query
ORDER BY rank DESC;

This handles stemming, stop words, and relevance ranking out of the box. (If you also want fuzzy matching for typos, the pg_trgm extension ships with Postgres.)

For UserJot's feedback search, this lets users find feature requests instantly across titles, descriptions, and comments. No Elasticsearch cluster needed - just pure Postgres doing what it does best.
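For queries typed by users, websearch_to_tsquery is friendlier than to_tsquery because it accepts free-form input instead of operator syntax. A quick sketch against a hypothetical feedback table (not UserJot's actual schema):

-- Free-form user input, ranked results
SELECT id, title, ts_rank(search_vector, q) AS rank
FROM feedback,
     websearch_to_tsquery('english', 'dark mode roadmap') q
WHERE search_vector @@ q
ORDER BY rank DESC
LIMIT 20;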

Real-Time Features

Forget standing up a separate pub/sub service. Postgres LISTEN/NOTIFY delivers real-time change events to your application with zero additional services:

-- Notify clients of changes
CREATE OR REPLACE FUNCTION notify_changes()
RETURNS trigger AS $$
BEGIN
    -- NOTIFY payloads are capped at about 8 KB, so keep them small
    PERFORM pg_notify('table_changes',
        json_build_object(
            'table', TG_TABLE_NAME,
            'action', TG_OP,
            'data', row_to_json(NEW)
        )::text
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach it to any table you want to watch
CREATE TRIGGER posts_notify_changes
AFTER INSERT OR UPDATE ON posts
FOR EACH ROW EXECUTE FUNCTION notify_changes();

Your application LISTENs for these notifications and pushes updates to users over whatever transport it already has (WebSockets, SSE). No Redis pub/sub needed.
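You can watch it work from two psql sessions; the channel name matches the trigger above:

-- Session 1: subscribe
LISTEN table_changes;

-- Session 2: any write to posts now produces a notification in session 1
INSERT INTO posts (title, content) VALUES ('Hello', 'Postgres does realtime too');

-- You can also publish ad-hoc messages without any trigger
SELECT pg_notify('table_changes', '{"ping": true}');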

The Hidden Costs of "Specialized" Tools

Let's do some math. A typical "modern" stack costs:

  • Redis: $20/month
  • Message queue: $25/month
  • Search service: $50/month
  • Monitoring for 3 services: $30/month
  • Total: $125/month

But that's just the hosting costs. The real pain comes from:

Operational Overhead:

  • Three different services to monitor, update, and debug
  • Different scaling patterns and failure modes
  • Multiple configurations to maintain
  • Separate backup and disaster recovery procedures
  • Different security considerations for each service

Development Complexity:

  • Different client libraries and connection patterns
  • Coordinating deployments across multiple services
  • Inconsistent data between systems
  • Complex testing scenarios
  • Different performance tuning approaches

If you self-host, add server management, security patches, and the inevitable 3 AM debugging sessions when Redis decides to consume all your memory.

Postgres handles all of this with a single service that you're already managing.

The Single Database That Scales

Here's something most people don't realize: a single Postgres instance can handle massive scale. We're talking millions of transactions per day, terabytes of data, and, with a connection pooler like PgBouncer in front, thousands of concurrent clients.

Real-world examples:

  • Robinhood: Billions of financial transactions
  • GitLab: Entire DevOps platform on Postgres

The magic is in Postgres's architecture. It's designed to scale vertically incredibly well, and when you finally need horizontal scaling, you have proven options like:

  • Read replicas for query scaling
  • Partitioning for large tables (see the sketch at the end of this section)
  • Connection pooling for concurrency
  • Logical replication for distributed setups

Most businesses never hit these limits. You're probably fine with a single instance until you're processing millions of users or complex analytical workloads.

Compare this to managing separate services that all scale differently - your Redis might max out memory while your message queue struggles with throughput and your search service needs different hardware entirely.
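Of those options, partitioning is the one people most often assume requires moving to another system. In Postgres it's a couple of DDL statements. A minimal sketch with a hypothetical append-only events table, not anything from UserJot:

-- Range-partition a large table by month
CREATE TABLE events (
    id BIGSERIAL,
    created_at TIMESTAMPTZ NOT NULL,
    payload JSONB
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Queries that filter on created_at only scan the relevant partitions
SELECT count(*) FROM events
WHERE created_at >= '2024-02-01' AND created_at < '2024-03-01';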

Stop Overengineering From Day One

The biggest trap in modern development is playing architecture astronaut. We design systems for problems we don't have, with traffic we've never seen, for scale we may never reach.

The overengineering cycle:

  1. "We might need to scale someday"
  2. Add Redis, queues, microservices, multiple databases
  3. Spend months debugging integration issues
  4. Launch to 47 users
  5. Pay $200/month for infrastructure that could run on a $5 VPS

Meanwhile, your competitors ship faster because they're not managing a distributed system before they need one.

The better approach:

  • Start simple with Postgres
  • Monitor actual bottlenecks, not imaginary ones
  • Scale specific components when you hit real limits
  • Add complexity only when it solves actual problems

Your users don't care about your architecture. They care about whether your product works and solves their problems.

When You Actually Need Specialized Tools

Don't get me wrong - specialized tools have their place. But you probably don't need them until:

  • You're processing 100,000+ jobs per minute
  • You need sub-millisecond cache responses
  • You're doing complex analytics on terabytes of data
  • You have millions of concurrent users
  • You need global data distribution with specific consistency requirements

If you're reading this on dev.to, you're probably not there yet.

Why This Actually Matters

Here's what blew my mind: Postgres can be your primary database, cache, queue, search engine, AND real-time system simultaneously. All while maintaining ACID transactions across everything.

-- One transaction, multiple operations
BEGIN;
    INSERT INTO users (email) VALUES ('user@example.com');
    INSERT INTO job_queue (job_type, payload)
    VALUES ('send_welcome_email', '{"user_id": 123}');
    UPDATE kv_store SET value = '{"last_signup": "2024-01-15"}'
    WHERE key = 'stats';
COMMIT;

Try doing that across Redis, RabbitMQ, and Elasticsearch without crying.

The Boring Technology That Wins

Postgres isn't sexy. It doesn't have a flashy website or viral TikTok presence. But it's been quietly powering the internet for decades while other databases come and go.

There's something to be said for choosing boring, reliable technology that just works.

Action Steps for Your Next Project

  1. Start with Postgres only - Resist the urge to add other databases
  2. Use JSONB for flexibility - You get schema-less benefits with SQL power
  3. Implement queues in Postgres - Save money and complexity
  4. Add specialized tools only when you hit real limits - Not imaginary ones

My Real-World Experience

Building UserJot has been the perfect test case for this philosophy. It's a feedback and roadmap tool that needs:

  • Real-time updates when feedback is submitted
  • Full-text search across thousands of feature requests
  • Background jobs for sending notifications
  • Caching for frequently accessed roadmaps
  • Key-value storage for user preferences and settings

My entire backend is a single Postgres database. No Redis, no Elasticsearch, no message queues. Just Postgres handling everything from user authentication to real-time WebSocket notifications.

The result? I ship features faster, have fewer moving parts to debug, and my infrastructure costs are minimal. When users submit feedback, search for features, or get real-time updates on roadmap changes - it's all Postgres under the hood.

This isn't theoretical anymore. It's working in production with real users and real data.

The Uncomfortable Conclusion

Postgres might be too good for its own good. It's so capable that it makes most other databases seem unnecessary for 90% of applications. The industry has convinced us we need specialized tools for everything, but maybe we're just making things harder than they need to be.

Your startup doesn't need to be a distributed systems showcase. It needs to solve real problems for real people. Postgres lets you focus on that instead of babysitting infrastructure.

So next time someone suggests adding Redis "for performance" or MongoDB "for flexibility," ask them: "Have you actually tried doing this in Postgres first?"

You might be surprised by the answer. I know I was when I built UserJot entirely on Postgres - and it's been running smoothly ever since.


What's your experience with Postgres? Have you successfully used it beyond traditional relational data? I'm always curious to hear how other developers are using it. If you want to see a real example of Postgres doing everything, check out UserJot - it's my proof that you really can build a full SaaS with just one database.

Top comments (27)


Günter Zöchbauer

When NoSQL became popular, many thought NoSQL is superior to SQL. While there are scenarios where this can be true, it's only for the most trivial use cases or at extreme scale where performance has much higher priority than functionality. The latter is probably why many thought NoSQL is superior in general, but only very few ever need that kind of distributed performance some NoSQL databases can offer. SQL databases like Postgres offer a shitload of extremely useful functionality and can scale, just somewhat less than specialized NoSQL databases at the cost of extremely limited functionality.

Nathan Tarbert

been loving this energy tbh, made me rethink all those times i jumped straight to the fancy stacks instead of just trusting one thing that actually works. you think sticking to boring tech too long ever backfires or nah?

Shaun Jansen Van Nieuwenhuizen

Very well written! I use Postgres in enterprise environments and MySQL in private environments (considering moving).

I do believe that Postgres events will still require a pub/sub layer when scaling horizontally.

Redis works great for caching. I have not tried the Postgres solution you posted, so I will definitely give it a try; that being said, I think more libs support Redis out of the box.

Ha Aang

Finally something good on dev.to, even if it has an OP product plug.

Dotallio

Agree with this so much. I run everything from stateful AI flows to realtime dashboards with just Postgres - curious if anyone actually hit a limit that Postgres couldn't handle?

Navin Yadav

Let me give it a try.

Nicolus

I completely agree with your sentiment, but I think you're overestimating the cost and complexity of Redis or Valkey: You can install it for free on a $5 VPS in an hour (or 5 minutes if you just use the default configuration) and it works pretty much as expected out of the box. It gives you both a key-value cache and a pubsub queue system.

So doing everything in the DB is absolutely viable and probably the best approach for a MVP, but I've never felt like Redis was a burden.

nadeem zia

nice work

Stephen Potter

Love this. Can’t agree more.
