
It's a reasonable question. I think it's too early for us to provide a feature-by-feature breakdown of PlanetScale Postgres vs. Aurora/RDS. Our stated mission on day 1 (today) is to be the fastest and most reliable Postgres provider out there. The benchmarks we've provided are the clearest, data-driven evidence we can point to right now in support of that.

More features will come later on that I think will set us apart even more from RDS, Aurora, and other providers, but it's too early to talk about those.

Beyond features, there are other reasons you might choose us. For example, we've built a reputation for excellent reliability/uptime and exceptionally good support. These are harder to back up with hard data, but our customer testimonials are a good testament to them.


In addition to the point about performance Sam made, PlanetScale's Vitess (MySQL) offers out-of-the-box horizontal scalability, which means we can maintain extremely good performance as your dataset and QPS grow to a massive scale: https://planetscale.com/case-studies/cash-app. We will be bringing the same capability to Postgres later on.

Our uptime and reliability are also higher than what you might find elsewhere. It's not uncommon for companies paying lots of money elsewhere to migrate to PlanetScale for that reason.

We're a serious database for serious businesses. If a business can't afford to spend $39/mo to try PlanetScale, they may be happier operating elsewhere until their business grows to a point where they are running into scaling and performance limits and can afford (or badly need, depending on the severity of those limits) to try us out.


> instead of emphasizing how they just couldn't scale past their previous limits.

We are not saying that our customers don't have the knowledge or expertise to do what we do. Many of our customers, including the ones mentioned above, have exceptionally high levels of expertise and talent.

Even so, it is not a contradiction to say that we allowed them to scale beyond their previous limits. In some cases those limits were that their previous DBaaS providers simply lacked the ability to scale horizontally or provide blazing fast reads and writes the way we do out of the box. In other cases, we offer a degree of reliability and uptime that exceeds what customers' previous DBaaS could provide. Those are just a couple of the limits customers have run into before choosing PlanetScale.

Expertise and know-how, and actually doing the thing, are different. Many of our customers who are technically capable of doing what we do would simply prefer to focus their knowledge and expertise on building their core product, and let the database experts (that's us) do the databasing.


There are a lot of differences between Aurora/RDS and PlanetScale I could talk about, but I'll point to just one for now: PlanetScale offers Metal databases, which means blazing fast NVMe drives attached directly to the host where Postgres is running. This gives you faster reads and writes than what either Aurora or RDS can achieve with their network-attached block storage. Check out our benchmarks: https://planetscale.com/blog/benchmarking-postgres

Also, the architecture of Aurora is very different from PlanetScale's:

* AWS Aurora uses storage-level replication, rather than traditional Postgres replication. This architecture has the benefit that a change made on an Aurora primary is visible very quickly on the read replicas.

* PlanetScale is a "shared nothing" architecture using what I would call traditional methods of data replication, where the primary and the replicas have independent copies of the data. This means that replication lag is a possibility customers must consider, whereas Aurora customers can mostly ignore it.

* If you set up 3 AWS RDS Postgres instances in separate availability zones and set up replication between them, that would be roughly similar to PlanetScale's architecture.
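To make the replication-lag point concrete, here's a toy Python sketch (my illustration, not PlanetScale code) of shared-nothing replication: the primary and replica hold independent copies, the replica replays the primary's log asynchronously, and a read from the replica can be stale until it catches up.

```python
# Toy model of shared-nothing replication: primary and replica keep
# independent copies; the replica applies the primary's change log
# asynchronously, so it can lag behind.
class Primary:
    def __init__(self):
        self.data = {}
        self.log = []  # ordered stream of (key, value) changes

    def write(self, key, value):
        self.data[key] = value
        self.log.append((key, value))

class Replica:
    def __init__(self, primary):
        self.primary = primary
        self.data = {}
        self.applied = 0  # log position replayed so far

    def apply_some(self, n):
        # Replays up to n log entries; later entries remain invisible (lag).
        for key, value in self.primary.log[self.applied:self.applied + n]:
            self.data[key] = value
        self.applied = min(self.applied + n, len(self.primary.log))

    def lag(self):
        return len(self.primary.log) - self.applied

primary = Primary()
replica = Replica(primary)
primary.write("balance", 100)
primary.write("balance", 150)
replica.apply_some(1)            # replica has replayed only the first change
stale = replica.data["balance"]  # 100, not 150: replication lag in action
print(stale, replica.lag())      # -> 100 1
```

In Aurora's storage-level model, by contrast, there is effectively one shared copy of the data, so this kind of stale read is mostly not something customers think about.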


This is an incredibly good example of what I wanted to know.


People can disagree with the claims of course, but I don't think they are baseless.

On the Postgres side: https://planetscale.com/blog/benchmarking-postgres

On the Vitess side, I would point to our customers, who, on individual databases, have achieved pretty high QPS (millions), on large datasets (100s of TiBs), at a latency that is lower than what other DBaaS providers can offer: https://planetscale.com/case-studies/cash-app


Hi n_u,

When we say ephemeral, we mean that if the host compute dies in a permanent way (which happens from time to time), the data on the NVMe drives attached to that host is not recoverable by us. AWS/GCP might have recovery mechanisms internally, but we don't have access to those APIs.

When we say "semi-synchronous replication" we mean it in the sense of MySQL semi-synchronous replication: https://dev.mysql.com/doc/refman/8.4/en/replication-semisync.... To be honest I'm not exactly sure where the "semi" comes from, but here are two possible reasons I can think of:

1. We actually only require that 1 of the 2 replicas send an acknowledgement to the primary that it has durably stored the transaction in its relay log before the primary, in turn, sends an acknowledgement back to the client application.

2. The transaction is visible (can be read) on both the primary and the replica _before_ the primary sends the client application an acknowledgement that the transaction was committed.
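As a rough sketch of point 1, here's a toy Python model (my illustration, not MySQL or PlanetScale internals) of a semi-synchronous commit: the primary fans the transaction out to two replicas but only needs the first durable-store acknowledgement before it can acknowledge the commit to the client.

```python
# Toy model of semi-synchronous commit: the primary sends the transaction
# to both replicas but waits for only ONE durable-store ack before
# acknowledging the commit to the client.
class Replica:
    def __init__(self, name, online=True):
        self.name = name
        self.online = online
        self.relay_log = []

    def receive(self, txn):
        # Returns True once the transaction is durably in the relay log.
        if not self.online:
            return False
        self.relay_log.append(txn)
        return True

def semi_sync_commit(txn, replicas, required_acks=1):
    acks = 0
    for replica in replicas:
        if replica.receive(txn):
            acks += 1
        if acks >= required_acks:
            return True   # safe to ack the client: primary + replica copies
    return False          # too few acks: commit cannot be acknowledged

replicas = [Replica("r1", online=False), Replica("r2")]
ok = semi_sync_commit("INSERT ...", replicas)
print(ok)  # -> True: r2 acked even though r1 was down
```

The upshot is that one replica being slow or down doesn't block commits, yet every acknowledged transaction exists durably on at least two hosts.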


Thanks! I see. It's maybe a term they came up with to place it between async and fully synchronous replication.


Hi n_u, PlanetScale engineer here. I'm going to address just the point about durability via replication. I can't speak to what you've seen with other distributed systems, but at PlanetScale we don't do replication instead of writing to disk; we do replication in addition to writing to disk. Best of both worlds.


Good point, Max. I glossed over the "rather than" bit. We do, as you say, write to disks all over the place.

Even writing to one disk, though, isn't good enough. So we write to three and wait until two have acknowledged before we acknowledge that write to the client.

