HN loves Postgres because it's a Swiss Army knife, but MySQL is just much faster in most cases. We have a weird use case with data scientists and millions of tables in one db: Postgres is terrible at it, MySQL has no issues at all. I guess it's all fine if you are doing an ERP or CRM; there I would pick Postgres. But for anything else, I would start with MySQL and see how far it goes without wasting your life on devops (MySQL master-slave and master-master replication are built in and trivial to set up; it just works).
(Sure, I hate Oracle too, but they actually did a pretty good job of both Java and MySQL; I guess the '90s caricatures of Gates's Microsoft as the Borg and Ellison's Oracle as Satan are less of a thing now than they were.)
Heavy concurrency, for one. MySQL uses a thread per connection; Postgres forks a whole process per connection. You can (and should) help the latter with a connection pooler, but if you then throw one in front of the MySQL instance too, it wins again.
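To make the pooler point concrete, here's a minimal sketch of what a connection pooler does (not any real pooler's API; `connect` is a stand-in for a driver's connect call): it caps the number of live backend connections and hands them out to many callers, so the server never pays the per-connection process/thread cost for every client.

```python
import queue

class Pool:
    """Toy connection pool: pre-opens `size` connections and recycles them."""

    def __init__(self, connect, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(connect())  # open a fixed number of backend connections up front

    def acquire(self):
        # Blocks when all connections are checked out, instead of
        # opening a new backend connection (fork/thread) per caller.
        return self._q.get()

    def release(self, conn):
        self._q.put(conn)

# `object()` stands in for a real database connection here.
pool = Pool(connect=lambda: object(), size=4)
conns = [pool.acquire() for _ in range(4)]   # 4 callers served by 4 backends
for c in conns:
    pool.release(c)                          # connections are reused, not re-opened
```

Real poolers (PgBouncer for Postgres, ProxySQL for MySQL) add transaction-level multiplexing on top, but the core idea is the same: thousands of clients, a few dozen backend connections.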
Also, if you’ve designed your schema and queries around MySQL (InnoDB, specifically), you can get way faster queries. It’s a clustered-index RDBMS: rows are stored in primary-key order, so range queries require far fewer reads thanks to linear read-ahead, but only if your PK is k-sortable. UUIDv4 will absolutely tank the performance. It will on Postgres too, but under different circumstances, and for different reasons.
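A quick sketch of why k-sortable matters (the key formats are illustrative, not any library's API): UUIDv4 keys arrive in random lexicographic order, so each insert lands on an arbitrary B-tree leaf page, while a time-prefixed key (UUIDv7-style layout) always appends at the right edge of the index, and a range scan over "recent rows" reads contiguous pages.

```python
import random

def uuid4_style_key(rng: random.Random) -> str:
    # 128 random bits, like UUIDv4: no correlation with insert order.
    return f"{rng.getrandbits(128):032x}"

def k_sortable_key(rng: random.Random, ms: int) -> str:
    # 48-bit millisecond timestamp prefix + random tail (UUIDv7-style):
    # lexicographic order follows insert order.
    return f"{ms:012x}{rng.getrandbits(80):020x}"

rng = random.Random(42)  # seeded so the demo is reproducible
random_keys = [uuid4_style_key(rng) for _ in range(1000)]
ordered_keys = [k_sortable_key(rng, ms) for ms in range(1000)]

# k-sortable keys are already in index order as inserted...
print(ordered_keys == sorted(ordered_keys))  # True
# ...UUIDv4-style keys are not, so every insert dirties a random page.
print(random_keys == sorted(random_keys))    # False
```

On a clustered-index engine that "random page per insert" pattern also splits pages and wrecks buffer-pool locality, which is the tanking referred to above.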
Benchmarking anything is fraught with peril. There are so many variables, and the benchmark itself can be quietly selected to favor one or the other. I’d encourage you to create identically sized instances, with your actual schema, and realistic test data.