Not natively, as in there is no PARTITION BY HASH (<list-of-columns>). What limitations do you face when trying to roll your own hash partitioning using check constraints (in 9.6)?
I wanted to partition a table by the foreign key, as the table receives a few hundred rows per foreign key per hour (it is a time-series DB).
So I figured partitioning the table by foreign key would group all the data together in a way that allows faster access (the typical access pattern would be select * where foreign_key = x). However, since the number of keys in the foreign table is unbounded and can be quite large, I wanted to hash the data into a limited number of partitions, with
mod(foreign_key, number_of_partitions)
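A minimal sketch of how this roll-your-own approach could look in 9.6, using table inheritance plus check constraints (the table and column names here are hypothetical, not from the original post):

```sql
-- Parent table; the children hold the actual rows.
CREATE TABLE readings (
    foreign_key integer      NOT NULL,
    recorded_at timestamptz  NOT NULL,
    value       double precision
);

-- One child per hash bucket, e.g. 4 partitions.
CREATE TABLE readings_p0 (CHECK (mod(foreign_key, 4) = 0)) INHERITS (readings);
CREATE TABLE readings_p1 (CHECK (mod(foreign_key, 4) = 1)) INHERITS (readings);
CREATE TABLE readings_p2 (CHECK (mod(foreign_key, 4) = 2)) INHERITS (readings);
CREATE TABLE readings_p3 (CHECK (mod(foreign_key, 4) = 3)) INHERITS (readings);
```

Inserts would still need to be routed to the right child, either by the application or by a trigger on the parent table.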
If I understood correctly, you can define a check constraint on a calculated value like that, but the planner can't use it to prune partitions for a plain equality query.
Yes, it is not possible to optimize (i.e., prune useless partitions for quicker access) the query select * from tab where key = x. You'd need actual hash partitioning for that: the mechanism Postgres uses to perform partition pruning (constraint exclusion) does not work for the hashing case.
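To illustrate the pruning gap, and a common workaround: assuming a parent table readings whose children carry constraints of the form CHECK (mod(foreign_key, 4) = n) (hypothetical names, not from the thread), constraint exclusion can't derive the mod value from a plain equality predicate, but it can match the check constraint if the query restates the hash expression with a constant:

```sql
-- Scans every child partition: the planner does not infer
-- mod(foreign_key, 4) = 2 from foreign_key = 42.
SELECT * FROM readings WHERE foreign_key = 42;

-- Restating the hash expression lets constraint exclusion
-- constant-fold mod(42, 4), match the check constraint on one
-- child, and prune the rest.
SELECT * FROM readings
WHERE foreign_key = 42
  AND mod(foreign_key, 4) = mod(42, 4);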
Greenplum is a fork of the Postgres codebase; Citus is not: it's an extension that leverages community Postgres's extensibility APIs. This point seems to be highlighted in their post.
Not quite sure whether something like multixact data corruption is symptomatic of the kind of underlying issues in database system implementations that Jepsen is after. I may be wrong though.
It wasn't. It's also rather difficult to trigger. I have no doubt that aphyr could do stuff like that if he so chose, but I think pounding on getting a reproducible test case for Postgres's subtle serializable bug would be more his speed.