
Really happy to see TigerBeetle live up to its claims as verified by aphyr - because it's good to see that when you take the right approach, you get the right results.

Question about how people end up using TigerBeetle. There are presumably a lot of external systems and other databases around a TigerBeetle install for everything that isn't an Account or Transfer. What's the typical pattern for those less reliable systems to reconcile with TigerBeetle, especially to recover from consistency issues between the two?



Joran from TigerBeetle here! Thanks! Really happy to see the report published too.

The typical pattern when integrating TigerBeetle is to differentiate between the control plane (Postgres for general-purpose workloads, or OLGP) and the data plane (TigerBeetle for transaction processing, or OLTP).

All your users (names, addresses, passwords, etc.) and products (descriptions, prices, etc.) then go into OLGP as your "filing cabinet".

And then all the Black Friday transactions these users (or entities) make, to move products from inventory accounts to shopping cart accounts, and from there to checkout and delivery accounts: all these go into OLTP as your "bank vault". TigerBeetle lets you store up to 3 user data identifiers per account or transfer, to link events (between entities) back to the OLGP database that describes those entities.
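To make the linkage concrete, here is a minimal sketch, not TigerBeetle's real client API: a transfer record carrying three user data identifiers (TigerBeetle's docs name them `user_data_128`, `user_data_64`, and `user_data_32`) that point back at rows in the OLGP database. The account IDs and field meanings below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    # Double-entry fields: move value between two accounts.
    id: int
    debit_account_id: int    # e.g. an inventory account
    credit_account_id: int   # e.g. a shopping cart account
    amount: int
    # User data identifiers, used here as foreign keys into OLGP.
    user_data_128: int  # e.g. order UUID stored in Postgres
    user_data_64: int   # e.g. Postgres user row id
    user_data_32: int   # e.g. a product category code

transfer = Transfer(
    id=1,
    debit_account_id=100,
    credit_account_id=200,
    amount=1,
    user_data_128=0x0123456789ABCDEF0123456789ABCDEF,
    user_data_64=42,
    user_data_32=7,
)

# To enrich a transfer for reporting, join back to OLGP, e.g.:
#   SELECT * FROM users WHERE id = transfer.user_data_64;
print(transfer.user_data_64)  # 42
```

The point is that TigerBeetle itself stores only opaque integers; the OLGP side owns the descriptive data they refer to.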

This architecture [1] gives you a clean "separation of concerns", allowing you to scale and manage the different workloads independently. For example, if you're a bank, it's probably a good idea not to keep all your cash in the filing cabinet with the customer records, but rather to keep the cash in the bank vault, since the information has different performance/compliance/retention characteristics.

This pattern makes sense because users change their name or email address (OLGP) far less frequently than they transact (OLTP).

Finally, to preserve consistency on the write path, you treat TigerBeetle, the OLTP data plane, as your "system of record". When a "move to shopping cart" or "checkout" transaction comes in, you first write any data dependencies to OLGP (and, say, S3 if you have related blob data), and only then commit the transaction by writing to TigerBeetle. On the read path, you query your system of record first, preserving strict serializability.
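The ordering above can be sketched as follows. This is a toy model under stated assumptions, with in-memory dicts standing in for Postgres and TigerBeetle rather than their real clients; the merchant account ID and field names are hypothetical. Because dependencies land in OLGP before the commit, a crash between the two steps leaves at most an unreferenced OLGP row, never a committed transfer with missing context.

```python
olgp_orders: dict[int, dict] = {}   # stands in for the Postgres control plane
oltp_transfers: list[dict] = []     # stands in for the TigerBeetle data plane

def checkout(order_id: int, user_id: int, amount: int) -> None:
    # 1. Write data dependencies to OLGP first (and blob storage, if any).
    olgp_orders[order_id] = {"user_id": user_id, "amount": amount}
    # 2. Only then commit the transfer to the system of record,
    #    linking back to OLGP via a user data identifier.
    oltp_transfers.append({
        "debit_account_id": user_id,
        "credit_account_id": 1,      # hypothetical merchant account
        "amount": amount,
        "user_data_64": order_id,    # foreign key back into olgp_orders
    })

checkout(order_id=7001, user_id=42, amount=99)

# Read path: consult the system of record first, then enrich from OLGP.
committed = oltp_transfers[-1]
print(olgp_orders[committed["user_data_64"]]["amount"])  # 99
```

Recovery is then a reconciliation sweep: any OLGP order with no matching transfer in the system of record is simply incomplete and can be retried or discarded.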

Does that make sense? Let me know if there's anything here we can drill into further!

[1] https://docs.tigerbeetle.com/coding/system-architecture/



