Hacker News | akshayshah's comments

I’d be surprised by this: GitHub pretty famously used Vitess, and I doubt each shard is too big for modern hardware. Based on previous reporting [0], they’re running out of space in the main data center and new management is determined to move to Azure in a hurry. I’d bet that these outages are a combination of a worsening capacity crunch in the old data center and…well, Azure.

[0]: https://thenewstack.io/github-will-prioritize-migrating-to-a...


Here’s a post from 2021 about the migration! [0]

I guess 2021 is a long time ago now. How did that happen…

[0] https://github.blog/engineering/infrastructure/partitioning-...


In broad strokes, I think this is similar to Bitdrift (https://bitdrift.io) - though they’re focused on mobile observability.


And looks similar to Grepr [0].

0. https://www.grepr.ai/


I like Markdown, and generally agree that it strikes a nice balance between correctness and usability...

...but it's delicious that this blog post also demonstrates an ambiguity in Markdown: how to handle intra-word emphasis. In the rendered output, "mark_up_" and "mark_down_" were probably intended to be "mark<em>up</em>" and "mark<em>down</em>", but the underscores were instead rendered literally.

I do appreciate that Markdown's solution to ambiguities like this is dead simple - just inline some HTML.


I think it's delicious how nobody, absolutely nobody, wants _ to mean "emphasis"; they want italics. And yet, despite there being a markdown-to-HTML build step, nobody has ever done what they were supposedly told to do to sidestep the semantic issue: use <span class="italic"> instead of <em>.

If markdown were truly output-language-agnostic, it wouldn't even make sense for it to emit <em>, which is HTML-specific.

I'm going to go to my grave repeating that <em> is just <i> version 2.


Totally fair. At least in part, I blame the choice of <em> and <strong>: it's really not clear what the hierarchy between them is, so I just think of them as the online versions of italic and bold.

<mild> and <strong>, or <em> and <emem> (or <double-em>, or <very-em>) might have been clearer, but at this point we'll never know.

Edit: apparently <i> has been redefined to be "the idiomatic text element" rather than just italic - so perhaps it's a semantically appropriate choice here after all! https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...


<em> and <strong> were introduced as the supposedly semantic counterparts to the supposedly physical <i> and <b>. That never made a lot of sense, and then later <i> and <b> were redefined to be some subtly different semantic elements. Which also never really made sense. In the end, they both still mean italics and bold, unless you go out of your way to give them a different styling.


Ahem. Org mode user here. _ means underline :-)

Emphasis/italics uses /


I actually _did_ want the underscores, but enough people thought it wasn't intentional that I just gave up and changed it to italics. lol?


Alas! Once again, I’ve learned what happens when I assume.


At least per the Redis docs, clusters acknowledge writes before they're replicated: https://redis.io/docs/latest/operate/oss_and_stack/managemen...

The docs explicitly state that clusters do not provide strong consistency and can lose acknowledged data.


No. Redpanda has made a lot of noise about this over the years [0], and Confluent's Jack Vanlightly has responded in a fair bit of detail [1].

[0]: https://www.redpanda.com/blog/why-fsync-is-needed-for-data-s...

[1]: https://jack-vanlightly.com/blog/2023/4/24/why-apache-kafka-...


I think all modern systems, even ScyllaDB, batch commits rather than fsync on every write: you either get throughput or durability; the two can't fully coexist. The only thing Redpanda claims is that you have to do replication before fsync, so your data isn't lost if the writing node dies due to a power failure. If I'm not wrong, this is how Scylla and Cassandra work: even if a node dies before the batched fsync, replication from the memtable happens before the fsync, so the other nodes provide durability, and data loss is no longer a problem in a replicated setup. A single node? Obviously 100% data loss, but that's the trade-off between a high-TPS system and a durable single-node system. It's about how you want to operate.


Similarly, in conventional SQL systems, the same trade-off is handled by fsyncing the WAL.


Rapid is excellent. It also integrates with the standard library's fuzz testing, which is handy to persist a high-priority corpus of inputs that have caused bugs in the past.

The standard library's testing/quick is adequate for small things and doesn't introduce new dependencies, but it's also frozen: many years ago, the Go team decided that PBT is complex enough that it shouldn't live in the stdlib.
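For the stdlib route, here's a minimal testing/quick sketch checking a classic property, that sorting is idempotent; quick.Check generates the random inputs:

```go
package main

import (
	"fmt"
	"sort"
	"testing/quick"
)

// sortIdempotent holds if sorting an already-sorted slice changes nothing.
func sortIdempotent(xs []int) bool {
	once := append([]int(nil), xs...)
	sort.Ints(once)
	twice := append([]int(nil), once...)
	sort.Ints(twice)
	for i := range once {
		if once[i] != twice[i] {
			return false
		}
	}
	return true
}

func main() {
	// quick.Check calls sortIdempotent with many generated slices and
	// reports the first input (if any) for which it returns false.
	if err := quick.Check(sortIdempotent, nil); err != nil {
		fmt.Println("property failed:", err)
		return
	}
	fmt.Println("property held for all generated inputs")
}
```

Note what testing/quick doesn't give you compared to rapid: no shrinking of failing inputs and no corpus persistence.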

Here's a small example of a classic PBT technique used in a very practical Go project: https://github.com/connectrpc/connect-go/blob/cb2e11fb88c9a6...


Sibling comments have already mentioned some common strategies - but if you have half an hour to spare, the property-based testing series on the F# for Fun and Profit blog is well worth your time. The material isn’t really specific to F#.

https://fsharpforfunandprofit.com/series/property-based-test...


Sometimes, sure - but sometimes, passing around a fat wrapper around a DB cursor is worse, and the code would be better off paginating and materializing each page of data in memory. As usual, it depends.
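As a sketch of the materialize-per-page alternative, where fetchPage is a hypothetical stand-in for a real LIMIT/OFFSET (or, better, keyset) query:

```go
package main

import "fmt"

// fetchPage returns one bounded, fully materialized page of rows. The copy
// means the caller owns plain data rather than a handle into the source.
func fetchPage(all []string, offset, limit int) []string {
	if offset >= len(all) {
		return nil
	}
	end := offset + limit
	if end > len(all) {
		end = len(all)
	}
	return append([]string(nil), all[offset:end]...)
}

func main() {
	rows := []string{"a", "b", "c", "d", "e"}
	for off := 0; ; off += 2 {
		page := fetchPage(rows, off, 2)
		if len(page) == 0 {
			break
		}
		fmt.Println(page) // each page is small and independent of the cursor
	}
}
```

The point is that nothing downstream of fetchPage needs to know a database exists, which is exactly what passing a live cursor around forfeits.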


Very cool! Using “object storage for primary durability” seems difficult for any OLTP workload that’s latency-sensitive - there’s a fundamental tradeoff between larger batch sizes to control write costs and smaller batches to reduce latency. This hurts OLTP workloads especially badly because applications often make multiple small writes to serve a single user-facing request. How does EloqKV navigate this tradeoff?

Also, I’d love to see:

- A benchmark that digs into latency, throughput, and cost for a single workload. Most of the benchmarks I saw are throughput-only.

- Some explanation of the “patented 1PC protocol.” Your website [1] suggests that you treat single EBS volumes as high-durability, replicated storage, which seems unusual to me - apart from io2 volumes, EBS is designed for less than 3 nines of durability [2].

[1]: http://www.eloqdata.com/blog/2025/07/15/data-substrate-detai...

[2]: https://aws.amazon.com/ebs/features/


These are great questions. I appreciate you carefully reading through the documents. On the first: our blog has detailed benchmarks for EloqKV, which shares the same architecture (but with a Redis API), and we will soon publish more about the performance characteristics of EloqDoc. Overall, we achieve about the same performance as with a local NVMe SSD, even when we use S3 as the primary storage, and the performance often exceeds the original database implementation (in the case of EloqDoc, the original MongoDB).

As for durability, our key innovation is to split state into three parts: in memory, in the WAL, and in data storage. We use a small EBS volume for the WAL, and storage is in S3. So durability is guaranteed by (Storage AND (WAL OR Mem)): unless Storage (S3) fails, or both the WAL (i.e., EBS lost) AND Mem (i.e., node crash) fail, persistence is guaranteed. You can see the explanation in [1]

[1] https://www.eloqdata.com/blog/2025/07/16/data-substrate-bene...
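To make that condition concrete, here's a tiny truth-table sketch that just encodes the boolean claim above; it is not EloqData's actual protocol:

```go
package main

import "fmt"

// survives encodes the claim: state persists as long as the S3 storage copy
// is intact AND at least one of the WAL (EBS) or in-memory copy survives.
func survives(storage, wal, mem bool) bool {
	return storage && (wal || mem)
}

func main() {
	fmt.Println(survives(true, false, true))  // EBS lost, node alive: survives
	fmt.Println(survives(true, true, false))  // node crashed, WAL intact: survives
	fmt.Println(survives(true, false, false)) // WAL and memory both lost: data loss
}
```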


I'm no expert in corporate finance, but whether or not OpenAI goes bankrupt feels like the wrong question to me (in thinking about this loan). Wouldn't a bank be more concerned with (1) the likelihood that OpenAI can raise another round of financing from which to repay the bank, and (2) the likelihood that OpenAI will have assets worth >10B when/if they do eventually declare bankruptcy?

The bank's risk seems quite a bit lower than the VC's risk.


Also, 5% would be a ridiculously low rate for this sort of corporate financing. You'd expect more like 8-12%, I think?

Plus the post seems to only include 1 year of interest.

Unless we know the terms, I don't think we can necessarily calculate EV from JPMorgan's perspective. I'd say they aren't usually giving money away carelessly, though... They probably have terms that let them get out early if OpenAI's position weakens, etc.


> feels like the wrong question to me

I agree but had different questions. TFA mentions the consideration of whether failure cases are correlated, but of course if OpenAI wins big, there's a good chance this directly or indirectly creates much instability and uncertainty in many other loans/partners. What's the EV on whether that is net-positive considering this is a loan at 5% and not an investment?

On the other side, if OpenAI crashes hard, is it really such a sure thing that Microsoft will be on the hook to pay off their debts? Setting aside whatever the lawyers could argue about in a post-mortem, are they even obligated to keep their current stake? Can they not just divest, sell, or otherwise cut their losses if the writing is on the wall?


JPMorgan Chase might not mind ending up owning much of OpenAI's IP if they default on the loan. Banks have largely been locked out of making equity investments in OpenAI so far so perhaps they see this as the next best alternative?

