Hacker News | jastr's comments

I had this too until I lowered its memory limit. In ~/.duckdbrc: `set max_memory='1GB';`, or even less.


By default, DuckDB tries to take 80% of your memory. I've found that you need to set it to something much smaller in ~/.duckdbrc: `set max_memory='1GB';`


You can be my first user!

https://postgraphs.com/ - it’s an old weekend project of mine that I’d like to finish up soon


Similarly, watch a random walk solving a maze https://stripenight.com/random_walk.html


The best way to get software engineers to solve your issue with X is to phrase your question “Why is X so horrible? It can’t even do Y.”

The engineers will inevitably reply “That’s so simple. You just need to …”

The ecosystem for managing python dependencies has improved a lot: pyenv, virtualenv, poetry.

PATH isn’t innate to Python. Understanding PATH will definitely help with other issues in the future.
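For instance, you can inspect PATH from the Python stdlib itself, which makes it obvious the variable is an OS concept rather than a Python one (a small illustrative sketch, nothing more):

```python
import os
import shutil

# PATH is an OS-level list of directories, separated by ":" on
# Unix-like systems and ";" on Windows. The shell searches these
# directories, in order, when you type a bare command name.
path_dirs = os.environ.get("PATH", "").split(os.pathsep)

# shutil.which walks those same directories, so it shows which
# interpreter a bare `python3` at the prompt would resolve to.
print(shutil.which("python3"))
```

Once you see PATH this way, virtualenv stops being magic: activating an environment just prepends its bin directory to PATH so its interpreter wins the search.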


I hadn't heard of Opera Unite. It's similar and 10+ years older!

Thanks for the kind words - friends have been mostly confused by this project.


Heroku's github integration has been down for 2 weeks


That doesn’t concern me as much as how our GitHub access keys got accessed/leaked, which Heroku has yet to explain.


> there's actually proportionally less failures in Product Hunts busiest period

This is a really interesting post! I think there's a little survivorship bias. As Product Hunt grew 2015-2017, users posted old projects of theirs which were already popular and successful.


Glad you enjoyed the post - I hadn't considered this.


My guess would be that URLs for the categories eliminated after that period (eg. Books and Podcasts) are more likely to remain stable and available, even if the product was a flop.


This is advice that seems reasonable but is actually pretty harmful.

Take a startup with a few users. The senior engineer decides they need pub/sub to ship a new feature. With Kafka, the team has to learn Kafka best practices, choose client libraries, and work through Kafka's quirks. They also need to spin up Kafka instances. They ship in a month.

With postgres, they’ve got an MVP in a day, and shipped within a week.
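A minimal sketch of what that Postgres MVP might look like, assuming the common SELECT ... FOR UPDATE SKIP LOCKED pattern (the table and column names here are made up for illustration, not taken from the article):

```sql
-- Hypothetical schema for a tiny job queue
CREATE TABLE jobs (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

-- Each worker claims one pending job inside a transaction.
-- SKIP LOCKED makes concurrent workers pass over rows that
-- another worker has already locked, so they never collide.
SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;
```

That's the whole queue: one table, one query, and the transactional guarantees you already have.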


I can set up an application to use AWS SQS or GCP PubSub in a day and it will scale without a second thought. I don't think it's productive to compare the worst case of scenario A and the best case of scenario B.


How does any of this not apply equally to PostgreSQL?

Is this some magical database where you don't need to worry about access patterns, best practices, or how it is deployed?


> How does any of this not apply equally to PostgreSQL?

1. Postgres is easier to set up and run (than Kafka).

2. Most shops already have Postgres running (TFA is targeted at these shops).

3. Postgres is easier to adapt to changing access patterns (than Kafka).

----

> Is this some magical ...

Why must your adversary (Postgres) meet some mythical standard when your fighter (Kafka) doesn't meet even basic standards?


Yes, it's that magical database, up to a certain scale.


> With postgres, they’ve got an MVP in a day, and shipped within a week.

And the next week they realize they want reader processes to block until there is work to do. Oops that's not supported. Now you have to code that feature yourself... and soon you're reinventing Kafka.


That's where LISTEN comes in. It's simple to write this loop correctly.


The very source we're talking about describes how to block until there is work to do -- listener.Listen("ci_jobs_status_channel")
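Presumably that call is a client-library wrapper over Postgres's built-in LISTEN/NOTIFY. The raw SQL primitive is just two statements (channel name taken from the comment above):

```sql
-- Worker session: subscribe to a channel; the client driver can
-- then block until a notification arrives on it.
LISTEN ci_jobs_status_channel;

-- Producer session: after committing new work, wake any listeners.
NOTIFY ci_jobs_status_channel;
```

Notifications sent inside a transaction are only delivered on commit, so a worker woken this way can always find the row that triggered it.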


This is a neat idea, but I think the site is a bit too barebones for people to want to trust with their data.

