Hacker News | favflam's comments

Some politician in Japan pushed zoning away from cities up to the prefecture and national level. So locals do not get veto rights over new construction.


It's an archetypal social coordination problem that can't be solved at a local level. If relaxed zoning pushes all new buildings into my neighborhood, because all the others vote against it, then I'm going to end up with 20 stories of balconies hanging above my property but see no benefits, not even indirect ones like lower rents leading to lower inflation and prices. Some developer will simply capture that rent, in both the rent-extraction sense and the real-estate sense.

A smart central planner can act for the shared benefit: they are sensitive to the votes of renters in some other high-density area that also can't solve the problem locally, and so on.


if your neighborhood gets denser you will see the benefits

if you want to live there you can pick from more options

developers capture value, but the buildings are there

obviously the usual problem is that the land value goes up, and thus the rent goes up too (because suddenly the neighborhood becomes more desirable - which again is a sign of benefits for those who already live there)


My state did something similar recently as well: land within a quarter mile of transit has to be zoned for a minimum number of housing units, and parking minimums cannot be enforced within that radius. Some of the municipalities impacted are suing the state.


I wonder if this just means they will eliminate transit or move stations/stops/routes around.


That statement reminds me of Ikiru.

(an Akira Kurosawa movie about a Japanese bureaucrat)


20 years ago there were a lot of peer-to-peer applications. For example, Skype used to bounce calls across peers. Now, all calls get routed through big-brother Microsoft.

NAT and asymmetric bandwidth from American ISPs both killed this business model, and now we are stuck with tech monopolies like Cloudflare. I see this IPv4-only strategy as another monopoly tactic to kill competition.

And in Asia, it is getting more difficult not to get stuffed behind a double NAT (CGNAT), which means you can't even play games without using big-brother rent-seeker services (no port forwarding/UPnP). But at least here you get IPv6 for free and everything just works.
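One quick way to tell whether you're stuffed behind CGNAT: RFC 6598 reserves 100.64.0.0/10 as shared address space for carrier-grade NAT, so if your router's WAN address lands in that block, inbound port forwarding is off the table. A minimal sketch using only the Python standard library (the sample addresses are illustrative):

```python
import ipaddress

# RFC 6598 shared address space, reserved for carrier-grade NAT
CGNAT_BLOCK = ipaddress.ip_network("100.64.0.0/10")

def is_cgnat(addr: str) -> bool:
    """Return True if addr falls inside the CGNAT range."""
    return ipaddress.ip_address(addr) in CGNAT_BLOCK

print(is_cgnat("100.72.1.10"))  # True: inside 100.64.0.0/10, you're behind CGNAT
print(is_cgnat("203.0.113.5"))  # False: an ordinary public address
```

Compare the WAN address your router reports against what a "what is my IP" service sees; if they differ, or the router's WAN address is in this block (or in plain RFC 1918 space), you're double-NATed.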


I want to echo this comment. I am on MAP-E in Asia and it is very difficult to get an exclusive IPv4 address without paying extra money.

And I want to connect to my machines without some stupid VPN or crappy cloud reverse-tunneling service. Not everyone in the world wants to subscribe to some stupid SaaS service just to get functionality that comes by default with IPv6.

I think Silicon Valley is in a thought bubble: for people there, IPv4 is plentiful and cheap. So good for them. However, the more these SaaS services delay IPv6 support, the more I pray to any deity out there that I can move off these services permanently.


btw, is it just me, or is there any justification for anyone, including a developer, to run more than 8GB of RAM in a laptop? I don't see functionality as having changed in the last 15 years.

For me, only Rust compilation necessitates more RAM. But, I assume devs just do RAM heavy dev work on a server over ssh.


There's all the usual "$APPLICATION is a memory hog" complaints, for one.

In the SWE world, dev servers are a luxury that you don't get in most companies, and most people use their laptops as workstations. Depending on your workflow, you might well have a bunch of VMs/containers running.

Even outside of SWE world, people have plenty of use for more than 8GiB of RAM. Large Photoshop documents with loads of layers, a DAW with a bazillion plugins and samples, anything involving 4k video are all workloads that would struggle running on such a small RAM allowance.


This depends on the industry. Around here, working locally on a laptop is a luxury, and most devs are required to treat their laptop like a thin client.

Of course, being developer laptops, they all come with 16 gigs of RAM. In contrast, the remote VMs where we do all of the actual work are limited to 4GiB unless we get manager and IT approval for more.


Interesting. I required all my devs to use local VMs for development. We've saved a fair bit on cloud costs.


> We've saved a fair bit on cloud costs

our company just went with the "server in the basement" approach, with every employee having a user account (no VM or Docker separation, just normal file permissions). Sure, it sounds like the '80s, but it works really well. Remote access is over WireGuard, uptime is similar to or better than the cloud, and sharing the same beefy CPUs gives good utilization. Running jobs that need hundreds of GB of RAM isn't an issue as long as you respect others' needs and don't hog the RAM all day. And in amortized cost per employee it's dirt cheap. I only wish we had more GPUs.


> Interesting. I required all my devs to use local VMs for development.

It doesn’t work when you’re developing on a large database, since it won’t fit. Database (and data warehouse) development has been held back from modern practices just for this reason.


Current job used to let us run containers locally, but they decided to wrap first Docker, and then Podman, with "helper" scripts. These broke regularly and became too much overhead to maintain, so now we are mandated to do local dev but use a dev k8s cluster for any level of testing that is more than unit tests and requires a db.

A real shame, as running local Docker/Podman for Postgres was fine when you just ran the commands.


I find this quite surprising! What benefit does your org accrue by mandating that the db instance used for testing is centralised? Where I am, the tests simply assume that there’s a database available on a certain port. docker-compose.yml makes it easy to spin this up for those so inclined. At that stage it’s immaterial whether it’s running natively, or in docker, or forwarded from somewhere else. Our tests stump up all the data they need and tear down the db afterwards. In contrast, I imagine that a dev k8s cluster requires some management and would be a single point of failure.
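For what it's worth, the docker-compose.yml for this kind of throwaway test database can be a handful of lines. A sketch assuming Postgres, with image tag, credentials, and db name chosen purely for illustration:

```yaml
services:
  test-db:
    image: postgres:16
    environment:
      POSTGRES_USER: test        # throwaway credentials, local tests only
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    ports:
      - "5432:5432"              # tests assume a database on this port
    tmpfs:
      - /var/lib/postgresql/data # keep data in RAM; nothing survives teardown
```

`docker compose up -d` before the test run, `docker compose down` after; the tmpfs mount means every run starts from a clean slate.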


I really don't understand why they do what they do.

Large corp gotta large corp?

My guess is that providing the ability to pull containers means you can run code that they haven't explicitly given permission for, and the laptop scanning tools can't hijack them?


For many companies, IP isn’t allowed to leave environments controlled by the company, which employee laptops are not.


Yes, zero latency typing in your local IDE on a laptop sounds like the dream.

In enterprise, we get shared servers with constant connection issues, performance problems, and full disks.

Alternatively we can use Windows VMs in Azure, with network attached storage where "git log" can take a full minute. And that's apparently the strategic solution.

Not to mention that in Azure 8 CPUs gets you four physical cores of a previous gen server CPU. To anyone working with 4 CPUs or 2 physical cores: good luck.


Browser + 2 vscode + 4 docker container + MS Teams + postman + MongoDB Compass

Sure it is bloated, but it is the stack we have for local development


> But, I assume devs just do RAM heavy dev work on a server over ssh.

This assumption is wrong. I compile stuff directly on my laptop, and so do a lot of other people.

Also, even if nobody ran compilers locally, there is still stuff like rust-analyzer, clangd, etc., which take lots of RAM.


Chrome on my work laptop sits around 20-30GB all day every day.


I wonder if having less RAM would compel you to read, commit to long term memory, and then close those 80 tabs you have open.


The issue for me is that bookmarks suck. They don't store the state (where I was reading), and they reload the webpage, so I might get something else entirely when I come back. They also kinda just disappear from sight.

If instead bookmarks worked like tab saving does, I would be happy to get rid of a few hundred tabs. Have them save the page and state like the tab saving mechanism does. Have some way to remind me of them after a week or month or so.

Combine that with a search function that can search the contents as well as the title, and I'm changing habits ASAP.


Regarding wanting to preserve the current version of a page: I use Karakeep to archive those pages. I am sure there are other similar solutions such as downloading an offline version, but this works well for me.

I do this mostly for blog posts etc I might not get around to reading for weeks or months from now, and don't want them to disappear in the meantime.

Everything else is either a pinned tab (<5) or a bookmark (themselves shared when necessary on e.g a Slack canvas so the whole team has easy access, not just me).

While browsing, the rest of my tabs are transient and don't really grow. I even mostly use private browsing for research, and only bookmark (or otherwise save) pages I deem to be of high quality. I might have a private window with multiple tabs for a given task, but it is quickly reduced to the minimum necessary pages, and the whole private window is thrown away once the initial source-material gathering is done. This lets me turn off address-bar search engines and instead search only saved history and bookmarks.

I often see colleagues with the same many browser windows of many tabs each open struggling to find what they need, and ponder their methods.


I've started using Karakeep as well; however, I don't find its built-in viewer as seamless as a plain browser page. It also runs afoul of pages that combat bots, due to its headless Chrome.

Anyway, just strikes me as odd that the browsers have the functionality right there, it's just not used to its full potential.


Websites that are walled off behind obscure captcha don't do well in Karakeep for sure, but so far for me those are usually e-commerce sites or sites I don't return to anyway.


If I'm doing work that involves three different libraries, I'm not reading and committing to memory the whole documentation for each of those libraries. I might well have a few tabs with some of those libraries' source files too. I can easily end up with tens of tabs open as a form of breadcrumb trail for an issue I'm tracking down.

Then there's all the basic stuff — email and calendar are tabs in my browser, not standalone applications. Ditto the ticket I'm working on.

I think the real issue is that browsers need some lightweight "sleep" mechanism that sits somewhere between a live tab and just keeping the source in cache.


I wonder if a good public flogging would compel chrome and web devs to have 80 tabs take up far less than a gigabyte of memory like they should in a world where optimization wasn’t wholesale abandoned under the assumption that hardware improvements would compensate for their laziness and incompetence.


The high memory usage is due to the optimization. Responsiveness, robustness, and performance were improved by making each tab an independent process. And that's good. Nobody needs 80 tabs; that's what bookmarks are for.


"that's what bookmarks are for"

And if you are lucky, the content will still be there the next time.


Is there a straightforward way to have one-process-per tab in browsers without using significant amounts (O(n_tabs)) of memory?


There is no justification for that IMHO. The program text only needs to be in memory once. However, each process probably has its own instance of the JS engine, together with the website's heap data and the JIT-compiled code objects. That adds up.


I'd very much like a crash in one tab not to kill other tabs. And having per tab sandboxing would be more secure, no?


What do you mean? All these features are provided by process per tab.


That's a weird assumption to make.


~10 projects in Cursor is 25GB on its own.


How much would it take up if there were less RAM available? A web browser with a bunch of tabs open but not active seems like the type of system that can increase RAM usage by caching, and decrease it by swapping (either logically at the application level, or by letting the OS actually swap).


The computer has 18GB of total RAM so I would hope that it’s already trying to conserve memory.

It’s kind of humorous that everyone interpreted the comment as complaining about Chrome. For all I know, it’s justified in using that much memory, or it’s the crappy websites I’m required to use for work with absurdly large heaps.

I really just meant that at least for work I need more than 8GB of RAM.


I do work off of a Chromebook with 8GB of RAM total, but I do keep an eye on how many tabs I have open.


You asked if there is a justification and then in the same post justified why you need it.


My post was about laptop RAM. I counted server-side RAM as a separate thing.


>But, I assume devs just do RAM heavy dev work on a server over ssh.

Why do you assume that? It's nice to do things locally sometimes. Maybe even while having a browser open. It doesn't take much to go over 8GB.


With 32 GB I can run two whole Electron applications! Discord and Slack!

It's a life of luxury, I tell you.


Browsers can get quite bloated, especially if one is not in the habit of closing tabs or restarting it from time to time. IDEs, other development tools, and most Electron abominations are also not shy about guzzling memory.


How does Mark Zuckerberg triggering a genocide in Myanmar, not to mention election interference, stack up against your disdain for EU digital policy?

Are politicians not supposed to do anything about Zuckerberg after watching Sarah Wynn-Williams testify about Mark Zuckerberg selling out Americans for his fetish for kissing up to the CCP? Or after hearing the current administration threaten the EU over impinging on Zuckerberg's ability to engage in election interference in EU countries?


The gamble these executives are making is that prosecutors in a different administration will not prosecute them for bribery.

If you watch House of Cards (based loosely on real life), you can see the degree of separation between corporations/lobbyists and Congressmen. These guys participating in building a ballroom are crossing that line. Juries will not have to connect so many dots compared to before in order to put someone in jail.


I started seeing AI slop of US military members celebrating their 1776 USD "bonus".


If companies operated as partnerships instead of limited liability companies, then I guess I could buy into this.

But states grant the special privilege of capping personal liability for investors. Perhaps states should rethink the conditions for granting this if too many companies act like Gordon Gekko psychopaths.

The British East India Company had its charter revoked once it started stepping over red lines. Voters need to reconsider the carte-blanche granting of privileges to corporate entities.


I want to see if kei trucks can break into the market. The last time a new product form broke into the US market, it was during a big recession (Japanese automakers got compact cars in). People will value functionality over form if we get into a prolonged recession.


I love that the world is loving kei trucks and kei cars right now.

I see them pretty often in Australia, which also has an anti-yank-tank movement (a tongue-in-cheek name for a big American "truck").

That said our most popular cars are still all three tonne utes or SUVs so it's a small movement.

You are right to note that the economic situation is a big part of vehicle decisions. Fuel prices have been a driving force, and image plays a big part too.


The chicken tax pretty much makes this impossible. The only domestic manufacturer interested in cheap light-duty trucks is Slate, which is still in the development phase and faces a lot of risks, notably high cost for the segment.


I think big vehicles are ugly and stupid so I like the form as well as the function.


Is running the git binary as a read-only nginx backend not good enough? Probably not. Hosting tarballs is far more efficient.
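For reference, serving repos read-only through nginx usually means wiring up git-http-backend (the smart-HTTP CGI program that ships with git) via fcgiwrap, since nginx has no native CGI support. A minimal sketch; the socket and binary paths vary by distro, and /srv/git is an assumed repo root:

```nginx
# Read-only smart-HTTP clones: git clone http://host/git/<repo>.git
location ~ ^/git(/.*)$ {
    include       fastcgi_params;
    fastcgi_pass  unix:/run/fcgiwrap.socket;                 # path varies by distro
    fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
    fastcgi_param GIT_PROJECT_ROOT /srv/git;                 # parent dir of bare repos
    fastcgi_param GIT_HTTP_EXPORT_ALL "";                    # export every repo under the root
    fastcgi_param PATH_INFO $1;
}
```

With no authentication configured, git-http-backend serves fetches only and rejects pushes, which matches the read-only use case; but as noted, a pre-generated tarball is still cheaper to serve than pack negotiation.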

