Hacker News | jordwest's comments

My hypothesis: we are social creatures with an innate instinct not to hurt others, but we’ve been trained to various degrees (through upbringing/trauma/school/work) to dissociate from the pain of hurting others.

The people who did continue to administer shocks were trying to focus on what they thought was the most important part of the task (pressing the lever), but internally the unconscious effort to habitually dissociate from their own discomfort led them to make more mistakes.

Combine this dissociation with a desire for power or status and you get the world we live in today.


I tend to agree. I’ve been thinking for years that this is the way we get out of this mess: lots and lots of smaller, independently owned forums that splinter off into small communities, instead of monolithic single-identity mechanisms like social media and, to some extent, the fediverse.

> The only social stuff I interact with anymore is a private forum that's paid

I’m curious: when you say private, do you mean you can only post if you’re a member, or is all post content viewable only by members too? And if the latter, how did you discover the forum, and how did you decide to join?


Nothing is accessible to non-members. It's only $5/mo, and the fee is explicitly there to filter out people who don't care enough about the subject matter to pay a nominal amount.

I found it because someone I followed in my Twitter days hated how nasty conversations inevitably got there, and wanted a place to have in-depth, non-ephemeral conversations with like-minded people.

These days you'd probably discover it via X where they still post, or their Substack.


I’ve been using the offline version of Options+ that somebody recommended to me a while back; it removes the AI features and auto-updating, and it has done the job for me.

It’s kind of hidden on their website but you can grab it here:

https://hub.sync.logitech.com/options/post/logi-options-offl...

That said, I think this will be my last Logitech device. They’re just not very durable products and die too quickly.


Thanks for pointing this out. I had no idea it existed. The other options in the comments just didn't quite work the way I would like.

- The main topic requires me to pull Python dependencies, build, and run manually on Mac

- All others can't reassign the button below the scroll wheel on the MX Master 3/4


I switched to the offline version right after Logitech forcefully, and without my permission, downloaded and installed a bunch of crap software on my Mac. I was furious that a stupid mouse driver app has the right to install random crapware. I’m still fuming about it when I remember it.


From an article [1]:

    We can build out discrete systems of brain cells and use them for the purpose we want. They're not going to have traits like consciousness, and we're able to test and assess for that, and build away from it if there is that risk.
Ah, I'm glad they've worked out what consciousness is. /s

From their marketing website [2]:

    Neural compute on demand: We continuously monitor neural health and performance, ensuring optimal conditions and continuous access to an always-on network of living neurons.
At what size of "neural compute" do we start to call it slavery?

[1] https://www.abc.net.au/news/science/2025-03-05/cortical-labs...

[2] https://corticallabs.com/cloud


I think it comes down to a fear of uncertainty. It's comfortable to believe in authority.

Authority provides an illusion of control, predictability, certainty and orderliness, and it's like we gravitate toward that even when it leads to bad outcomes for us.

For most of us the fear of being out of control seems to be greater than the fear of being controlled.


I'm not sure it's even uncertainty. Authority carries a bigger stick, and things like witch hunts and the burning of heretics and rebel peasants have selected against independence of mind over the centuries. Society has an unconscious memory of what used to happen when people disagreed. And in some places it still does.

People today worship the white lab coat and the military/police uniform in the same way their ancestors honoured witch doctors/shamans and the tribe's warriors. They assume the former will dish out good advice and the latter will protect them. The general public experiences this in hospitals and schools, with psychiatric hospitals being the most extreme version of hierarchy.

I've mentioned that I currently have two friends who are stuck in a mental hospital, and I have told both of them that they need to be respectful of staff if they want to get out sooner. The woman seems to have had her day passes revoked, and been placed on a more secure ward, after being cheeky to staff. Maybe the staff were awful, but she isn't in much of a position to negotiate — she's been in there for nine months.

(I've heard rumours of one of the other patients being sexually assaulted by staff, but thanks to the nature of these places I don't know whether it is fantasy or a real crime, since the supposed victim is doped up to the eyeballs much of the time and would not remember it properly.)


For me the biggest gaps in LLM code are:

- it adds superfluous logic that is assumed but isn’t necessary

- as a result the code is more complex, verbose, harder to follow

- it doesn’t quite match the domain because it makes a bunch of assumptions that aren’t true in this particular domain

They’re things that can often be missed in a first pass look at the code but end up adding a lot of accidental complexity that bites you later.

When reading an unfamiliar codebase we tend to assume that a certain bit of logic is there for a good reason, and that assumption helps you understand what the system is trying to do. With LLM-generated codebases we can't really assume that anymore unless the code has been thoroughly audited/reviewed/rewritten, at which point I find it's easier to just write the code myself.


This has been my experience as well. But, these are things we developers care about.

Coding aside, LLMs aren't very good at following nice practices in general unless explicitly prompted to. For example, if you ask an LLM to create an error modal from scratch, will it also make the text selectable, support Ctrl-C to copy it, or add a copy button? Maybe this is a bad example, but they usually don't do things like this unless you explicitly ask. I don't personally care too much about this, but I think it's noteworthy in the context of lay people using LLMs to vibe code.
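To make the point concrete: copyability has to be built explicitly. Here's a minimal TypeScript sketch, where `ErrorModal` and `buildCopyText` are hypothetical names I've made up for illustration (the browser-side wiring is shown only as comments):

```typescript
// Hypothetical sketch of a copy-friendly error modal. ErrorModal and
// buildCopyText are illustrative names, not any real library's API.
interface ErrorModal {
  title: string;
  message: string;
  details?: string;
}

// Text a "Copy" button would place on the clipboard.
function buildCopyText(m: ErrorModal): string {
  const parts = [m.title, m.message];
  if (m.details) parts.push(m.details);
  return parts.join("\n");
}

// In a browser, the render step would additionally need something like:
//   dialogEl.style.userSelect = "text";  // make the message selectable
//   copyBtn.onclick = () =>
//     navigator.clipboard.writeText(buildCopyText(modal));

const modal: ErrorModal = {
  title: "Save failed",
  message: "Could not write settings.json: permission denied",
};
console.log(buildCopyText(modal));
// → "Save failed\nCould not write settings.json: permission denied"
```

None of this is hard, but it's exactly the kind of affordance that only appears when someone thinks to ask for it.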


I've seen a lot of examples where it fails to take advantage of previous work and rewrites functionality from scratch.


> it’d be wasteful for evolution to only use the brain for computation

Even what we consciously experience of the brain is really only a tiny part of it.

The little language centre and the capacity to imagine are only a tiny subset of a multitude of brain functions and yet we believe that those two functions make up “me”. Actually it’s just those two functions telling a story that they are me.


A common trick is that the first click on the X will go to the ad, but if you return and click the X again it will close, gaslighting you into thinking you just misclicked the first time.

Another trick that I’ve noticed on the Reddit app is that the tappable area is much larger for ads than for normal posts. If you tap even near the ad, it will visit the ad.


Also, making the hit area smaller than the close graphic itself is a popular one.
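The trick is easy to sketch as rectangle hit-testing. In this TypeScript sketch, `Rect`, `inflate` and `contains` are illustrative names, not any real framework's API: an honest UI pads the hit area beyond the visible icon, while the dark pattern shrinks it below the graphic:

```typescript
// Illustrative hit-testing sketch; Rect, inflate and contains are
// hypothetical names, not any real framework's API.
interface Rect { x: number; y: number; w: number; h: number; }

// Grow (pad > 0) or shrink (pad < 0) a rectangle around its centre.
function inflate(r: Rect, pad: number): Rect {
  return { x: r.x - pad, y: r.y - pad, w: r.w + 2 * pad, h: r.h + 2 * pad };
}

function contains(r: Rect, px: number, py: number): boolean {
  return px >= r.x && px <= r.x + r.w && py >= r.y && py <= r.y + r.h;
}

// A 16x16 close icon drawn at (300, 10).
const closeIcon: Rect = { x: 300, y: 10, w: 16, h: 16 };

// Honest UI: pad the hit area so the icon is easy to tap.
const fairHit = inflate(closeIcon, 8);
// Dark pattern: shrink the hit area below the visible graphic.
const darkHit = inflate(closeIcon, -5);

console.log(contains(fairHit, 298, 12)); // tap just outside the icon → true
console.log(contains(darkHit, 302, 12)); // tap inside the icon's edge → false
```

With the shrunken rectangle, a tap that lands squarely on the drawn X still misses the clickable region, which is what makes you think you misclicked.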


> Every previous job I've had has a similar pattern. The engineer is not supposed to engage directly with the customer.

Chiming in to say I’ve experienced the same.

A coworker who became a good friend ended up on a PIP and was subsequently fired for “not performing”, soon after he built a small tool for a non-technical team that really helped them do their job quicker. He wasn’t doing exactly as he was told, and I guess that’s considered not performing.

Coincidentally the person who pushed for him to be fired was an ex-Google middle manager.

I’ve also commonly seen this weird stigma around engineers, as if we’re considered a bit unintelligent when it comes to what users want.

Maybe there is something to higher-ups having more knowledge of the business processes and the bigger picture, but I’m not convinced it isn’t also largely down to insecurity and power issues.

If you do something successful that your manager didn’t think of and your manager is insecure about their own abilities, good chance they’ll feel threatened.


> but if a OS manufacturer can’t be bothered to interact with their own UI libraries to build native UIs

But if they don’t use web tech it would be too expensive to build the start menu in a way that works cross platform!

Oh wait

