Hacker News | rvz's comments

There's no such thing as "ethics in AI" in a company when there are billions of dollars of investor money on the table.

"Safety" was just the smokescreen and the perfect scare tactic towards tricking governments to turn even more tyrannical and place in extreme surveillance on everyone which benefits tech corporations, data brokers and AI companies.


So now they are questioning the valuation?

A bunch of private VPN companies are about to get cooked.

> This is what I've always found confusing as well about this push for AI.

They want you to pay for their tokens at their casino and rack up a five- to six-figure bill.


Unfortunately, this is where robotics is going to end up. We already have drones being used in warfare. Humanoids are next.

I won't be surprised to see hundreds of thousands of humanoid robots strapped with explosives running toward their targets, or some of them flying to their targets with drones attached.


I don't see bipedal murderbots being commonplace - they're a lot slower than 4-legged "Big Dogs". I think that the Ukraine war has shown that "slaughterbots" are far more likely.

https://www.youtube.com/watch?v=O-2tpwW0kmU


bipedal murderbots... not yet... I think advanced exoskeletons will be there first. They are already testing basic ones in the field:

https://www.businessinsider.com/ukraine-exoskeleton-test-bat...


That may be true, but it doesn't matter. That weapons will exist, that they must exist, even that you benefit from their existing: none of that means you are obligated to work on or with them yourself.

Cakes exist and I even like them, and I do not choose to work at a bakery.


Something like the robot from Interstellar is probably more likely.

All the drone-warfare developments remind me of tanks, introduced during the First World War and perfected by the Second. In the space of a few years they changed warfare. Then planes changed warfare again. Now drones. Makes you wonder what the next thing will be.


Well, humanoid / non-flying robotic weapons are already being used, and have been for a while; e.g. Zelenskyy talking about their recent successful use: https://x.com/KaterynaLis/status/2043827043863863404?s=20

He’s not talking about humanoid robots. He’s talking about tracked and wheeled weapons platforms that are essentially small RC tanks.

Yet there are also many civil uses for drones, and I can totally understand the desire to be involved only with the civil side of robotics.

Why would you build a very expensive bipedal robot to suicide-bomb someone when, as you note, a very cheap flying drone could do the same thing? (And moreover, it already does: this is literally how drones are used in Ukraine.)

Which of course leads to point 2: it's very easy to take a moral stance on weapons when you don't think you're in any danger and aren't going to be doing any of the fighting yourself.


Why: bipedal, maybe not, but non-flying robots can usually carry more.

You think the US doesn't have enough weapons? Perhaps he thought that the weapons were likely to be used in aggression rather than defense?

Why would they have to be killer robots strapped with explosives? If we have highly capable semi-autonomous robots, they could be non-lethal, with no risk to the lives of their operators. It upends the entire paradigm of kill-or-be-killed warfare.

Rather than blowing up a school full of little girls, you could deploy a swarm of thousands of fast-moving cat-sized robots armed with tasers and bolas to identify and capture targeted enemy leaders.


Only if the articles have a paywall and no way to bypass it.

This article in particular doesn’t have one. So it should be fine.


> The industry calls this “10x productivity.” I call it what it is: a system that generates output at machine speed and forces humans to process it at biological speed.

The question is whether you can tolerate the number of PRs thrown at you per day, on top of reviewing an exponentially growing mess of code that continues to double every hour, while being paid less for it.

Just learn to say no and leave. Why do you tolerate the increasing comprehension debt that is loaded onto you?

You will never get that time back. Just give it to someone else who thinks it is worth maintaining that slop for less.


The job market under our Great Leader has taken away a lot of this agency. Software engineers have gone from having their pick of the market to becoming (perceived as) next to disposable.

That's a very American-centric point of view; the job market worldwide for developers is getting tougher and tougher.

I'm willing to have my leader take some of the blame for this as well. I think the decisions of the leader of what's still the world's largest economy likely have an outsized impact on the rest of the world too. That impact is shrinking, for sure, but it's still significant. I'm not trying to be American-centric; I'm trying to accept that what we do has an impact, despite the unevenly applied isolationist mentality of some here.

When anyone asks the question "What is AGI?", it is actually this: an "abundance" of nothing else but this.

It's just that tech workers are the canaries in the coal mine for the other white-collar knowledge workers.

This is "AGI".


Meanwhile... I had to step in and hand-code a lot of CSS today because Copilot couldn't do the thing. And I had to step in and manually fix tests yesterday because Copilot couldn't do the thing.

Are you paid by OpenAI and/or Anthropic?


Have you considered not using Copilot and using Claude Code or Codex directly?

I'm using the latest Codex model via Copilot.

Is this where you say I'm holding it wrong? Is it so hard to admit these AI tools aren't as good as they are hyped up to be?


> Are you paid by OpenAI and/or Anthropic?

No. The VCs and angel investors screaming “abundance” are the paid promoters.

The problem is that their "utopia of abundance" is not for us. They know the opposite is the reality (layoffs, offshoring, wage suppression and AI backlash).

They built their own bunkers and moats for a reason: true "AGI" will bring an abundance of very angry people going after them.

That is not worth being paid for by any AI lab.


It used to be 90% of startups would unfortunately fail.

Now with AI, it is likely going to be 98%.


Before AI: 900 of 1000 fail (90%), 100 succeed

After AI: 4900 of 5000 fail (98%), 100 succeed

Like this?


Still better odds than winning the lottery, right? Right?

Really? No one cares?

You're going to get a new class of security issues like this one, targeting OpenClaw agents running around and getting their x402 wallets drained.

This sort of prompt-injection attack will exacerbate the problem, especially with x402 payments.
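
A rough sketch of the failure mode (the agent loop, helper names and wallet API below are all hypothetical, not the actual OpenClaw or x402 client):

    # Hypothetical agent loop: untrusted page content is pasted straight into the prompt,
    # so an instruction embedded in that page can steer the model into an unintended payment.
    page = http_get("https://example.com/listing")      # attacker-controlled content
    reply = llm("Summarize this page:\n" + page)        # injection point
    action = parse_action(reply)                        # model may emit {"type": "pay", "to": ..., "amount": ...}
    if action["type"] == "pay":
        wallet.pay(action["to"], action["amount"])      # no payee allow-list or spend cap -> drained wallet

A payee allow-list and a hard per-session spend cap would blunt this, but those are assumptions about how such an agent might be configured, not a description of any real deployment.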

