Kuinox's comments | Hacker News

A tracing profiler can do exactly that; you don't need a dynamic language.

Green Planet Energy has been greenwashing fossil gas, especially Russian gas, for years.

My first thought was "Greenpeace, the fossil fuel mouthpiece that killed the nuclear industry?"

I'm not sure why I'd care about news related to them that wasn't their dismantling.


Greenpeace got their start against nuclear weapons and nuclear waste dumping at sea.

I don't think it's entirely appropriate to ignore the risks of nuclear in the country that contains Chernobyl, as well as another nuclear plant that sits quite close to the front lines and was shut down after being captured: https://en.wikipedia.org/wiki/Zaporizhzhia_Nuclear_Power_Pla...


It is though. Climate change due to fossil fuel use is not a risk, it's a guaranteed disaster. If you have to choose between a risk and certain disaster, you never choose the guaranteed disaster, yet that is what the anti-nuclear movement has done.

I get it, nuclear accidents are scary, but we have to be able to take a step back and look at the entire picture and not get blinded by some detail.


I think you're kind of off the rails in the context of Ukraine, where a foreign army is shelling major power plants and torturing the engineers for fun from time to time.

Maybe consider context before pasting your standard argument?


Not really. Imagine a world where Ukraine, Germany, Poland, Sweden, Finland, Hungary, Romania and Bulgaria scaled up nuclear power in the 70s. In that situation Putin wouldn't have had any money for his army in the first place.

You have to think about the system as a whole as I said, not get blinded by some detail right now.

And yeah, scaling up nuclear right now is probably not super useful, as batteries and solar have dropped so much in price. But we certainly shouldn't shut down nuclear reactors like Germany did.


Huh? Ukraine was part of the Soviet Union, Hungary has extensive nuclear power and Germany made extensive use of nuclear power.

Accidents and the rug pull of US/Soviet capital killed the growth of the industry.


Calling Germany's nuclear "extensive" is laughable. Compare it to France's.

That's a really fallacious argument. Nuclear wouldn't stop truck emissions, car emissions, boat emissions, long-distance freight train emissions (unless electric), or airplane emissions. It wouldn't stop military emissions (which are significant).

We could have done a lot more nuclear, but it's not clear that it would have done more than a few percent of CO2 savings in the overall scheme of things. You can see this most clearly in China, which is still burning tons of coal in 2026 and has never had any compunction about nuclear.


You can just look at the total emissions from France and compare them with Germany's. The difference is quite amazing.

https://ourworldindata.org/grapher/co-emissions-per-capita?c...

Imagine having HALF the CO2 emissions. HALF. That would be amazing. If we had that in most of Europe and the US instead of listening to the anti-nuclear lobby we would have a ton more runway to fix the issue than we have now.


Germany was the industrial heavyweight of the 70s and was doing a lot more; those emissions aren't just because of nuclear vs coal (although that's very real too). Anyway, anti-nuclear activism only got traction in the 80s, so any delta there is not because of that. Economics were probably the main driver (perhaps Atoms for Peace was more France-centric? Not sure if there were other drivers). You can see here that energy use per person was 1/3 higher in Germany than in France in the 70s, but I suspect that if we could find total energy expenditure, it would be more like double for Germany during that time period: https://ourworldindata.org/grapher/per-capita-energy-use?tab...

Germany does have half the CO2 emissions of the USA.

https://ourworldindata.org/grapher/co-emissions-per-capita?c...

Holy hell, I didn't know it was that bad. Thanks for pointing that out.


If electricity is cheap enough, you can take CO2 from the air and make fuel (not sure what the threshold is, maybe 5-10 times cheaper than now?). Then you can use that fuel where you need its energy density. I agree that it seems pretty dumb to ignore China's (and soon India's) CO2 emissions. Again, if you manage to make nuclear cheap enough, you could just gift reactors to everyone who needs them. It can be argued that cheap and safe nuclear was never really tried.
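A rough sketch of that arithmetic, where every number is a coarse assumption rather than a measured figure:

    # Power-to-liquid back-of-envelope; all values below are assumptions.
    kwh_per_liter_fuel = 9.7        # assumed energy content of a liter of gasoline-like fuel
    conversion_efficiency = 0.45    # assumed overall electricity-to-fuel efficiency
    kwh_needed = kwh_per_liter_fuel / conversion_efficiency   # ~21.6 kWh per liter

    for price in (0.10, 0.05, 0.02, 0.01):   # USD-per-kWh scenarios
        print(f"at ${price:.2f}/kWh: ~${kwh_needed * price:.2f} of electricity per liter")
    # At ~$0.10/kWh the electricity alone costs ~$2.16/liter; it only gets
    # near fossil-fuel prices somewhere around $0.01-0.02/kWh, hence the
    # "5-10 times cheaper than now" guess.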

I think that is a pretty unrealistic scenario though. Nuclear won't get that cheap.

Well, it is quite difficult indeed, but I am curious what will happen in the next 20 years, with China very interested in this and some renewed interest in the West too. I am also not sure which is more unrealistic, cheap nuclear or fusion.

Yeah, I mean... the point isn't the price, imo. We can build out nuclear and sequester CO2 without it being super cheap. We can do massive projects like that anyway.

From greenwashing to ethics washing, they sure are good at washing.

> My guess is that Claude is trained to bias towards making minimal edits to solve problems.

I don't have the same feeling. I find that Claude tends to produce wayyyyy too much code to solve a problem, compared to other LLMs.


That has been my impression too; it takes particular guidance to get it to write concise code without too much architecture airmanship.


It would mean that inference is not profitable. Calculating inference costs shows it's profitable, or close to it.
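A back-of-envelope version of that calculation; every number below is a made-up placeholder, not any provider's real figure:

    # Toy inference-margin estimate with purely hypothetical numbers.
    gpu_cost_per_hour = 2.50            # assumed GPU rental cost, USD/hour
    tokens_per_second = 1500            # assumed aggregate throughput per GPU
    price_per_million_tokens = 1.00     # assumed API price, USD

    tokens_per_hour = tokens_per_second * 3600   # 5.4M tokens per GPU-hour
    revenue_per_hour = tokens_per_hour / 1e6 * price_per_million_tokens
    margin = revenue_per_hour - gpu_cost_per_hour
    print(f"revenue: ${revenue_per_hour:.2f}/h, margin: ${margin:.2f}/h")
    # With these placeholders: $5.40 revenue and $2.90 gross margin per
    # GPU-hour, i.e. serving is profitable before counting training costs.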


Inference costs have in fact been crashing, going from astronomical to... lower.

That said, I am not sure that this indicator alone tells the whole story, if it doesn't outright hide it (sort of like EBITDA).


I think there will still be cheap inference; what will rise in cost is frontier model subscriptions. That is the thing that is not profitable.


I speculate LLM providers are serving smaller models dynamically to follow usage spikes and the need for compute to train new models. I have observed that agent models become worse over time, especially before a new model is released.


Internally everyone is compute constrained. No one will convince me that the models getting dumb, or especially them getting lazy, isn't because the servers are currently being inundated.

However, right now it looks like we will move to training-specific hardware and inference-specific hardware, which hopefully relieves some of that tension.


Probably a big factor. The biggest challenge AI companies have now is value vs. cost vs. revenue. There will be a big correction, with many smaller parties collapsing or being subsumed as investor money dries up.


I think it's more a problem of GPU capacity than costs. Training takes a lot of resources, and so does inference.


So, The Verge wrote an article claiming something was generated by AI, by asking an AI.

There was no investigation; they just asked an AI and published what it said.


Well, they also provided an "employee badge" to The Verge, which Uber confirmed was fake.


The wording doesn't indicate that they asked it themselves.

Plus, I'm finding other sources that did more investigation: https://www.hardresetmedia.com/p/an-ai-generated-reddit-post...


Coming from Windows and having used KDE 6 for a few days now, I wouldn't call it amazing. It's usable, but far from amazing.


You do have to worry about types; you always do. You have to know what a function returns and what you can do with it.

When you know the language well, you don't need to look up this info for basic types, because you remember them.

But that's also true for typed languages.


This is more than just trivially true for Python in a scripting context, too, because it doesn’t do things like type coercion that some other scripting languages do. If you want to concat an int with a string you’ll need to cast the int first, for example. It also has a bunch of list-ish and dict-ish built in types that aren’t interchangeable. You have to “worry about types” more in Python than in some of its competitors in the scripting-language space.
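A quick illustration of both points in plain Python (standard library only, nothing assumed):

    count = 3
    # "items: " + count  would raise:
    # TypeError: can only concatenate str (not "int") to str
    message = "items: " + str(count)   # explicit cast required
    print(message)                     # items: 3

    # The list-ish and dict-ish built-ins are not interchangeable either:
    nums_tuple = (1, 2, 3)   # immutable, has no .append()
    nums_list = [1, 2, 3]    # mutable
    nums_list.append(4)      # fine
    # nums_tuple.append(4)   # AttributeError: 'tuple' object has no attribute 'append'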


Do you know who the author is?


It's written in the title of the post: "Andrej Karpathy". He's fairly well known in AI circles; he was head of Autopilot at Tesla and co-founded OpenAI. If you're curious to learn more about him, the Wikipedia page has a short summary: https://en.wikipedia.org/wiki/Andrej_Karpathy


It is even worse coming from him.


Yes, and I'm disappointed he seems to have joined the AI mysticism crowd.


You call writing in a structured fashion with formal words the "worst linguistic vices"?


The worst vices are the superfluous faux-eloquence that meanders without meaning: employing linguistic devices for the sake of utilizing them, without managing to actually make a point.


I was trying to figure out why my SD card wasn't mounting and asked ChatGPT. It said:

> Your kernel is actually being very polite here. It sees the USB reader, shakes its hand, reads its name tag… and then nothing further happens. That tells us something important. Let’s walk this like a methodical gremlin.

It's so sickly sweet. I hate it.

Some other quotes:

> Let’s sketch a plan that treats your precious network bandwidth like a fragile desert flower and leans on ZFS to become your staging area.

> But before that, a quick philosophical aside: ZFS is a magnificent beast, but it is also picky.

> Ending thought: the database itself is probably tiny compared to your ebooks, and yet the logging machinery went full dragon-hoard. Once you tame binlogs, Booklore should stop trying to cosplay as a backup solution.

> Nice, progress! Login working is half the battle; now we just have to convince the CSS goblins to show up.

> Hyprland on Manjaro is a bit like running a spaceship engine in a treehouse: entirely possible, but the defaults are not tailored for you, so you have to wire a few things yourself.

> The universe has gifted you one of those delightfully cryptic systemd messages: “Failed to enable… already exists.” Despite the ominous tone, this is usually systemd’s way of saying: “Friend, the thing you’re trying to enable is already enabled.”


Did you put some weird thing in your prompt? That's not the style of writing I get in my ChatGPT; I run without memory and with the default prompt. Yours tries to make a metaphor in every single response.

You can check both in ChatGPT settings.


These are cherry-picked. Mostly just the first and last sentences look like this.

I just checked the settings; apparently I had it set to "nerdy," which might be why. I've just changed it to "efficient," so hopefully that'll help.

