One mental model I have with LLMs is that they have been the subject of extreme evolutionary selection forces that are entirely the result of human preferences.
Any LLM not sufficiently likable and helpful in the first two minutes was deleted or not further iterated on, or had so much retraining (sorry, "backpropagation") that it's not the same model it started out as.
So it's going to say whatever it "thinks" you want it to say, because that's how it was "raised".
Fully agree. I wonder how this will show up in the long term. Will every business/CEO do more of what they already want to do anyway, but now supported by AI/LLMs?
The possibilities in "dangerous" fields are a bit more frightening. A general is much more likely to ask ChatGPT "Do you think this war is a good idea / should I drop this bomb?" rather than using it as an actually helpful tool, where you might ask "What are 5 hidden points in favor of or against bombing that one has likely missed?"
The more you use AI as a strict tool that can be wrong, the safer you are. Unfortunately I'm not sure that helps if the guy bombing your city (or even your president) is using AI poorly, and their decisions affect you.
> Will every business/CEO do more of what they already want to do anyway, but now supported by AI/LLMs?
Arguably, it already worked that way. The best way to climb the ranks of a 'dictatorial' organization (a repressive government or an average large business) is to always say yes. Adopt what the people from up above want you to use, say and think. Don't question anything. Find silver linings in their most deranged ideas to show your loyalty. The rich and powerful that occupy the top ranks of these structures often hate being challenged, even if it's irrational for their well-being. Whenever you see a country or a company making a massive mistake, you can often trace it to a consequence of this. Humans hate being challenged and the rich can insulate themselves even further from the real world.
What's worrying me is the opposite - that this power is more available now. Instead of requiring a team of people and an asset cushion that lets you act irrationally, now you just need to have a phone in your pocket. People get addicted to LLMs because they can provide endless, varied validation for just about anything. Even if someone is aware of their own biases, it's not a given that they'll always counteract the validation.
This. Make sure the 'active' flag (or deleted_at timestamp) is part of most indexes and you're probably going to see very small impacts on reads.
It then turns into a slowly-growing problem if you never ever clean up the soft-deleted records, but just being able to gain auditability nearly immediately is usually well worth kicking the can down the road.
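For what it's worth, on databases that support partial indexes you can go a step further and index only the live rows, so reads don't pay for the soft-deleted pile at all. A minimal SQLite sketch (the table and column names are just illustrative):

```python
import sqlite3

# Soft-delete pattern: rows are never removed, just stamped with deleted_at.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL,
        deleted_at TEXT  -- NULL means the row is live
    )
""")
# A partial index covers only live rows, so queries that filter on
# "deleted_at IS NULL" stay fast even as soft-deleted rows accumulate.
conn.execute("""
    CREATE INDEX idx_users_live_email
    ON users (email) WHERE deleted_at IS NULL
""")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
conn.execute(
    "UPDATE users SET deleted_at = datetime('now') WHERE email = 'a@example.com'"
)
live = conn.execute(
    "SELECT COUNT(*) FROM users WHERE deleted_at IS NULL"
).fetchone()[0]
print(live)  # the soft-deleted row no longer counts as live
```

The audit trail stays queryable (drop the `WHERE deleted_at IS NULL` filter) without bloating the hot index.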
So how do we upgrade the heat rejection system of this planet?
Clearly removing CO2 is expensive, but can we just paint some of the desert with a coating that reflects in the infrared window? Or make clouds, as Neal Stephenson describes in his novel Termination Shock?
Which is still slightly useful - I've got two Dell Wyse 5070s, fanless, and being able to load them with 16 GB of DDR3 RAM each for a song made them an obvious upgrade from being so cramped for RAM on a Raspberry Pi 4.
I should probably sort through some old boxes and eBay the stuff I've saved for no particular reason. It's not (I hope!) going to get any more valuable than it already isn't, and I'm not realistically going to build Frankenstein DDR1/2/3 systems rather than use a more modern, low-power Pi/SBC or NUC, even if I have to buy the latter!
If you're okay with DDR3-like memory bandwidth you can get that cheaply on a modern system by getting Intel Optane NVMe/PCIe media (solid state storage much like NAND, but wearout-resistant well beyond even the best SLC NAND) and setting it up as swap. If you're either memory-bandwidth bound (common for local AI, not so much otherwise) or not OK with the power reqs of Optane, you're going to need actual expensive DRAM.
I tried the Zed editor and it picked up Ollama with almost no fiddling, so I've been able to run Qwen3.5:9B just by tweaking the Ollama settings (which had a few dumb defaults, I thought: assuming I wanted to run 3 LLMs in parallel, initially disabling Flash Attention, and a very short context window...).
Having a second pair of "eyes" to read a log error and dig into relevant code is super handy for getting ideas flowing.
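For anyone hitting the same defaults: parallelism and flash attention are server-side environment variables (OLLAMA_NUM_PARALLEL, OLLAMA_FLASH_ATTENTION), while the context window can be raised per-request via the HTTP API's "options" field. A sketch of such a request payload (the model name and num_ctx value are just examples):

```python
import json

# Per-request override of Ollama's short default context window.
# num_ctx is a real Ollama option; the specific value is an assumption
# about what your VRAM can hold.
payload = {
    "model": "qwen2.5-coder:14b",
    "prompt": "Explain this Elixir snippet: ...",
    "options": {"num_ctx": 16384},  # tokens of context for this request
    "stream": False,
}
# POST this JSON body to http://localhost:11434/api/generate
body = json.dumps(payload)
```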
Terraform Industries is working on that - burstable synthetic methane generation using cheap catalysts you can afford to idle, generating methane only when electricity is cheap.
I personally have used Qwen2.5-coder:14B for "live, talking rubber duck" sorts of things.
"I am learning Elixir, can you explain this code to me?" (And then I can also ask follow-up questions.)
"Here is a bunch of logs. Given that the symptom is that the system fails to process a message, what log messages jump out as suspicious for dropping a message?"
"Here is the code I want to test. <code> Here are the existing tests. <test code> What is one additional test you would add?"
"I am learning Elixir. Here is some code that fails to compile, here is the error message, can you walk me through what I did wrong?"
I haven't gotten much value out of "review this code", but maybe I'll have to try prompting for "persona: brief rude senior" as mentioned elsewhere.
Yeah, but if the problem you are solving is rare for most practitioners, effectively theoretical until it actually happens, then people won't switch until they get bit by that particular problem.
Solar has one of the lowest capital costs [1], so the discounting works in its favor. And the non-discountable operating costs also work in its favor, since the fuel supply (sunlight) is free.
Yup. It's why even in fairly red states like my own (Idaho) solar, wind, and battery are going up everywhere. Even without significant subsidies the economics are really good for renewables.
They'd be even better if we didn't have extreme tariffs on China.
That's actually what's convinced me that renewables are a better choice than nuclear. I still like nuclear, but renewables are just so much easier and faster to deploy while being a lot cheaper. Making nuclear competitive requires regulatory changes along with a government that's simply willing to tell its NIMBY citizens YIMBY.
Government literally has to get in the way of renewable deployments at this point to stop them.
No, the problem is he speaks like he doesn't understand discounting at all. He treats recurring revenue (or energy) as fundamentally different from a one-time gain, as opposed to something you funge via discount rates.
You are one level ahead: I'm more than happy to debate what discounting rates you should be using!
Huh! I'm kind of stunned that we only use ~30x the power we did back then. If I'd been asked to guess, I would have added another zero, or even two.
Yeah, we had an exponential jump when we discovered oil, but we maxed that out and growth has been linear since (and we're paying for it environmentally, too).
I’m waiting for the next big major discovery in energy generation.
We’re always on the verge of fusion… fusion will be like the discovery of oil. Humanity will jump forward… well, technologically at least.
Nuclear is low-carbon, and it's fine that we lose heat extracting that energy (unlike with stationary and mobile combustion generation), since there is no other effective way to extract that energy at this time.
> Batteries are now cheap enough to unleash solar’s full potential, getting as close as 97% of the way to delivering constant electricity supply 24 hours across 365 days cost-effectively in the sunniest places.
What does this mean? It means we are most of the way there with solar and batteries alone, even if we need a bit of carbon based generation to bridge the gap while solar and battery deployments scale globally. Solar and batteries will only continue to get less expensive and better.
Pumped hydro energy storage relies on the cheapness of water and existing geology. If you have to build the chambers instead of damming a river it's too expensive. Most of the good spots to have a reservoir are already used. If you have to manufacture the bulk media instead of just using water it's too expensive.
The argument against lifting concrete is that you can dig a hole in the ground and pump water in/out of it for more reliability and lower cost than having a crane lift and lower concrete, and it's easy to make it much bigger both horizontally and vertically, so why bother.
But it does appear to be economical even with that, and water is cheaper.
We make lots of holes in the ground on a regular basis, including for extracting fossil fuels. Here are two (note the scale bar), though I have no idea what the rock around them is like regarding water losses: https://www.google.com/maps/@50.9063171,6.4418046,17655m/dat...
There are exactly zero economically viable pumped water storage systems where water towers are involved. If you do the math on the mass of water involved, you'll see why! It's not feasible.
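The math really is short: stored energy is m·g·h, and even a big tower holds very little. A back-of-envelope sketch (the tower volume and height are rough assumptions):

```python
# Gravitational storage: E = m * g * h.
# Assume a large municipal water tower: ~4,000 m^3 of water raised ~40 m.
m = 4_000 * 1_000   # kg (1 m^3 of water is about 1,000 kg)
g = 9.81            # m/s^2
h = 40              # m (average height of the stored mass)
energy_kwh = m * g * h / 3.6e6  # joules -> kWh
# ~436 kWh: an entire water tower stores only a few dozen
# home batteries' worth of energy.
```

That's why the height difference (or the sheer volume) of a dammed reservoir or a deep mined chamber is essential; a tower-sized tank is orders of magnitude too small.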
> Something wind, solar and batteries for the next 50 years aren't.
False. If you'd stopped before the "and" you would have been correct, though.
Batteries are really cheap now, and supply of batteries is growing basically as fast as people can get the investments and permissions for the inputs and the factories.
I managed to get qwen2.5-coder:14B working under Ollama on an Nvidia 2080 Ti with 11 GB of VRAM, using the ollama CLI, outputting what looks like 200 words per minute to my eye.
It has been useful for education ("What does this Elixir code do? <paste file>" ... <general explanation> ... then "What does this line mean?")
as well as getting a few basic tests written when I'm unfamiliar with the syntax. ("In Elixir Phoenix, given <subject under test, paste entire module file> and <test helper module, paste entire file> and <existing tests, pasted in, used both for context and as examples> , what is one additional test you would write?")
This is useful in that I get a single test I can review, run, and paste in, and I'm not using any quota. Generally I have to fix it, but that's just a matter of reading the actual test and feeding the test failure output to the LLM to propose a fix. Some human judgment is required, but once I got going, adding a test took 10 minutes despite my being relatively unfamiliar with Elixir Phoenix.
It's a nice loop, I'm in the loop, and I'm learning Elixir and contributing a useful feature that has tests.