Do you have any sources to back up your feelings? I’m basing my comments on what I’ve read about the matter from a variety of former public company CEOs, CFOs, and COOs.
I am coming at this from the perspective of a worker who used to get quarterly options in the public company I worked for, and I just cannot for the life of me sympathize with a company complaining that it can only afford to gather the information needed to value the stock it pays me in twice a year. I don't care how much it costs them. If you are gonna be paying and trading in stocks, I expect you to do the work required.
I understand your view, and agree that transparency is good, but "the work required" is largely preventing and defending against lawsuits by plaintiff lawyers, and those lawsuits cannot possibly benefit the shareholders (because whether the suit is won, lost, or settled, the money all goes from one pocket to another, with a cut going to the lawyers).
This may sound rough, but I don't care about shareholders. In fact I consider them my enemy, or at least my class enemy. Whenever they make money off the shares of the company I work for, I consider that exploitation, and I want them to stop doing it. I also want them to stop paying me in stocks, and I want my, and my fellow workers', pension funds to stop trading in stocks. My shareholders are my exploiters and my enemy, and my pension fund should not be my exploiter nor my enemy.
But while we live in this system which forces stocks onto me, and I have no say in the matter, I want it as transparent as possible, and I don't care how much it costs my enemies.
Ahhh yes. As we all know, regulations and requirements and bureaucracy never have unintended consequences, especially on the little guy. All that matters is intent, right?
Knitting machines don't generate the design from a prompt, and neither do industrial knitwear production facilities. In fact, knitting machines require quite a lot of manual input on the way to the final product, including careful programming.
Not equally true at all. Far from it. If you have ever seen people use a knitting machine, you would know the amount of skill required to operate one is far beyond what writing a prompt takes. The same is true of looms, etc.
In fact this whole analogy makes no sense; a knitting machine is far closer to a compiler in this analogy than it is to a language model. Many would argue that automatic looms were the first compilers of the industrial age, and I would agree with that argument.
I was never talking about a knitting machine in the first place. Rather, I was referring to the old lady sitting on her sofa, knitting a sock she could also buy for a dollar, but decides to do it herself for the love of the game and nostalgia: a hobby.
The "art" of programming is going exactly that route, maybe with a little fewer ladies and more men.
I didn't hear the exact analogy, so I made some assumptions. But I fail to see any insightful analogy that could make such predictions, unless the analogy rests on some flawed assumptions about industrial knitwear production.
An old lady could equally sit in front of her desktop PC, write some HTML, and upload a blog page with her amazing knitting projects, or she could get Pinterest. This was true before LLMs, and it is still true today.
Another potential flaw is the assumption that professional knitwear design does not exist. It does. Plenty of people work in industrial-scale knitwear production. We have people designing new products, making patterns and recipes, and we have manual labor in production, operating machines or even knitting by hand. Case in point: travel anywhere and go to a local market popular with tourists, and you will see plenty of mass-produced knitted products, most of which took great skill to design and produce. Nothing comparable to prompting an LLM to do it for you.
Not for long, presumably. Apparently the majority of marketable skills will come from a handful of capex-heavy, trillion-dollar corporations, and you will like it.
Irrelevant aside: I hold a grudge against the economists who picked the letter K to represent increased inequality. They missed the perfect opportunity to use the less-than inequality symbol (<) and call it a "less-than economy".
I think it is unfair to specify "safe" here, as probably all nuclear power plants are considered safe until they are not, including Fukushima. But plenty of European countries are either building or planning new nuclear reactors, and Finland just opened a new reactor in 2023.
But the simple matter is that the economics of nuclear power are just not delivering. Nuclear plants are expensive and slow to build, while wind (particularly offshore wind) and solar are getting cheaper and easier to build every year (or even every month). Germany also stands out as a success story of nuclear phase-out: retiring these expensive-to-run nuclear plants offered the economic wiggle room to phase in renewables a lot faster than would otherwise have been possible.
The US is in an excellent position to massively harness wind and solar and yet right now it's dialing up the coal usage. I am comfortable celebrating Iceland's decision to not be maliciously dependent on fossil fuels.
I consider minimizing a natural decline with artificial subsidies to be ramping up. Maybe a fairer phrase would be "dragging out production", but either way the administration is putting a thumb on the scale to counter natural market forces and perpetuate a dumb thing.
We've banned this account for continually posting comments like this that are unsubstantive and clearly in breach of the guidelines and HN's intended use.
I mean, the EIA says "U.S. generation fueled by coal increased by 13% in 2025 to 731 BkWh"
The article you linked is mostly about a model of 2026 and 2027, and sure, in the model coal goes away, but that's not a fact about coal; it's just a model.
Yes, with the next sentence explaining why, and how it is planned to decrease in future years.
"Ramping up" means planned to increase.
Feel free to provide a reference that supports that it's "ramping up". I, and parent, couldn't find one. This is a super boring factual thing that I was curious about, where opinion has no place or purpose.
Sure, but increasing something like fucking coal power plants isn't some instantaneous event that could start and stop at any time, putting some ambiguity at the moment between "increased" and "increasing". If plants are or will be built, it's because it's planned for development. That '-ing' isn't just present tense, it's there for the continuous/progressive aspect of it.
If they produced 13% more energy from coal in 2025 than 2024, the latest point at which we have real numbers rather than projections, it's fair to say that production of energy from coal is increasing rather than decreasing.
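(Back-of-the-envelope: the EIA's 731 BkWh figure at +13% implies a 2024 baseline of roughly 731 / 1.13 ≈ 647 BkWh, so the 2025 increase is a measured number, not a projection.)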
Okay, but you're celebrating make-believe virtues. Iceland is also not destroying its tropical coral reefs. That sounds nice...but it has none. Nor any sort of tradition or incentive to try doing that.
The US coal thing is all about widespread memories (and myths) of sustained good economic times, in large areas of the country which now feel destitute. Millions of voters feeling that they have no future. If not that the elites want them to hurry up and die.
To paraphrase Munger - if you want different outcomes there, then you need to change the incentives.
I like the analogy with Schrödinger's cat. Like Schrödinger's cat, it is actually not a good thought experiment; both have been debunked. Schrödinger's cat applies quantum behavior (of a single interaction) to a macro system (with trillions of interactions), while the Turing test can be explained away with Searle's Chinese room thought experiment.
I would argue that Schrödinger's cat has done more damage to the general understanding of quantum physics than it has done good. By contrast, though, I don't think the same about the Turing test. I think it has resulted in a net positive for the theory of mind, as long as people take Searle's rebuttal into account. Without it (as is sadly common in popular philosophy), the Turing test is simply wrong and offers no good insight for either philosophy or science.
The Turing test and Searle's "rebuttal" are both pretty inconsequential. There's no real definition of "thinking", therefore neither proves, disproves, or says much.
Turing's imitation game is about making it difficult for a human to tell whether they are communicating with a computer or not. If a computer can trick the human, then... what? The computer is "thinking" ?
I think most people would say that's an insufficient act to prove thinking. Even though no one has a rigorous definition of thinking either.
All this stuff goes around in circles and like most philosophy makes little progress.
Searle's rebuttal is actually excellent philosophy. But otherwise I agree. Searle (I just learned he passed away last year) was a philosopher by trade, but Turing was a mathematician and Schrödinger was a theoretical physicist. So it is to be expected that a mathematician and a physicist might produce sub-par philosophy.
Turing's point in his 1950 paper was actually to provide a substitute for the question of whether machines can think. Whether a machine can win the imitation game is, he argued, a better question to ask than "can a machine think". Searle showed that this criterion was in fact not a good one. But by 1980 philosophy of mind had advanced significantly, partially thanks to Turing's contributions, particularly via cognitive science; in the 1980s we also had neuropsychology, which kind of revolutionized this subfield of philosophy.
I think philosophy is actually rather important when formulating questions like these, and even more so when evaluating the quality of the answers. That said, I am not the biggest fan of the state of mainstream philosophy in the 1940s. I have a beef with logical positivism, and I honestly believe that even Turing's mediocre philosophy was on a much better track than what the biggest thinkers of the time were doing with their operational definitions.
Even if a Chinese room isn't a real boy, if it can do basically all text tasks at a human level I'm going to say it's capable of thinking. The issue of "understanding" can be left for another day (not that I think the Chinese room is very convincing on that front either).
I see no reason to disqualify p-zombies from being AGI.
>Turing's imitation game is about making it difficult for a human to tell whether they are communicating with a computer or not. If a computer can trick the human, then... what? The computer is "thinking" ?
If you read his paper, Turing was trying to make a specific point. The Turing test itself is just one example of how that broader point might manifest.
If a thinking machine cannot be distinguished from a thinking human, then it is thinking. That was his idea. In broader terms, any material distinction should be testable. If it is not, then it does not exist. What do you call "fake gold" that looks, smells, and reacts as "real gold" in every testable way? That's right: real gold. And if you claimed otherwise, you would just look like a madman. But swap gold for thinking, intelligence, etc., and it seems a lot of madmen start to appear.
You don't need to 'prove' anything, and it's not important or relevant that anyone try to do so. You can't prove to me that you think, so why on earth should the machine do so? And why would you think it matters? Does the fact that you can't prove to me that you think change the fact that it would be wise to model you as someone who does?
What do you mean by Schrodinger's cat experiment being "debunked"? The only way I can think to debunk it is to say there are ways to determine if the cat is alive such as heartbeat or temperature, which are impossible to isolate at a quantum level. I don't think anyone claimed the animal was in a superposition.
First off, the Turing test has a rigorous definition. Secondly, it has been debunked for almost half a century at this point by Searle's Chinese room thought experiment. Thirdly, intelligence itself is a scientifically fraught term with ever-changing meaning, as we discover more and more "intelligent" behavior in nature (in animals, plants, and more). And to make matters worse, "general intelligence" is even more fraught, as the term was used almost exclusively in racist pseudo-science, as a way to operationally define a metric that would prove white supremacy.
Artificial General Intelligence will exist when the grifters who profit from it claim it exists. The meaning of the term will shift to benefit certain entrepreneurs. It will never actually be a useful term in science or philosophy.
>Secondly, it has been debunked for almost half a century at this point by Searle’s Chinese room thought experiment.
Searle's thought experiment is stupid and debunked nothing. What neuron, cell, or atom of your brain understands English? That's right: you can't answer that any more than you can answer the subject of Searle's proposition, ergo the brain is a Chinese room. If you conclude that you understand English, then the Chinese room understands Chinese.
> Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.
> The man would now be the entire system, yet he still would not understand Chinese.
Really, the only issue here is Searle's inability to grasp that the process is what does the understanding, not the person (or machine, or neurons) that performs it.
This is an extraordinary claim, and it requires extraordinary evidence. There is nothing in Altman's behavior, current or past, to suggest this was anything other than a money-making grift. The easiest explanation for his betrayal is that he was simply lying.
Your case would have been better if you had used Mad Max: Fury Road, or even Titanic, as an example, rather than a mediocre TV show nobody remembers. Ugly Betty used green screens to make production cheaper; that did not improve the show (although it may have improved the profit margins). Mad Max: Fury Road, on the other hand, used CGI to significantly improve the visual experience. The added CGI probably increased the cost of the production, and it is consequently one of the greatest, most awesome movies ever made.
Actually, if you look at the scene from Grey's Anatomy [0:54], you can see where CGI is used to improve the scene (rather than cut costs), and you get this amazing shot of the Washington State Ferry crash.
I think you can see the parallels here. When people say they hate AI, they are generally referring to the sloppy stuff it generates. It has enabled a proliferation of cheap slop. And with few exceptions it seems like generating cheap slop is all it does (those exceptions being specialized tools, e.g. in image-processing software).
Award-winning shows and movies do not exclude forgettable cash grabs.
However, my counterexamples included Grey's Anatomy, Mad Max, and Titanic. None of these is considered high literature exactly (and all of them are award-winning as well).
I'm not sure your logic is sound. It sounds like you are insisting on some nuance which simply isn't there. LLMs generate unmaintainable slop that is extremely difficult to reason about, uses the wrong abstractions, violates DRY, violates cohesion, etc.
The industry has known how to reuse code for two decades now (npm was released 16 years ago; pip 18 years ago). Using LLMs for code reuse is a step in the wrong direction, at least if you care about maintaining your code.
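To make the contrast concrete, here is a minimal sketch (days_between is a hypothetical helper I made up; python-dateutil is just one example of a pip-installable library, not something from this thread):

    # Reuse: one pip dependency, one audited, maintained implementation.
    from dateutil.parser import parse as parse_date

    def days_between(a: str, b: str) -> int:
        # Lean on the library's battle-tested date parsing.
        return abs((parse_date(b) - parse_date(a)).days)

    # The LLM-style "reuse" alternative tends to inline a fresh, slightly
    # different date parser at every call site: each copy is a new surface
    # for bugs, and none of them gets patched when the library would have.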
Oh sure, the quality is extremely unreliable, and I am not a fan of its style of coding either. It requires quite a bit of hand-holding and sometimes it truly enrages me. I am just saying that LLM technology opens up another, broader dimension of code reuse. There is still a ways to go, not in the foundation models (those have plateaued) but in refining them for coding.