measurablefunc's comments | Hacker News

Mining rigs have a finite lifespan & the places that make them in large enough quantities will stop making new ones if a more profitable product line, e.g. AI accelerators, becomes available. I'm sure making mining rigs will remain profitable for a while longer, but the memory shortages are making it obvious that most production capacity is now going towards AI data centers. If that trend continues then hashing capacity will keep diminishing b/c electricity & hardware replenishment costs will outpace mining rewards.

Bitcoin was always a dead end. It might survive for a while longer but its demise is inevitable.


Because they encode statistical properties of the training corpus. You might not know why they work but plenty of people know why they work & understand the mechanics of approximating probability distributions w/ parametrized functions to sell it as a panacea for stupidity & the path to an automated & luxurious communist utopia.
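
To make the "parametrized functions" bit concrete, here is a minimal sketch (my own toy illustration in plain numpy, not any particular model's code): fit a categorical distribution to samples by gradient descent on cross-entropy, which is the same objective next-token training optimizes at a vastly larger scale.

    import numpy as np

    rng = np.random.default_rng(0)
    true_p = np.array([0.7, 0.2, 0.1])           # unknown "corpus" distribution
    data = rng.choice(3, size=10_000, p=true_p)  # observed tokens
    q = np.bincount(data, minlength=3) / data.size

    logits = np.zeros(3)                         # the model's parameters
    for _ in range(500):
        p = np.exp(logits) / np.exp(logits).sum()  # softmax
        logits -= 0.5 * (p - q)                    # gradient of cross-entropy is p - q

    print(np.round(np.exp(logits) / np.exp(logits).sum(), 3))  # ~ [0.7, 0.2, 0.1]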

No this is false. No one understands. Using big words doesn’t change the fact that you cannot explain for any given input output pair how the LLM arrived at the answer.

Every single academic expert who knows what they are talking about can confirm that we do not understand LLMs. We understand atoms and we know the human brain is made 100 percent out of atoms. We may know how atoms interact and bond and how a neuron works, but none of this allows us to understand the brain. In the same way we do not understand LLMs.

Characterizing ML as some statistical approximation or best fit curve is just using an analogy to cover up something we don’t understand. Heck the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.


I'm not pretending. Unlike you I do not have any issues making sense of function approximation w/ gradient descent. I learned this stuff when I was an undergrad so I understand exactly what's going on. You might be confused but that's a personal problem you should work to rectify by learning the basics.

omfg the hard part of ML is proving back-propagation from first principles, and that's not even that hard. Basic calculus and an application of the chain rule, that's it. Anyone can understand ML; not everyone can understand something like quantum physics.
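
To be concrete, here's a rough sketch (a toy numpy example of my own, not anyone's library code) of what that chain-rule application looks like for a one-hidden-layer network, checked against a numerical derivative:

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))    # toy data
    W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 1))  # weights

    h = np.tanh(x @ W1)                      # forward pass
    loss = ((h @ W2 - y) ** 2).mean()

    d_out = 2 * (h @ W2 - y) / y.size        # dL/d(prediction)
    dW2 = h.T @ d_out                        # chain rule through the output layer
    dh = d_out @ W2.T                        # ...back into the hidden activations
    dW1 = x.T @ (dh * (1 - h ** 2))          # ...through tanh into the first layer

    W1[0, 0] += 1e-6                         # numerical sanity check on one entry
    loss2 = ((np.tanh(x @ W1) @ W2 - y) ** 2).mean()
    print(dW1[0, 0], (loss2 - loss) / 1e-6)  # the two numbers agree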

Anyone can understand the "learning algorithm", but the sheer complexity of its output is way too high: we cannot at all characterize how an LLM arrived at its answer to even the most basic query.

This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.

Understand?

There's no confusion. Just people who don't know what they are talking about (you).


I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem so I recommend you stop projecting your confusion onto strangers in online forums.

The only thing that needs to be fixed here is your ignorance. Why so hostile? I'm helping you. You don't know what you're talking about and I have rectified that problem by passing the relevant information to you so next time you won't say things like that. You should thank me.

I didn't ask for your help so it's probably better for everyone if you spend your time & efforts elsewhere. Good luck.

Well don't ask me to help you then. I read your profile and it has this snippet in there:

"Address the substance of my arguments or just save yourself the keystrokes."

The substance of your argument was complete ignorance about the topic, so I addressed it as you requested.

Please remove that sentence from your profile if that is not what you want. Thank you.


I don't see how you interpreted it that way so I recommend you make fewer assumptions about online content instead of asserting your interpretation as the one & only truth. It's generally better to assume as little as possible & ask for clarifications when uncertain.

There is no interpretation of that other than what I said. If you disagree, then that's a misinterpretation of the English language.

I am addressing the substance of your argument, and that substance is a lack of knowledge. There is no other angle from which to interpret it.


As I said previously, I don't think this is a productive use of time or effort for anyone involved so I'm dropping out of this thread.


We know exactly what is going on inside the box. The problem isn't knowing what is going on inside the box; the problem is that it's all binary arithmetic, & no human being evolved to make sense of binary arithmetic, so it seems like magic to you when in reality it's nothing more than a circuit w/ billions of logic gates.

We do not know or understand even a tiny fraction of the algorithms and processes a Large Language Model employs to answer any given question. We simply don't. Ironically, only the people who understand things the least think we do.

Your comment about 'binary arithmetic' and 'billions of logic gates' is just nonsense.



"Look man all reality is just uncountable numbers of subparticles phasing in and out of existence, what's not to understand?"

Your response is a common enough fallacy to have a name: straw man.

I think the fallacy at hand is more along the lines of "no true scotsman".

You can define understanding to require such detail that nobody can claim it; you can define understanding to be so trivial that everyone can claim it.

"Why does the sun rise?" Is it enough to understand that the Earth revolves around the sun, or do you need to understand quantum gravity?


Good point. OP was saying "no one knows" when in fact plenty of people do know, but people also often conflate knowing & understanding w/o realizing that's what they're doing. People who have studied programming, electrical engineering, ultraviolet lithography, quantum mechanics, & so on know what is going on inside the computer. That's different from saying they understand billions of transistors, b/c no one really understands billions of transistors, even though a single transistor is understood well enough to be manufactured in large enough quantities that almost anyone who wants one can have the equivalent of a supercomputer in their pocket for less than $1k: https://www.youtube.com/watch?v=MiUHjLxm3V0.

Somewhere along the way from one transistor to a few billion, human understanding stops, but we still know how it was all assembled to perform Boolean arithmetic operations.


Honestly, you are just confused.

With LLMs, the "knowing" you're describing is trivial and doesn't really constitute knowing at all. It's just the physics of the substrate. When people say LLMs are a black box, they aren't talking about the hardware or the fact that it's "math all the way down." They are talking about interpretability.

If I hand you a 175-billion parameter tensor, your 'knowledge' of logic gates doesn't help you explain why a specific circuit within that model represents "the concept of justice" or how it decided to pivot a sentence in a specific direction.

On the other hand, the very professions you cited rely on interpretability. A civil engineer doesn't look at a bridge, dismiss it as "a collection of atoms", and go no further. They can point to a specific truss, explain exactly how it manages tension and compression, and tell you why it could collapse under certain conditions. A software engineer can step through a debugger and tell you why a specific if statement triggered.

We don't even have that much for LLMs, so why would you say we have an idea of what's going on?


It sounds like you're looking for something more than the simple reality that the math is what's going on. It's a complex system that can't simply be debugged through[1], but that doesn't mean it isn't "understood".

This reminds me of Searle's insipid Chinese Room; the rebuttal (which he never had an answer for) is that "the room understands Chinese". It's just not satisfying to someone steeped in cultural traditions that see people as "souls". But the room understands Chinese; the LLM understands language. It is what it is.

[1] Since it's deterministic, it certainly can be debugged through, but you probably don't have the patience to step through trillions of operations. That's not the technology's fault.


>It sounds like you're looking for something more than the simple reality that the math is what's going on.

Train a tiny transformer on addition pairs (e.g. '38393 + 79628 = 118021') and it will learn an algorithm for addition to minimize next-token error. This is not immediately obvious. You won't be able to just look at the matrix multiplications and see what addition implementation it subscribes to, but we know this from tedious interpretability research on the features of the model. See, this addition transformer is an example of a model we do understand.
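
For anyone curious, the setup is simple to reproduce. A rough sketch of the data side (my own toy version, not any specific paper's code; the model itself is just a standard decoder-only transformer trained on next-token prediction):

    import random

    VOCAB = list("0123456789+= ")
    stoi = {ch: i for i, ch in enumerate(VOCAB)}

    def make_example(max_digits=5):
        a = random.randint(0, 10 ** max_digits - 1)
        b = random.randint(0, 10 ** max_digits - 1)
        text = f"{a} + {b} = {a + b}"
        return [stoi[ch] for ch in text]   # token ids fed to the transformer

    print(make_example())  # the encoded form of something like '38393 + 79628 = 118021'

The only way for the model to keep driving next-token loss down on held-out pairs is to implement some form of addition internally, which is what the interpretability work then picks apart.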

So those inscrutable matrix multiplications do have underlying meaning, and multiple interpretability papers have shown as much, even if we don't understand it 99% of the time.

I'm very fine with simply saying 'LLMs understand Language' and calling it a day. I don't care for Searle's Chinese Room either. What I'm not going to tell you is that we understand how LLMs understand language.


No one relies on "interpretability" in quantum mechanics. It is famously uninterpretable. In any case, I don't think any further engagement is going to be productive for anyone here so I'm dropping out of this thread. Good luck.

Quantum mechanics has competing interpretations (Copenhagen, Many-Worlds, etc.) about what the math means philosophically, but we still have precise mathematical models that let us predict outcomes and engineer devices.

Again, we lack even this much with LLMs, so why say we know how they work?


Unless I'm missing what you mean by a mile, this isn't true at all. We have infinitely precise models for the outcomes of LLMs because they're digital. We are also able to engineer them pretty effectively.

The ML research world (so this isn't simply a matter of being ignorant/uninformed) was surprised by the performance of GPT-2 and utterly shocked by GPT-3. Why? Isn't that strange? Did the transformer architecture fundamentally change between these releases? No, it did not.

So why? Because even in 2026, never mind 2018 and 2019, the only way to really know how a neural network will perform when trained with x data at y scale is to train it and see. No elaborate "laws", no neat equations. Modern artificial intelligence is an extremely empirical, trial-and-error field, with researchers often giving post-hoc rationalizations for architectural decisions. So no, we do not have any precise models that tell us how an LLM will respond to any query. If we did, we wouldn't need to spend months and millions of dollars training them.


We don't have a model for how an LLM that doesn't exist will respond to a specific query. That's different from lacking insight at all. For an LLM that exists, it's still hard to interpret, but it's very clear what is actually happening. That's better than what you often get with quantum physics, where there's a bunch of particles and you can't even get a good answer for the math.

And even for potential LLMs, there are some pretty good extrapolations for overall answer quality based on the amount of data and the amount of training.
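
For reference, the kind of extrapolation I mean is a parametric scaling law fit to training runs, roughly of the Chinchilla form L(N, D) = E + A/N^a + B/D^b. The constants below are the approximate published fit from Hoffmann et al. (2022) and should be treated as illustrative, not authoritative:

    # Chinchilla-style loss extrapolation; constants are approximate and illustrative.
    def predicted_loss(n_params: float, n_tokens: float) -> float:
        E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28
        return E + A / n_params ** a + B / n_tokens ** b

    # e.g. a 70B-parameter model trained on 1.4T tokens
    print(round(predicted_loss(70e9, 1.4e12), 3))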


What percentage of work would you say deals w/ actual problems these days?

What’s an example of work that does not deal with actual problems?

Online influencers, podcasters, advertisers, social media product managers, political lobbyists, cryptocurrency protocol programmers, digital/NFT artists, most of the media production industry, those people w/ leaf blowers moving dust around, political commentators (e.g. fox & friends), super PACs, most NGOs, "professional" sports, various 3 letter agencies & their associated online "influence" campaigns, think tanks about machine consciousness, autonomous weapon manufacturers, & so on. Just a few off the top of my head but anything to do w/ shuffling numbers in databases is in that category as well. I haven't read "Bullshit Jobs" yet but it's on the list & I'll get to it eventually so I'm sure I can come up w/ a few more after reading it.

In a post-industrial economy there are no more economic problems, only liabilities. Surplus is felt as threat, especially when it's surplus human labor.

In today's economy disease and prison camps are increasingly profitable.

How do you think the investor portfolios that hold stocks in deathcare and privatized prison labor camps can further accelerate their returns?


Google's Antigravity does this automatically by creating Task & Walkthrough artifacts.

The system incentivizes seeking power by consolidating financial wealth. It doesn't have to be that way & this will eventually become obvious to everyone.

> The PAC, dubbed Leading the Future, formed in August with a more than $100 million commitment to support policymakers with a light-touch — or a no-touch — approach to AI regulation. And that means going after policymakers who want to regulate AI. The super PAC has backing from a number of other prominent leaders in tech, including Palantir co-founder and 8VC managing partner Joe Lonsdale as well as AI search engine Perplexity.

I have a subscription for Gemini ($10/month), but it also gives me access to their Antigravity service, which is useful for keeping track of "agentic" coding tools & dispelling the constant marketing hype.

Why? What is compelling about it?

The tool that is supposed to make programmers obsolete is causing an ongoing outage. Reality these days is much more ironic than I am capable of imagining.
