I think this is where LLMs shine. I experience the same difficulty with a lot of command-line tools, e.g., find is a mystery to me after all these years. Whatever the syntax is, it just doesn't stick in my memory. Recently I've just been telling the model what search I want, and it gives me the command.
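For what it's worth, here's the kind of thing I mean. These are illustrative invocations (the paths and patterns are made up), but they show why the syntax is hard to memorize, e.g., the `-prune -o ... -print` dance for excluding a directory:

```shell
# Set up a scratch directory with a couple of files to search (illustrative only)
mkdir -p /tmp/find-demo/src
printf 'hello\n' > /tmp/find-demo/src/app.py
printf 'notes\n' > /tmp/find-demo/notes.txt

# Find all regular files matching a name pattern
find /tmp/find-demo -type f -name '*.py'

# Find files modified within the last day, skipping the src directory entirely:
# -prune stops descent into matching dirs; -print is needed on the other branch,
# otherwise the pruned directory itself would be printed too
find /tmp/find-demo -path '*/src' -prune -o -type f -mtime -1 -print

# Clean up
rm -rf /tmp/find-demo
```

Nothing here is exotic, but none of it is guessable either, which is exactly the gap the model fills.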
Right. Basically using simple waveforms, either via samples or an onboard chip like the ZX Spectrum's. For tracker modules with more "normal" samples, we simply referred to them as modules, or mods for short.
IMHO, a “chiptune” is music for an FM synthesis chip, like on the NES, the SID chip in the Commodore 64, or the AdLib sound card for PC. A “mod” or “tracker music” is music made for a range of platforms in a rather narrow time-band that could play digital samples, but could not reasonably store entire songs recorded digitally, like the Amiga, Atari ST, or early PCs like 386s and 486s.
Neither the NES nor the SID employs FM synthesis. I'm not even sure what the collective noun is for these. Wikipedia tells me it's PSG (programmable sound generator).
The same behavior could be (and was) teased out of a MOD player if you chose samples with only a handful of sample points, like 12. You could also draw a sawtooth in a paint program and use that as a sample. These are down-to-earth, honest, true-Scotsman chiptunes.
If we get to the point where a local model can reliably do the coding for a good majority of cases, then the economic landscape changes significantly. And we are not that far from having big open-weight models that can do that, which is a first step.
Larger, yes, absolutely. Better? Right now it seems that bigger is better, but if we are thinking about the long-term future, it's not obvious that there isn't a point of diminishing returns with regard to size. I can also imagine a breakthrough where models become much smaller, with capabilities matching or exceeding today's very large ones.
You are always going to get the same scaling laws in model size regardless of what else you do, so the same degree of improvement seen now relative to the smaller models will be achievable in the future. Yes, small models may be on par with previous generation large models, but the same is true for processors and you don't see supercomputers going away. It's the same principle.
Just a heads up that I found NVIDIA Parakeet to be way better than Whisper: faster, uses less compute, the output is better, and there are more options for the output. I am using parakeet-mlx from the command line. Check it out!
I've been trying both Whisper v3 large and Parakeet in MacWhisper, and I inevitably go back to Whisper large. Which one is better depends on what you dictate, how you speak, and which languages you use.
My understanding of OP was not a claim that "vibe coding doesn't work", but that the way Anthropic does it doesn't work. He seems to be specifically criticizing the "hands off the actual code, human" approach and advocating for keeping the human in the loop.
Sticking with the computation analogy, it could be a long-term memory look up. If memories were passed down the generations, people could simply memorize actions of individuals deemed smarter. Over a large sample size, a heuristic would emerge. Kind of like knowing there is always a sunset following a sunrise without understanding the solar system.
It is a zero-sum game because you have a finite state budget for representing heuristics. Increasing the "smartness" (and therefore the state required) of one heuristic necessarily requires reducing the smartness of other heuristics. The state is always fully allocated; the best you can do is reallocate it.
This places an upper bound on the complexity of the patterns you can learn. At the limit you could spend 100% of your resources building a maximally accurate model of a single thing, but there are limits to the ROI. Pre-digested learning makes it more efficient to acquire heuristics, but it doesn't change the cost of representing them.
Some simple state machines are resistant to induction by design, e.g., encryption algorithms.
I think that's kind of how all the religions were started. Smart people being tired of reasoning with dumb ones and instead going with "do this, because that's the will of God".
Being taken advantage of is not only a function of intelligence. It's also a function of emotional health. Sure, if the person is incapable of understanding they are being taken advantage of, they will be. But one can be perfectly capable of understanding that, see it happen in real time, and let it happen anyway. That has been the case with me for a long time. I could see, but I could not stop it, because I have been emotionally conditioned to allow it. Took a long time to fix.
There is also a risk of confusing a smart person with a person who speaks well. We have a built-in heuristic that language signals intelligence. To a large extent it does, of course, but it can be deceptive. I've grown very wary of well-spoken people who seem to want me to think they are also very smart.
Lastly, higher intelligence does not mean the person is a better human being. I find that there is an obsession with intelligence in the West. "Stupid" people can be really lovely and better companions than smart ones. There is something to be said about kindness and honesty.
> I've noticed the smarter a person is, the fewer qualms they have about sharing exactly what they're aiming to do.
I used to be like that: openly speaking about what I aim to do and how. I ended up moderating that quality a fair bit after noticing some people began copying my ideas or outright stealing them. I was too slow to execute.
The nice thing about observing whether someone is accomplishing what they set out to accomplish is that it doesn't matter how well-spoken they seemed.
I've found that especially smart people have preternatural bullshit detectors, even when they lack "emotional health" or the ability to socialize well with others.
Smart people can be lovely, stupid people can be lovely, golden retrievers can be lovely... but that's tangential.
> I've found that especially smart people have preternatural bullshit detectors
I really disagree with that. So many smart people fell for obvious bullshit because it appealed to their intellectualism. Look at all the communist sympathizers in the West: morons, but also intelligent people most of the time. They believed stories spread by Soviet propaganda because they wanted to believe them.
> The nice thing about observing whether someone is accomplishing what they set out to accomplish is it doesn't matter how well spoken they seemed.
It's funny that you say that: there's another poster in this thread who claims that looking at the output is the stupid people's way of evaluating intelligence. Seems like we really have no idea how to tell (except for an actual IQ test).
> Smart people can be lovely, stupid people can be lovely, golden retrievers can be lovely... but that's tangential.
Yep, I was just making the point that intelligence might be overrated as a trait.