I'm not a principal, but I would wonder: if AI increases every "coder's" productivity, say, 5x, doesn't that replace some teams with one person, meaning less "alignment" is necessary? Whole org layers may disappear. Soft skills become less relevant when there are fewer people to interface with.
Even regarding "chase something complex and difficult", there are currently only so many needs of that kind, so I think any given person is justified in fearing they won't be picked. It may be several years between AI eating all the CRUD work, from principal on down, and the point where it expands the next generation of complex work in robotics or whatever.
Also, to speak on something I'm even less qualified for: the economy feels weak, so I don't have much hope that either businesses or entrepreneurs will say, "Let's just start new lines of business now that one person can do what used to take a whole team." Businesses are going to pocket the safe extra profits, and too many entrepreneurs won't find a foothold regardless of how fast they can code.
I used “decorative emojis” (some colored circles) to essentially color-code the labels in an app I don’t control that only lets me provide text labels. It’s a little subjective, but I do believe it enhanced communication from the data to the user in that case.
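A minimal sketch of that idea (the status names and emoji choices here are made up for illustration): prefix each text-only label with a colored-circle emoji so that category is visible at a glance, even when the host app offers no styling hooks.

```python
# Hypothetical mapping from a data category to a colored-circle emoji.
STATUS_EMOJI = {"error": "🔴", "warning": "🟡", "ok": "🟢"}

def label(status: str, text: str) -> str:
    """Prepend a color-coding emoji to a plain-text label.

    Falls back to a neutral circle for unknown categories.
    """
    return f"{STATUS_EMOJI.get(status, '⚪')} {text}"

print(label("error", "Build failed"))    # 🔴 Build failed
print(label("ok", "All tests passing"))  # 🟢 All tests passing
```

The emoji carries the "color" through any channel that accepts Unicode text, which is the whole trick when you can't control the rendering.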
That’s pretty funny compared with rhetoric like “AI doesn’t get tired like humans.” No, it doesn’t, but it roleplays as if it does. I guess there are too many references to human concerns like fatigue and saving effort in the training data.
This is what happens when a bunch of billionaires convince people autocomplete is AI.
Don't get me wrong, it's very good autocomplete, and if you run it in a loop with good tooling around it, you can get interesting, even useful results. But by its nature it is still autocomplete: it always just predicts text. Specifically, text that is usually about humans and/or by humans.
You are not wrong, but after having started working with LLMs, I have the feeling that many humans are simply autocomplete engines too. So LLMs might actually be close to AGI, if you define "general" as "more than 50% of the population".
Humans are absolutely autocomplete engines, and regularly produce incorrect statements and actions with full confidence that they are precisely correct.
Just think about how many thousands of times you've heard "good morning" after noon both with and without the subsequent "or I guess I should say good afternoon" auto-correct.
Well, the essence of software engineering is taking these complex real-world tasks and breaking them down into simpler parts until they can be done by (conceptually) simple digital circuits.
So it's not surprising that eventually autocomplete can reach up from those circuits and take on some tasks that have already been made simple enough.
I think what's so interesting is how uneven that reach is. On some tasks it is better than at least 90% of devs, maybe even superhuman (by which, in this case, I mean better than any single human; I've never seen an LLM do something a small team couldn't do better given a reasonable amount of time). In other cases, actual old-school autocomplete might do a better job: the extra capabilities added up to negative value, and their presence was a distraction.
Sometimes there is an obvious reason why (solving a problem with lots of example solutions online vs. working with poorly documented proprietary technologies), but other times there isn't. They have certainly raised the floor somewhat, but the peaks and valleys remain enormous, which is interesting.
To me that implies there is both lots of untapped potential and a set of challenges the LLM developers have not even begun to face.
Yep. The veil of coherence extends convincingly far by means of absurd statistical power, but the artifacts of next-token prediction become far more obvious when you're running models that can work on commodity hardware.
> The creator is also very selective about the type of politics he supports.
Why would someone express political messages without being selective? It's understandable not to want overt politics in your software, but this line is odd.
> don't look into their history where they actually did work for the actual real Nazis
If that kind of argument is on the table, also don't look into Elon's Nazi-sympathizing grandfather, who moved so he could rule over Blacks, nor his father's illegal mining under apartheid that funded the Musk family.
And his mother isn't any better when it comes to racism herself [1], and her father (=Elon's maternal grandfather) Joshua N. Haldeman was not just an outspoken Apartheid supporter, but a conspiracy peddler and White Nationalist [2][3].
Musk's entire family is rotten to the core, if you ask me; it's a surprise he could put up enough of an act to credibly convince liberals for well over a decade that "he's a good one".
It's actually been a topic here on HN before, but found very, very little resonance [4].
> SQLite solves this issue by allowing you to write with page level granularity rather than being forced to dump the whole file for a single tiny change!
Smaller ideas that would address this: add support for non-chained (non-CBC) encryption modes, and tweak or disable the compression so that small changes require less rewriting.
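A toy sketch of why a non-chained, per-page scheme enables page-granular rewrites. This is not real cryptography (a production design would use something like AES in CTR or XTS mode with a proper per-page nonce); the point is only that when each page's keystream is derived independently from the page number, rewriting one page changes only that page's ciphertext, instead of cascading through everything after it as a chained mode would.

```python
import hashlib

PAGE = 4096

def keystream(key: bytes, page_no: int, length: int) -> bytes:
    # Toy keystream: hash of (key, page number, counter). Each page's
    # stream is independent of every other page's contents.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + page_no.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt_page(key: bytes, page_no: int, data: bytes) -> bytes:
    # XOR with the per-page keystream; applying it twice decrypts.
    ks = keystream(key, page_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"example-key"  # illustrative only
pages = [bytes([i]) * PAGE for i in range(4)]
ct_before = [encrypt_page(key, i, p) for i, p in enumerate(pages)]

# Rewrite only page 2; only that page's ciphertext differs.
pages[2] = bytes([0xFF]) * PAGE
ct_after = [encrypt_page(key, i, p) for i, p in enumerate(pages)]
changed = [i for i in range(4) if ct_before[i] != ct_after[i]]
print(changed)  # [2]
```

With a chained mode like CBC over the whole file, a one-byte change near the start would force re-encrypting (and re-uploading) everything downstream, which is exactly the cost the page-level approach avoids.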