
Don't bet too much, because I'm still undecided on LLMs.

Interestingly, I have no memory of ever thinking about LLMs as I wrote this. Part of it is that I slaved over this talk a lot more than my usual blog posts, for about six months after starting the initial draft in Dec 2022 (https://akkartik.name/post/roundup22). ChatGPT came out in Nov 2022, so I was following (and starting to get annoyed by) the AI conversations in parallel with working on this talk, but perhaps they felt like separate threads in my head and I hadn't yet noticed that they could impact one another.

These days I've noticed the connections, and I feel the pressure to try to rationalize one in terms of the other. But I still don't feel confident enough to do so. And my training has always emphasized living with ambiguity until one comes up with a satisfying resolution.

It took us 200 years from the discovery of telescopes[1] to attain some measure of closure on all the questions they raised. There's no reason to think the discovery of LLMs will take any less time to work through. They'll still be remembered and debated in a hundred years. Your hot takes or mine about LLMs will be long forgotten. In the meantime, it seems a good idea for me to focus on what I seem uniquely qualified to talk about.

[1] https://web.archive.org/web/20140310031503/http://tofspot.bl... is a fantastic resource.



> And my training has always emphasized living with ambiguity until one comes up with a satisfying resolution.

I wish more people had this attitude, particularly towards LLMs. We're at an interesting point in time where we don't know how LLMs will evolve. Maybe we've hit a plateau and LLMs won't get any better; even in that case, it will take decades for their effects to be felt across the entire industry, much less the world. Or maybe they will keep improving on their way to ASI.

My point, though, is that if you're worried about user control over their computing environment (which I 100% understand), then LLMs might be the best solution. I think they could be the only practical solution, as all others seem like pipe dreams.


It's not clear to me why you consider my approach a pipe dream. The only criticism I've heard is that people won't adopt it. Is that the only one?

One critical question for you: can someone rebuild a piece of software for themselves, from scratch, using LLMs, without radically changing how many neurons in their brain are devoted to programming?

If they are to have real control here, they'll need to understand a lot about programming, build up a skeptical mindset for reviewing code, etc. That sort of requirement feels like it'll also limit adoption.

If they do so without learning much about programming, then I'd argue they don't have much control. It's not them rebuilding the thing for themselves; it's the LLM rebuilding the thing for them. And that comes with the same sorts of principal-agent problems as depending on other sorts of AIs, like tech companies, to do so. They'll find themselves awash in eldritch bugs, etc.

So I think LLMs can't square this circle. If people don't want to devote a lifetime to programming, user control via LLMs feels like more of a pipe dream than my approach, because my approach depends crucially on personal relationships. There's no illusion that each person is getting personally customized software. Instead we're banding together in computational space as if we're travelling through medieval Asia -- in a caravan. Caravans had a natural limit on size, because larger caravans were easier for bandits to infiltrate. Managing those risks in a large caravan requires the geopolitical skills of a king, constantly seeing all the angles of who can screw you over at any point in time.



