In the early days of the web, there wasn't much we could do with it other than make silly pages with blinking text or "under construction" animated GIFs. You need to give a new technology some time before judging it.
We don't remember the same internet. For the first time in our lives we could communicate by email with people from all over the world. Anyone could have a page to show what they were doing, with pictures and text. We had access to photos and videos of art, museums, cities, and lifestyles that we could not get anywhere else. And as a non-native English speaker, I got access to millions of lines of written text and hours of audio that actually improved my English.
It was a whole new world that may have changed my life forever. ChatGPT is a shitty Google replacement in comparison, and it's a bad alternative because it's censored by its built-in instructions.
In the early web, there were already forums. There were chats. There were news websites. There were online stores. There were company websites with useful information. Many of these were there pretty much from the beginning. In the 90s, no one questioned the utility of the internet. Some people were just too lazy to learn how to use a computer, or couldn't afford one.
LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far have been very much iterative ones: more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same: a very computationally expensive, very large neural network that predicts the next token of a text given a sequence of tokens. How much more time do we have to give this technology before we can judge it?
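To make the "predicts the next token" claim concrete, here is a minimal sketch of that generation loop, using a toy bigram frequency table in place of the neural network. This is my simplification for illustration, not how production LLMs are implemented; the point is only that the loop itself is "look at the tokens so far, pick a likely next token, append, repeat":

```python
# Toy next-token prediction. A real LLM replaces the bigram lookup
# with a large neural network, but the generation loop is the same shape.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# "Training": count which token follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=5):
    tokens = [start]
    for _ in range(n):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break  # no known continuation, stop early
        # Greedy decoding: always take the most frequent continuation.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))
```

Everything that has changed since 2021 (longer contexts, thinking tokens, multimodality) lives inside the predictor; the outer loop is unchanged.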
I have a good enough understanding of what it is capable of, and I remain unimpressed.
See, AI systems, all of them, not just LLMs, are fundamentally bound by their training dataset. That's fine for data classification tasks, and AI does excel at that, I'm not denying it. But creative work like writing software or articles is unique. Don't know about you, but most of the things I do are something no one has ever done before, so they by definition could not have been included in the training dataset, and no AI could possibly assist me with any of this. If you do something that has been done so many times that even AI knows how to do it, what's even the point of your work?
Of course, but does it mean that my argument is flawed? You're just shifting the discourse, without disproving anything. Do you claim that the web was useful for everyone on day one, or as useful as it is today for everyone?
I could just do the same as GP, and dismiss MUDs and BBSes as poor proxies for social interactions that are far more elaborate and vibrant in person.
As I pointed out in a different comment, the Internet at least was (and is) a promise of many wondrous things: video call your loved ones, talk on message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though for the last 15 years it has been cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
But LLMs are from the get-go a bad idea, a bullshit generating machine.
That might be a lack of understanding on my part. I had the impression from your comment that you were implying that there was (and is) hope in internet development (i.e. many people hold a positive opinion about it), but that there cannot be any hope in LLMs (i.e. nobody can build a positive opinion about them, because presumably some hard fact prevents it).
As for what I said, I was just mimicking the comment of GP, which I'll quote here:
> The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.
I'm not even heavily invested in AI, just a casual user, and it has drastically cut the amount of bullshit that I have to deal with in the modern computing landscape.
Search, summarization, automation. All of these drastically improved with the best interface of them all: natural text.
Not OP, but how much of the modern computing landscape bullshit that it cut was introduced in the last 5-10 years?
I think if one were to plot the progress of technology on a graph, the trend line would look pretty linear, except for a massive dip around 2014-2022.
Google searches got better and better until they suddenly started getting worse and worse. Websites started getting better and better until they suddenly got worse. Same goes for content, connection, services, developer experience, prices, etc.
I struggle to see LLMs as a major revolution, or any sort of step-function change, but I can very easily see them as a (temporary) (partial) reset to the trend line.