This. All LLM code I've seen so far had so much abstraction that it's hard to maintain.
It's testable, for sure, but the cost of that complexity is so high.
Something else not addressed in the article is working within an enterprise environment, where new technologies are adopted at a much slower pace than at startups. LLMs come with strange and complicated patterns to solve these problems, which is understandable, as I would imagine all the training and tuning followed structured frameworks.
Glad I'm not the only one who experienced this. I have a paid Antigravity subscription, and most of the time I use Claude models due to the exact issues you've pointed out.
The article itself feels AI-generated. I would expect an article about productivity and economy to include charts, links, and citations, but this one didn't.
However, I did a search, and the author is indeed the director of the Stanford University Digital Economy Lab, and the article itself shows up on FT when I googled the title.
I suppose PressReader is not showing the full details?
I've never had a potential employer reference a single thing on my GitHub, and I've been a user since 2007. Usually I had to point out, when trying to get a job using e.g. Rails, that I had contributed significant code they were running in production.
This could be a good new channel for advertisers. I didn't see any comment about this perspective.
Anecdotally, the quality of traffic from ChatGPT to one of my websites is much better than Google traffic, in terms of bounce rate and time on site.
If they managed to show ads in a carousel (like the video), it might get a better conversion rate than invasive Google ads (which cover the organic results).
Though if OpenAI managed to embed the ads within the experience, that might work even better (conversion-based pricing). An example would be showing a shopping list from a grocery store (in line with the recipe or the question), adding items to the basket from ChatGPT, and paying.
In theory, they could even add a new GPTPay to simplify the journey.
"Lower cost to reach customers = lower product and service prices"
This is economically illiterate.
Advertising is not a discount mechanism. It is a tax on the consumer. When I buy a product heavily marketed on Instagram or Google, I'm paying for the product, plus the auction bid price required to acquire me, plus the margin of the ad-tech middlemen (which are trillion-dollar companies).
You are conflating "information distribution" with "persuasive surveillance." In a world without behavioral advertising, businesses compete on quality and reputation, not on who can exploit the most psychological vulnerabilities to manufacture demand.
As for innovation: The current ad ecosystem has killed organic discovery. You can't build a "micro-business" based on merit anymore.
The winner SHOULD be the engineer who solved a hard problem efficiently. But instead the winner is the dropshipper who cracked the arbitrage spread between a cheap, garbage product and a highly manipulative Facebook ad campaign.
The article is suggesting that there should be a way for the LLM to update its weights on the fly as it encounters new information, which would eliminate the need for manual fine-tuning.