Short-term, yes; long-term it's often the other way around. In many cases, abandoning an open standard for a closed, centralised solution is surrendering to future enshittification for short-lived instant gratification.
Do they get more out of it than it costs, or are they still in the "people are just giving us money in the hopes that one day it turns a profit even though we're not charging nearly enough to make a profit" phase?
You're describing the AI companies and their business model.
I'm responding to that cost being a problem in the context of "what prevents 100 billion ChatGPTs from using any protocol?" - what I have in mind being scammers, political manipulators, spammers, and the like using ChatGPT/LLMs to exploit various protocols for profit (with the 100 billion figure being a figure of speech for "very many").
I can assure you this has been discussed and is well known; it's why you still find a headset port on devices handed out to government officials, though most of them ignore the advice not to use Bluetooth.
And people kept downvoting me when I said it has always been about advertising and marketing. It's optimal personalized mattress sales all the way down.
Hopefully the tech bro CEOs will get rid of all the human help on their islands, replacing them with their AI-powered cloud-connected humanoid robots, and then the inevitable happens. They won't learn anything, but it will make for a fitting end to the dumbest fucking movie script we're living through.
What are the AI-never companies doing? That might be a useful comparison. Is the AI work actually improving the bottom line, or is it being used to assuage noisy shareholders who think AI is a hack for infinite profit?