I use Siri for cooking (on the iPhone it can still only handle one timer at a time), for adding to Groceries, turning some lights and fans on/off, and playing music.
I enabled 'Always Show Speech', and even when the transcribed text shows Siri understood what I said, it still often does the wrong thing anyway.
Well, that's about all Siri can do right now. If it could add something to my grocery list (in a 3rd-party app), tell me what tomorrow's gym schedule looks like (in another app), or turn the navigation voice on or off while driving, I would be using it a lot more!
Why do people have faith in Apple being competitive in the AI race? It’s like betting on Bing reaching the level of search results that Google provides today, 6 months from now.
Apple's silicon enables them to run incredibly strong models locally, fully offline. This is a huge win over competition that needs clients to connect to a network-connected GPU farm.
Siri wouldn't need to reach out to the web; its various functions could be abstracted away and driven by embedded models. They've already started positioning for this. The newest M4 iPad has a 10-core GPU and 16 GB of RAM that could be used as VRAM.
It could run Llama 3 8B Instruct with that and generate responses much faster than normal speaking cadence.
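Back-of-envelope numbers support this. A rough sketch of the memory math, under my own assumptions (4-bit quantized weights, ~1 GB allowance for KV cache and runtime overhead; these are illustrative figures, not benchmarks):

```python
# Rough memory math for running an 8B-parameter model on a 16 GB iPad.
# Assumption: 4-bit quantization -> 0.5 bytes per parameter.
params = 8e9                        # Llama 3 8B
weights_gb = params * 0.5 / 1e9     # ~4 GB of quantized weights
kv_cache_gb = 1.0                   # rough budget for context/KV cache
total_gb = weights_gb + kv_cache_gb
print(f"~{total_gb:.0f} GB needed of 16 GB")  # ~5 GB needed of 16 GB
```

That leaves ample headroom for the OS and other apps, which is why a quantized 8B model is a plausible fit for this class of hardware.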
Sure they're quite behind in the model space, but there are plenty of routes for them to catch up quickly in this regard. Licensing, open-source models, etc.
Google's hardware is just not as powerful. The Tensor G3 is far less capable than the A16 let alone the A17 Pro or M4 chip that powers the new iPad. Gemini Nano is a tiny model with limited applications. On the page you've linked, it seems like it's mainly used for text summarization, autocomplete, and various image features. It's not used for the Google assistant, likely due to the complexity of that task relative to the strength of the model.
Apple's hardware could run more powerful models that make the assistant a first-class citizen. And as another poster mentioned, Apple does not have the same incentive structure as Google (which always has to keep its ad revenue business in mind). Apple would happily offer an all-in-one, does-everything assistant to their users, if it were good enough to compel sales. Google likely would not, but would rather bake integrations into their other products in order to "augment them with AI" while preserving their revenue model.
> Apple's silicon enables them to run incredibly strong models locally, fully offline. This is a huge win over competition that needs clients to connect to a network-connected GPU farm.
I stand by my counterpoint: Google can’t run “Hey Google” on-device because it only controls a small % of the devices it’s available on. In the world of built-in voice assistants, Siri is behind, but only as behind as it was in 2021.
This thread is about the world of on-device models, not the world of built-in voice assistants. The competition that this thread is about comes from this sentence in the original comment: "This is a huge win over competition that needs clients to connect to a network-connected GPU farm."
AI is a lot more than LLMs; just because they don't have a public LLM doesn't mean they don't have a competent AI strategy. Most of their advancements are around image and video. A lot of companies that are "leaders" in AI right now have no path to profitability for their features, because those features aren't actually that useful.
Apple sells hardware at a large profit; they don't need to rely on ad revenue.
Google literally cannot release a voice-only assistant that solves all of your problems; it would destroy their ad revenue.
This is the same reason Microsoft never had a chance to win the online market in the early 2000s: they were reliant on large companies buying on-prem licenses for server and client machines. Websites that could be accessed from any device, running any OS, and served from Linux boxes were against everything MS stood for.
Eventually MS figured out they could make bank by hosting those Linux boxes, or by running the productivity infrastructure that all those business machines connect to (hosted Exchange, web versions of Office), and still make money on software.
But Google is too addicted to ad revenue to ever truly revolutionize the world of computing.
AI companies have struggled for a few years learning how to make a decent model. The phone and laptop guys are not going to just figure this out; they're not going to just walk in.
There's a link on "12 cents" that goes to https://www.vantage.sh/blog/gcp-google-gemini-vs-azure-opena...
Searching for "12" on that page turns up a price of $0.12/1k tokens for GPT-4, which (a) is the price OpenAI charges, not what OpenAI spends, and (b) is, obviously, off by a factor of 1,000.
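To spell out the factor-of-1,000 point, here is the arithmetic, assuming the article read the per-1,000-token rate as a per-token rate (the 750-token query length is my own illustrative number):

```python
# The published rate is per 1,000 tokens, not per token.
rate_per_1k_tokens = 0.12                  # USD per 1k tokens (list price)
rate_per_token = rate_per_1k_tokens / 1000 # = $0.00012 per token
print(rate_per_token)

# A ~750-token response would cost about 9 cents at the real rate,
# versus $90 if you misread the rate as $0.12 per single token.
print(round(750 * rate_per_token, 3))      # 0.09
print(750 * rate_per_1k_tokens)            # 90.0
```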
The NYT's tech coverage is truly, truly terrible.