No, handoff design is more difficult than picking and tuning ALU designs from the 70 years of CPU design history, then generating the control circuitry.
A lack of design-for-handoff OF ANY SORT is why Intel failed with 802.16. Qualcomm invented soft handoff, which made frequency reuse 3x more efficient and made CDMA possible. Most people don't realize that, due to shadow fading, you have to keep 3 or more MACIDs active on different base stations and use high-speed feedback control to hand off roughly every 1.6 seconds. I statistically analyzed drive-test logs for 1xEV-DO at Qualcomm to produce one of the world's first handoff Markov models, as part of OFDMA handoff design for Qualcomm.
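To give a flavor of what that looks like (a toy sketch, not the actual model - real drive-test logs also carry pilot strengths, active-set membership, timing, and so on), you can treat the serving sector as a Markov state and estimate transition probabilities straight from the logged sequence:

    from collections import defaultdict

    def handoff_transition_matrix(serving_log):
        # serving_log: serving-sector IDs sampled at a fixed interval
        # (hypothetical input format, purely for illustration)
        counts = defaultdict(lambda: defaultdict(int))
        for prev, curr in zip(serving_log, serving_log[1:]):
            counts[prev][curr] += 1
        return {src: {dst: n / sum(dests.values()) for dst, n in dests.items()}
                for src, dests in counts.items()}

    # Toy log; the diagonal dominance reflects dwell time between handoffs.
    log = ["A", "A", "A", "B", "B", "A", "C", "C", "C", "B"]
    print(handoff_transition_matrix(log))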
A lot of practical experimentation goes into cellular system design in the most challenging handoff locales on Earth - San Diego and Hong Kong.
It seems that there is a lot of focus on Qualcomm's strong-arm monopoly tactics. But that seems to me to work exactly as designed - patents are intended to literally grant a temporary monopoly to reward the holder for the expensive research work that otherwise wouldn't have happened. If Qualcomm won't license someone a patent for whatever reason, that is up to them. Clearly if customers are deciding to pay the price, it must be worth it. In fact, Apple itself did after realizing Intel can't deliver the 5G chipsets on time.
The article points out where QCOM actually violated a contract, though. It's related to FRAND patents [1] - they committed to licensing a certain set of patents this way so they can be included in a standard, something very valuable to Qualcomm, and then later reneged on that commitment. That I can totally side with. But all the other bellyaching about QCOM the big bad monopolist for other reasons is missing the point IMO.
I agree. If they alone in the world can make good modems, they can charge a lot for them. 3% seems OK for the value I get from the modem compared to the whole phone; if not, the market will price them out. The issue is not with customers but with other suppliers (or the lack thereof). The issue is patents, and enforcing FRAND policies. Again, if no other team in the world can deliver what they do, why should they charge less? If they violated agreements, that's not even a systemic issue with patents. Am I missing something?
CPU design and RF stuff are both very difficult; I wouldn't assume one way or the other. I wonder if knowledge of CPU design is more widely dispersed (e.g. you can hire ex-Intel, ex-AMD, ex-IBM people) than cellular radios where the only (working) ones seem to be made by Qualcomm.
One difference is there are a hell of a lot more engineers who know how to design and tape out digital designs than mixed-signal RF ones. Qualcomm and a few other companies employ most of them.
> cellular radios where the only (working) ones seem to be made by Qualcomm.
Seems strange, given that plenty of manufacturers are making the radios on the other side of the cellular connection. Can't, say, Ericsson take the radios in their cell towers or picocells, redo their tape-outs for phone scale, and end up with a workable mobile radio?
> Can't, say, Ericsson take the radios in their cell towers or picocells, redo their tape-outs for phone scale, and end up with a workable mobile radio?
Generally speaking, no.
Sometimes in the base stations you'll find FPGAs and such, which are super expensive.
The even bigger issue is that you're implementing the other side of the over-the-air protocol, and a whole bunch more stuff too (signaling in the carrier's network). Whereas the modem in a phone is designed to talk to just a single system.
Also, the protocols are designed to be asymmetric. The cell station is fixed, with relatively unconstrained size and power compared to the phone. It also has to talk to multiple phones simultaneously and manage their access to the network.
> Ericsson take the radios in their cell towers or picocells, redo their tape-outs for phone scale, and end up with a workable mobile radio?
Ericsson used to make cellular chips for mobile devices. They decided to leave that industry rather abruptly, leaving us high and dry on one product I worked on.
> Can't, say, Ericsson take the radios in their cell towers or picocells, redo their tape-outs for phone scale, and end up with a workable mobile radio?
Workable maybe, but not power efficient, which is crucial in mobile. That takes years to figure out, and it's the reason why ARM is so utterly dominant on mobile - there simply isn't any Intel x86 offering even close to ARM's power envelope, and even after years of ARM dominance Intel hasn't managed to produce anything meaningful.
> the reason why ARM is so utterly dominant on mobile
The economics. When you sell complex chips for $10, you have to sell a lot of them to recoup R&D costs. That's why they shut down the cheap Atoms and introduced the Core M series: technically the two are close in many respects, but Atoms sold for $10-20, Core M's for $200-300.
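Back-of-the-envelope, with made-up numbers just to show the shape of it:

    # Made-up numbers: units needed to recoup a fixed R&D spend
    # at different average selling prices and gross margins.
    def breakeven_units(rd_cost, asp, gross_margin):
        return rd_cost / (asp * gross_margin)

    rd = 1_000_000_000                # hypothetical $1B program cost
    for asp in (15, 250):             # Atom-class vs Core M-class pricing
        print(asp, round(breakeven_units(rd, asp, 0.5)))

At a hypothetical $1B program cost and 50% gross margin, that's on the order of 130M units at $15 versus 8M units at $250.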
Intel could probably do it if they had the luxury of redesigning their instruction set. They're stuck with supporting an overly CISC-based instruction set whose roots date back to the early 1970s. ARM didn't have that problem and designed a much more modern RISC-like instruction set which requires a lot less power.
This argument has held progressively less weight since 1995, when Intel released the Pentium Pro and set the precedent of decoding x86 CISC instructions into the micro-ops which are actually executed. ARM is a respectable architecture and Apple has shipped some very competitive chips, but it’s not like Intel’s engineers have been in a coma for forty years.
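To make "decoding into micro-ops" concrete (a toy sketch, nothing like the real decoder or its internal μop format): a memory-destination add gets cracked into simpler load/execute/store operations that the out-of-order core actually runs.

    # Toy sketch only - real decoders and uop encodings are far more involved.
    def crack(instr):
        op, dst, src = instr
        if op == "add" and dst.startswith("["):        # e.g. add [rbx+8], rax
            return [("load",  "tmp", dst),
                    ("add",   "tmp", src),
                    ("store", dst,   "tmp")]
        return [instr]                                 # register-only ops pass through

    print(crack(("add", "[rbx+8]", "rax")))
    print(crack(("add", "rcx", "rax")))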
All the moving pieces in Intel x86-to-RISC decoding (instruction decoder, μop cache, Microcode Sequencer ROM...) use up a non-trivial amount of silicon and power.
That’s not nothing, but usually when people talk about this the rhetoric assumes it’s much greater, and that e.g. ARM doesn’t have similar issues supporting its older instructions, albeit at a smaller scale. If you look at the results, and a couple of decades where everyone else was struggling to match x86 on either performance or non-embedded power efficiency, it clearly wasn’t holding them back that much. Even Intel’s huge moon-shot clean architecture failed to outperform, despite starting with considerable experience and no legacy baggage.
I'm not sure. If that were the case, why wouldn't Intel expose a native instruction set that maps better to the μops, alongside the legacy, difficult-to-decode one, and let apps use the newer one (or the subset of existing instructions which do map well)?
My guess is (1) inertia, and (2) not wanting to commit to a specific instruction set because they tweak the internal μop set with every release.
And (3), this would add complexity, because, even though one is much simpler, now you need two completely separate decoding pipelines and mechanisms for switching between them.
Can't argue with (1). For (2), it would still be an instruction set separate from the μops, so you might not be as sensitive to changes - you still get a level of indirection. For (3), it depends. You might gain in power if the newer one gets used more, and you could make the older decoder simpler and accept 90% of the speed. But maybe the instruction decoding that counts isn't that expensive compared to OOO logic, branch predictors, and 512-bit ALUs.
Note that I wasn’t arguing that Intel had switched from one textbook architecture to another: only that it isn’t really an accurate way to discuss modern chips after decades in which large teams of smart people have been borrowing each other’s ideas.
As to your specific example, ARM has instructions which do complex operations as well. Does that mean it’s not a RISC CPU, or just that some engineers made a pragmatic decision to support things which are done heavily like AES or SHA?
> it isn’t really an accurate way to discuss modern chips
I think it’s still mostly accurate, CISC/RISC is about public API i.e. instruction set. What’s inside a core is implementation details, very interesting ones and can be important for performance, but still.
> ARM has instructions which do complex operations as well
True, but I don’t think they combine those complex math operations with RAM access in a single instruction?
Apple phones do not use ARM-designed processors. They just implement the same instruction set. If it’s possible for Apple to develop a mobile processor independently of ARM, why shouldn’t it be possible for Intel?
> Can't, say, Ericsson take the radios in their cell towers or picocells, redo their tape-outs for phone scale, and end up with a workable mobile radio?
They can, and they did. Not exactly of course, the two products are fundamentally different, but the organization owns enough IP and good engineers to make a viable product. From a technology standpoint it seems reasonable.
The problem is that the competition is tough and you need to stay on your toes, with much more rapid iteration than on the "other leg" where you sell expensive stuff to telcos. And the margins on the consumer side are smaller. So, historically, it has proven very difficult to house those two "legs" in the same organization; the bigger, more reliable business tends to push the other out. It's been the same with much of the computer industry too.
And you can also buy the company that designed PowerPC chips better than IBM and Motorola could back in the day, and have them kickstart your custom ARM chip design.
They bought a company that already was a significant player building ARM CPUs, that’s how. There aren’t many (any?) significant baseband companies that are as easily acquirable for Apple to buy.
Yep, they were using ARM cores up until the A5 (iPhone 4S, ARM Cortex-A9), then a custom core for the A6, and then... have people forgotten how the A7 came out of nowhere with its custom 64-bit core and completely blew away the rest of the industry with its performance?
I would expect making CPUs and GPUs to be a lot more difficult than modems, no?