Intel is NOT competing against AMD only. In the past couple of years, we’ve seen a number of big tech companies developing their own chips. Focusing on AMD would be quite myopic from a strategic pov. This market is only getting more competitive. Either you compete on performance or price.
After purchasing an M1, I'm starting to realize how viable ARM is as a main platform. Nearly everything I want to run on it has a natively built version, and runs great on it. I could easily move anything I've built to a server running ARM with little frustration. I think that may be a bigger part of the future.
Before the M1, my only exposure to ARM had been low-power SBCs and Android devices, and the experience was mediocre in the "just works" department: poor hardware support and a lack of proprietary software support. Performance was also lacking. Apple's tight integration and high-end CPUs have resulted in a vastly better experience, but I want to have more options than just macOS and MacBooks. I think we're trending in the right direction, but it's going to be a while (5 years IMO) before we see anything approaching competitive with the M-series chips from major market players. If Microsoft could fix their frankly horrid x86 compatibility on aarch64 devices, things would speed along nicely I think.
I have tried buying Microsoft ARM computers for the last two generations now: both the Surface Pro X with the Qualcomm SQ1 and SQ2, as well as a Yoga 5G.
Windows performance on these platforms is such trash that it feels like going back ten years in ultrabooks. Even Microsoft's own apps are not optimized, and some, like Visual Studio, didn't even run.
Compare that to the built-in x86 emulation on the Apple M1: it performs close to native on a $1000 MacBook Air.
Microsoft definitely has different priorities, like how to change settings for a user without their permission, or how to hide settings so users have less choice. The Windows experience has gone steadily downhill since Win7.
I agree. But it seems like Microsoft is back to old habits in Win11 with browser settings etc., having different standards for Edge vs. others. HN discussion: https://news.ycombinator.com/item?id=28225043
I vaguely remember seeing a video demonstrating an M1 device virtualizing Windows on ARM faster than it ran on Surface ARM hardware. Kind of reminds me of how an Amiga of the era could be set up to virtualize(?) Mac OS faster than a contemporary hardware Mac could.
> If Microsoft could fix their frankly horrid x86 compatibility on aarch64 device
I don't think Microsoft is the real problem there, though.
NT was developed to be portable and was working on architectures other than x86 in the beginning.
So it was interesting when I heard things about "Windows on ARM" half a decade ago--and then the Surface RT. The RT was crap, but it did have real Windows NT working on non-Intel ARM, as was the OS on their Windows Series 10 phones or whatever.
So Microsoft is already there on an OS level. It's the big software vendors that have to be corralled to switch somehow (Autodesk, Adobe, etc.) Honestly .NET overall was probably at least in part Microsoft trying to get developers on something more CPU-agnostic to reduce dependence on x86.
I'm not so optimistic. There are some technical things Microsoft did poorly when going from x86 to x86-64, which in my opinion delayed the transition of a lot of software by a decade. And this is with processors that can run both instruction sets natively, where no actual software emulation was required.
To give some context (this started with Windows Server 2003 64-bit and is still how it works in Windows 11): Instead of implementing fat binaries like OS X did, they decided to run old x86 applications in a virtualized filesystem where they see different files at the same logical path. This results in double the DLL hell nightmare, with lots of confusing issues around which process sees which file where. For many use cases around plugins, this made a gradual transition impossible. (Case in point: the memory-hungry Visual Studio is currently still 32-bit. The next release will hopefully finally make the switch.)
Also, it’s surprising how much stuff in Windows depends on loading unknown DLLs into your process, like showing the printer dialog. So you run into these problems all the time.
Have they learned their lesson? It doesn’t look like it. Last I checked, x86 on ARM uses the exact same system as x86 on x86-64. If they ever emulate x86-64 the same way, that’s triple DLL hell right there. And I don’t think they’ll get a decade to sort things out this time around.
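To make the "same logical path, different files" behavior concrete, here's a minimal Win32 sketch. The APIs it calls (IsWow64Process, GetSystemDirectoryA, GetSystemWow64DirectoryA) are real, documented ones; the scenario in the comments is just the standard WOW64 file system redirection described above, sketched from my understanding rather than anything authoritative:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Is this a 32-bit process running on 64-bit Windows?
        BOOL isWow64 = FALSE;
        IsWow64Process(GetCurrentProcess(), &isWow64);
        printf("Running under WOW64: %s\n", isWow64 ? "yes" : "no");

        // The interesting part is not these two paths themselves but the
        // redirection behind them: a 32-bit process that opens a file under
        // C:\Windows\System32\ is silently handed the copy under
        // C:\Windows\SysWOW64\ instead (same logical path, different file).
        char sys32[MAX_PATH] = {0}, wow64[MAX_PATH] = {0};
        GetSystemDirectoryA(sys32, MAX_PATH);       // reports ...\System32
        GetSystemWow64DirectoryA(wow64, MAX_PATH);  // reports ...\SysWOW64
        printf("System32 path: %s\nSysWOW64 path: %s\n", sys32, wow64);
        return 0;
    }

Build it once as 32-bit and once as 64-bit and have each open the same DLL path: you get two different files behind one path, which is exactly where the plugin and printer-dialog pain above comes from.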
Cool - perhaps that opens the way for an x64+ARM big.LITTLE processor, with a few hot, fast x64 AMD cores (big) and a lot of slow, efficient ARM cores (little).
I very nearly want them to double down on this disastrous strategy so in 3-5 years we’ll all be saved from Windows by an MS-run Linux distro (with windows theming, naturally) that just runs Wine+some MS internal goodies for backwards compat. It’s really not that different from Apple’s approach with Rosetta 2 in M1.
It’s crazy that this now aligns with Microsoft’s goals and could conceivably happen.
Microsoft has the capacity to realize that the value of Windows is not the codebase, but the compatibility. They could let the Linux subsystem swallow Windows and wrap Windows itself inside it.
However, I believe we’ll continue to see their colocation system instead, where Windows and Linux are both wrapped inside a system managing both.
What you described is actually closer to Apple's strategy for moving from Mac OS 9 to Mac OS X, with a virtual machine for running classic apps on the new OS.
Apple has more control over developers - these idiots pay them money for the "privilege" of developing on it. And Apple started by deprecating support for all 32-bit apps. That forced many developers to refactor or port their code. The x86 emulation support will end in the near future and will force the remaining developers onto the ARM platform.
> I don't think Microsoft is the real problem there, though.
> [...]
> So it was interesting when I heard things about "Windows on ARM" half a decade ago--and then the Surface RT. The RT was crap, but it did have real Windows NT working on non-Intel ARM, as was the OS on their Windows Series 10 phones or whatever.
In this specific case, Microsoft is the real problem: Microsoft deeply locked down the Surface RT; you needed a jailbreak to run unsigned applications on it.
> NT was developed to be portable and was working on architectures other than x86 in the beginning.
NT itself yes, but the userland? Not in the slightest. Apple provided Rosetta runtime translation at each arch transition, MS did not. As a result, no company even thought about switching PCs over to ARM which meant that there also was no incentive for the big players you mentioned to port their software over to RT.
We’ve been moving more and more to it. It works, and surprisingly well. It’s not quite up there for absolute single thread performance in our experience, but price/perf is excellent.
edit: really, I’m just waiting for Graviton 2 Fargate support, and then I’ll be able to move a lot of workloads.
The M1 is the only ARM part with an x86-like (TSO) memory ordering mode. This lets them JIT x86 into ARM without having to worry about the change in when writes to main memory become visible to different threads.
None of the server parts have that. But by the time you do run your code on an ARM server, most of the bugs will be worked out.
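To illustrate the kind of bug that ordering difference produces, here's a small C++ sketch (my own illustrative example, not something from this thread) of the classic message-passing pattern: with plain/relaxed stores it happens to work under x86's TSO model, but a weakly ordered ARM core is allowed to break it unless you add acquire/release or, as with translated code on the M1, run the hardware in a TSO mode:

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> data{0};
    std::atomic<int> ready{0};

    void producer() {
        data.store(42, std::memory_order_relaxed);
        ready.store(1, std::memory_order_relaxed);  // on ARM this store may become visible before `data`
    }

    void consumer() {
        while (ready.load(std::memory_order_relaxed) == 0) { /* spin */ }
        // x86's TSO keeps the two stores in program order, so this prints 42 there.
        // A weakly ordered ARM core may show ready == 1 while data is still 0,
        // which is exactly the class of bug binary translation would expose
        // if the hardware didn't offer an x86-like ordering mode.
        printf("data = %d\n", data.load(std::memory_order_relaxed));
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }

Hand-written ARM code just uses memory_order_release on the `ready` store and memory_order_acquire on its load; translated x86 binaries never made that choice explicit, which is why the M1's TSO mode (and its absence on current server parts) matters.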
I think we should look one step further: RISC-V. Open source is the best way to ensure consumers don't get shafted by someone doing the Intel model again, or by Apple keeping the M1 limited to their own devices.
You seem to be making the classic mistake of thinking that a given RISC-V processor is open source. The standard is open, the processor's source "code" (design) doesn't have to be.
This does not mean RISC-V use wouldn't be a good thing, as it prevents a whole boatload of legal issues, but it just isn't what a lot of people seem to think it is.
ARM could end up being a better ISA in the very high-clock high-IPC domain, it remains to be seen.
I accept that I should have been more careful in my wording. I intended to comment that having an open source ISA that you could create processors for is a huge step in creating competition.
If someone wants to compete with Intel, they would have a hard time even if they make an excellent processor, since they are unlikely to get an x86 license. With ARM you have to pay a licensing fee, and control of the ISA still rests with a private company.
With RISC-V you can make your own processor and have a good shot in the market. You will also have a chance to propose/comment on future ISA changes.
Intel IS competing against AMD mainly in the server space now. Of course at some point ARM and RISC-V servers will become mainstream, but it will take years. Intel is taking action now and it's aimed directly at AMD.
ARM is already there in the cloud, cf AWS Graviton.
With Apple's M1 in the laptop/desktop (mac mini) space, ARM and its superior power/performance ratio is a significant contender for mainstream compute now.
I know a few VPS distributors, and I've heard pretty mixed things about ARM's viability in the server space. Not only is it pretty expensive relative to x86, it's also pretty slow: you won't be getting SIMD instructions like AVX, which are huge in the server space. The only thing ARM has going for it is low IPC, but I really fail to see many applications where you could benefit from that, much less one where it would be worth the price premium over x86.
Maybe in 5 or 10 years, ARM will be viable. But by then, we'll all be rocking RISC-V CPUs because someone realized that accelerating for specialized workloads isn't a crock of shit when 90% of your workload is video decoding.
Maybe read a bit about Graviton? All Intel/AMD instances in AWS have Graviton processors handling network/disk IO unless it's a very old instance type, and large amounts of AWS's own services run on it as well.
Too soon do people forget that there were budget Zen 2 SiPs whooping the M1's ass back in 2019. It was at the expense of a slightly higher power draw, but that's a price I'm entirely willing to pay for a full-fledged x86 chip. I reckon that I'll accept no substitutes until RISC-V hits the mainstream in a more major way.
Because in the past, no one could justify competing with Intel. But with Xeon parts carrying huge profit margins, and companies like Apple tending to buy only the high-margin parts for their devices, the business people realized that it was cheaper to produce their own. Which is outrageous, if you think about it, given the amount of engineering investment required to build a competitive product. The fact that a slice of the customer base has decided the market is so broken that the financials work out better by avoiding Intel says they are way past the "too greedy" stage.
ARM maybe. But I'm not convinced that the ARM alliance (Fujitsu, Apple, Ampere, Neoverse) is quite as unified as you might think. Apple has no apparent goals for cloud/servers, Fujitsu seems entirely focused on the Japanese market, and Ampere Altra isn't reaching critical mass (Amazon prefers its own Neoverse-based design rather than joining forces with Ampere / using Altra).
As long as the ARM community is fragmented, their research/investments won't really be as aligned as they are for Xeon and/or EPYC servers.
HiFive / RISC-V aren't anywhere close to the server-tier.
Why does this matter? If popular OS distributions consistently target ARM-based CPUs, with a sufficient number of packages (esp. development-support-related) working on them, then who cares about fragmentation? An organization could buy systems with ARM chips and software will basically "just work".
Same argument for consumer PCs, although there you have the MS Windows issue I guess.
The more fragmented your community, the harder it is for software to work consistently across all of them. Intel vs AMD has plenty of obscure issues (see "rr" project, and all the issues getting that debugging tool to work on AMD even though it has the same instruction set).
Sound, WiFi, Ethernet, southbridges, northbridges, PCIe roots. You know, standard compatibility issues that having a ton of SKUs just naturally makes more difficult. Having a "line" of southbridges / consistent motherboards does wonders for compatibility (fix the BIOS/UEFI bug in one motherboard, fix it for all) in Intel/AMD world.
But just as AMD has AMD-specific motherboard bugs, and Intel has Intel-specific motherboard bugs... I'd expect Graviton to have its share of bugs that are inconsistent with Apple M1 or Ampere Altra.
Amazon doesn't offer Graviton on the open market. You can only get those chips by buying AWS.
Graviton is a standard Neoverse N1 core, which is slightly slower than a Skylake Xeon / Zen 2 EPYC. There's hope that N2 will be faster, but even if it is, we don't really have an apples-to-apples comparison available (since Amazon doesn't sell that chip).
The most likely source of Neoverse cores is the Ampere Altra, which is expected to have N2 cores shipping eventually. As usual though: since Ampere has lower shipping volume than other companies, the motherboards are very expensive.
x86 (both Intel and AMD) has extremely high volumes, so from a TCO perspective it's hard to beat, especially when you factor motherboard prices into the mix.
It's not about the price but the ability to create chips that fit your needs. For example, YouTube is now building its own video-transcoding chips.
The biggest cost of making chips is the foundry, and the foundry ecosystem has reduced that cost to the point where almost anyone can go fabless and just outsource to a foundry like TSMC or Samsung.