
Be careful; the only performance comparison they made was between the x86_64 build and the arm64 build. Both were run on the same hardware, with the x86_64 build necessarily running in emulation. This only proves that the ported game runs better than the non-ported game running in emulation, not that it runs better on an M1 Mac than on other computers.


The post actually mentions what map they used for benchmarking, and linked to a bunch of other benchmarks on the same map[0].

They quote around 200 UPS average. It's hard to compare with the linked benchmarks, since those quote p75 numbers instead of averages, but it seems like the results are in the same general ballpark as the Ryzen 9 5950X.
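
To illustrate why the two aren't directly comparable, here's a quick Python sketch with made-up sample numbers (I don't know exactly how that site computes its p75, so treat this as the general idea only):

    # Hypothetical per-second UPS samples; real data would come from
    # the game's own tick timings.
    import statistics

    ups_samples = [212, 205, 198, 190, 202, 195, 188, 210]

    avg = statistics.mean(ups_samples)
    p75 = statistics.quantiles(ups_samples, n=4)[2]  # third quartile = p75
    print(f"average = {avg:.1f} UPS, p75 = {p75:.1f} UPS")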

0. https://factoriobox.1au.us/results/cpus?map=4c5f65003d84370f...


Wow thanks for this. I noticed the 13700k results are surprisingly lackluster. The highest speed memory kit there is 5600. I just got a 13700k with Hynix A-die (now clocked to 7200). I have an AIO arriving in the mail in a few days to replace my NH-D14. The 13700k has quite a bit of headroom when not thermally constrained.

I think that the 13700k and 13900k with the same turbo ratio should perform almost the same in gaming workloads. The only difference should be the 36 MB vs. 30 MB of last-level cache. It's a modest difference, but Factorio is sensitive to memory subsystem performance.

I'll add a benchmark to that page in a few days with a 5.8 GHz clocked 13700k to test the theory.


Update for posterity: the ALF II 280 actually performed the same as my NH-D14 in thermal stress testing. I ran y-cruncher as an all-core workload that reliably thermal throttled at 100 °C @ 220 W. The clock frequency depends on the voltage given, which has been a pain to tune. I don't think this chip can run beyond 5.6 GHz stable without adding so much voltage that performance actually drops in most workloads. 5.7 GHz can be made borderline stable for most workloads, 5.8 GHz is unstable, and 5.9 GHz does not boot. I know the adaptive voltage mode should be able to address this, but something in the BIOS is incorrect. These results refute my theory that the 13700k could match the 13900k; the 13900k is in fact better binned.

That doesn't mean the 13700k can't match (or exceed) the out-of-box Factorio performance of the 13900k when given better memory, hence the score of 304 UPS.

https://factoriobox.1au.us/result/21784265-472e-4275-847c-dd...

Bonus: E-cores have thermal headroom at stock and can be stable at 4.5 GHz if given +0.1 V, but this cuts into the thermal headroom of the P-cores in all-core workloads and lowers the overall performance. Bumping to 4.3 GHz from 4.2 GHz with no voltage increase is stable.


A Noctua NH-D14 is considered "thermally constrained" for these 13th gen chips? Good lord.

I'm looking forward to your test results though as I'm considering building a new desktop around 13th gen.


Not sure about the performance of the NH-D14 with a 13700K, but my NH-D15 (upgraded from an NH-U12A) is running OK with a 13700K at Intel's stock PL1/PL2 settings, hovering around 60-70 °C under full load at 25 °C ambient. Unlimited PL2 is a different story and can go up to 100 °C under full load. My previous NH-U12A build ran about 2-4 °C hotter under load.

I've been experimenting with different settings and found that unlimited PL2 plus undervolting the CPU by -150 mV gives the best temperature-to-performance tradeoff, at ~80 °C under full load. It has been running stable for a few days, and I'm pretty happy with the result so far.


For stability testing I like y-cruncher. It teases out edge cases that many XMP profiles are unstable with. I'd rather have a slightly less efficient system that I'm confident will not error.

Also, those numbers aren't far off the out-of-box behavior I had, but I like to tinker. I throttle with PL set to 190 but not at 180.


I'd avoid Intel entirely if heat/power efficiency matter to you at all. AMD has had acceptable performance with far better heat/power use across the board for a few years now.


The Factorio benchmarking scene is a lot more sophisticated than when I last looked.


> with the x86_64 build necessarily running in emulation

Rosetta 2 is not emulation at all; it's AOT, static binary translation, backed by hardware that implements Intel-specific behaviour from the latest chips down to the oldest 8080 or something. It's eerily fast.

In fact, x86_64 code translated to arm64 and run on Apple Silicon can often be faster than the same code running natively on the latest Intel Macs.

So you really have to ask two questions here:

- does the x86_64 Factorio build run faster on Apple Silicon than on a comparable† Intel?

- on Apple Silicon, does the arm64 Factorio build run faster than the x86_64 Factorio?

† whatever that means


Are there any 5nm Intel processors to compare with? I think an AMD chip might have a more comparable performance profile.


>Rosetta 2 is not emulation at all; it's AOT, static binary translation, backed by hardware that implements Intel-specific behaviour from the latest chips down to the oldest 8080 or something.

So, it is hardware emulation.


No.

Imagine you only speak English and you want to read a novel in French.

Emulation: you hire a translator to read the novel to you. They translate each word while reading.

Static translation: you hire a translator to transcribe the book from French to English. They give you a printed book purely in English. But simple French words like flâner and râler are expanded into lengthy passages because there is no simple English translation.

Rosetta 2: you hire the translator to transcribe the book to English, but they leave in unique French words and teach you what they mean so you can understand them in an English phrase without even noticing that the word isn’t “real” English.

Rosetta 2 isn’t emulation because no instruction is translated on the fly to a different ISA. It’s static translation plus ISA extensions. There is no lower level emulating anything.
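
To put the analogy in code, here's a toy sketch (Python, purely illustrative, nothing like Rosetta's real internals) of the difference between interpreting and translating ahead of time:

    # Toy model: an emulator/interpreter translates every instruction
    # each time it executes; static (AOT) translation converts the
    # whole program once, then runs the native result with no
    # translator left in the loop.

    guest_program = ["LOAD", "ADD", "STORE"]     # stand-in for x86_64 code
    to_native = {"LOAD": "ldr", "ADD": "add", "STORE": "str"}

    def run_native(op):                          # stub: execute one arm64 op
        print("exec", op)

    def emulate(program):                        # translate on the fly, every run
        for insn in program:
            run_native(to_native[insn])

    def translate_aot(program):                  # one-time cost, up front
        return [to_native[insn] for insn in program]

    emulate(guest_program)                       # translator runs alongside the code
    for op in translate_aot(guest_program):      # translator is already gone
        run_native(op)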


As a slight correction, I believe Rosetta 2 also has a JIT mode, which is a bit more like conventional emulators. But it's used infrequently, eg when dealing with x86_64 apps that themselves use a JIT.


> I believe Rosetta 2 also has a JIT mode

It does have JIT translation (not a JIT "mode" though, as it always uses AOT translation, relying on JIT translation at runtime only for the parts that need it)

> which is a bit more like conventional emulators

Not at all†: Rosetta 2 does the same†† translation step on dynamically generated Intel code, and the arm64 output can be reused afterwards

> But it's used infrequently, eg when dealing with x86_64 apps that themselves use a JIT

Yes, although it's more like "exceedingly rarely" in practice, since those interpreters are usually up to date enough to have a native arm64 release.

See there for details: https://dougallj.wordpress.com/2022/11/09/why-is-rosetta-2-f...

† Unless you've been meaning dynarec, but I would not call that "conventional" although it is a well-known technique https://en.wikipedia.org/wiki/Dynamic_recompilation

†† IIUC, minus a few things that can't be done just-in-time because some assumptions are not guaranteed to hold.
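
For anyone curious, a minimal sketch of the AOT-plus-JIT-fallback idea (Python, illustrative only, not Rosetta's actual design):

    # Translate what you can ahead of time, fall back to runtime
    # translation for code generated dynamically (eg by a guest JIT),
    # and cache the output so each page is translated only once.

    translation_cache = {}                  # guest code page -> arm64 code

    def translate(page):                    # stub for the real translator
        return "arm64(" + page + ")"

    def get_native(page):
        if page not in translation_cache:   # AOT prefilled most of this
            translation_cache[page] = translate(page)  # JIT path, on demand
        return translation_cache[page]

    get_native("static_code")               # translated once, reused forever
    get_native("jit_emitted_code")          # dynamic code: same path, cached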


Thanks, that is definitely correct. Rosetta 2 can do JIT, and it gets exercised for native JIT / dynamic code.

I could probably extend the metaphor to an avant-garde French novel that asks the reader to look up and include today’s headlines from Le Monde, but it was already stretched.


>> implements Intel-specific behaviour from the latest chips down to the oldest 8080 or something.

> So, it is hardware emulation.

It's more like there's a full Intel CPU in disguise, only with instructions and registers having another name.


Factorio runs quite well on the M1. The graphics system (FPS) is partially decoupled from the factory simulation (updates per second, or UPS), so there are two components to performance. UPS mostly depends on how big and complex your factory is and on what mods you're running; FPS mainly depends on how many sprites are on screen. FPS is limited to be <= UPS, since there's no point in redrawing until the game state changes, but UPS can run higher than FPS.
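
The decoupling is roughly this pattern (a Python sketch of the general idea, not Wube's actual code):

    # The simulation ticks at a fixed rate, and a frame is drawn at
    # most once per tick, so FPS <= UPS by construction.
    import time

    TICK = 1 / 60                          # target 60 updates per second

    def update_simulation(): pass          # stand-in for the factory sim
    def renderer_ready(): return True      # stand-in for "the GPU kept up"
    def render_frame(): pass               # stand-in for drawing sprites

    for _ in range(60):                    # one second of game time
        start = time.monotonic()
        update_simulation()                # UPS work happens every tick
        if renderer_ready():               # frames get dropped when the
            render_frame()                 #   GPU lags, so FPS <= UPS
        time.sleep(max(0.0, TICK - (time.monotonic() - start)))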

FPS: Despite being a 2D sprite game, it sometimes has trouble keeping FPS at 60, at least when running at max graphics settings and max zoom level with a graphically intensive mod. I would guess it's using OpenGL, and Apple's OpenGL stack isn't great. You can see the article mentioning the M1 Max only hitting 45 FPS in one of the tests, and this is without mods (but with a huge base and presumably a wide zoom level). In my experience, if you adjust the graphics settings appropriately (eg max sprite atlas size and max VRAM usage, since integrated graphics use unified memory), you can usually keep a smooth 60 FPS 99% of the time, even in graphically intensive setups with max or near-max quality settings.

UPS: Scoring 199 UPS on the flame_sla 10k base puts the M1 Max above any other laptop processor for that benchmark. This matches my experience: the simulation part of the game almost never lags, except for unavoidably heavy operations (eg generating new worlds when playing with mods that do that). See a comparison at:

https://factoriobox.1au.us/results/cpus?map=4c5f65003d84370f...


> UPS: Scoring 199 UPS on the flame_sla 10k base puts the M1 Max above any other laptop processor for that benchmark.

It puts it above an EPYC 7763! I presume it wasn't using all 64 cores though.


Yeah, Factorio is multithreaded, but in practice only a few parts of the update run in parallel. Its performance is instead determined in large part by the memory subsystem, which is why the X3D processors do so well. That's probably also part of the M1's great performance: with a large cache and on-package DRAM, it has very competitive bandwidth and latency.
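
One way to see that sensitivity is a pointer-chasing microbenchmark. A rough Python sketch (interpreter overhead dominates here, so treat it as an illustration of the access pattern, not a real measurement; a C version would show the effect cleanly):

    # Once the working set outgrows the cache, every hop pays DRAM
    # latency; a big factory's entity updates resemble this pattern.
    import random
    import time

    N = 1 << 22                    # millions of nodes, far bigger than L3
    perm = list(range(N))
    random.shuffle(perm)           # random next-pointers defeat the prefetcher

    i, t0 = 0, time.perf_counter()
    for _ in range(1_000_000):
        i = perm[i]                # each step is a dependent load
    elapsed = time.perf_counter() - t0
    print(f"{elapsed * 1e3:.1f} ns per hop")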


Some anecdata: I've played Factorio on both an Intel MBP (2018) and an M1 MBP (2021); even under Rosetta, the M1's performance blew away the Intel machine. Being M1 native means even faster performance with a lower power impact.


Steam has a surprising number of Mac games, but I fear many will never receive a native M1 update.


The current Steam problem on macOS is more about 32-bit games that never received 64-bit builds, which means they haven't been playable since Catalina, regardless of Intel vs Apple Silicon hardware; notably the first-party GoldSrc and Source engine games, and all of their third-party derivatives.

I would not really care if my game library was going through Rosetta 2, as I'd rather take a theoretical performance hit (vs a native arm64 build) than outright be unable to play.


At one point even Intel Macs broke compatibility with every game that was 3 years old, or something stupid like that. I remember a working game of mine simply refused to launch after a macOS version upgrade. Exact same machine, refusing to run my software overnight.

This kind of attitude just isn't conducive to gaming, where people like to build libraries on Steam and expect everything to keep working for a long time.

On my PC, I can fire up games from 20 years ago and they work perfectly. The Witcher 3, a 7-year-old game, is getting an overhaul. I expect no problems downloading it from my Steam library and playing it seamlessly on my relatively new PC.


Yep - they killed 32-bit compatibility and it's annoying.

IIRC 64-bit Windows finally killed Win16 support, but that was rarely used for games, and those games you can run in DOSBox (which, amusingly enough, works fine on the Mac in many cases).


Meanwhile I’ve been happy playing games in Parallels (virtual Windows arm64) like Against the Storm. Crusader Kings 3 has better perf in Parallels than the macOS build.


Indeed! Here is a list of good games that run on the Mac:

https://hypertexthero.com/mac-video-games-for-streaming/

I try to keep it updated and suggestions are always welcome.


Here's another list to keep around.

https://www.applegamingwiki.com/wiki/Home


Does Proton work for Mac titles, or is it only for Windows? I've been able to play basically every game in my Steam library on Linux using Proton. I have hundreds of games.


Theoretically it could, although:

> The main Proton issue is that it runs on DXVK, and MoltenVK is not always up to parity with implementing Vulkan API calls on Metal reliably.

https://github.com/ValveSoftware/Proton/issues/1344#issuecom...

An alternative would be using CrossOver (which pulls from Wine and adds stuff like MoltenVK), which is what Proton does as well (pulling from Wine and adding stuff, but not MoltenVK) and vendors internally, Valve "just"† doesn't pull from the CrossOver changes nor expose Proton on macOS.

† scare quotes because it may not be as easy as it seems


Proton helps you run Windows games on Linux. A few years ago, the problem with Mac games on Steam was running 32-bit Mac games on 64-bit-only macOS. Now, it's running x86 Mac games on arm64 Macs. Often, we're talking about those same 32-bit games that never got updated to 64-bit x86, let alone arm64.


Fair; I'll try to not get my hopes up too high.


IIRC from the benchmarks, the M1 has some truly great single-core performance. Then again, Apple stuff is so fucking expensive that you can easily get faster for cheaper...


What laptop has comparable perf for much cheaper?


Or anything like the same battery life?


Is this the part where we find out they are comparing laptops to desktops?



