PhD physicist (Stanford/SLAC), Research Software Engineer doing low-level systems work in C/C++ and LLM research. Not a founder or investor — just a practitioner.
One data point for this thread: the jump from Opus 4.5 to 4.6 is not linear. The minor version number is misleading. In my daily work the capability difference is the largest single-model jump I've experienced, and I don't say that casually — I spent my career making precision measurements.
I keep telling myself I should systematically evaluate GPT-5.3 Codex and the other frontier models. But Opus is so productive now that I can't justify the time. That velocity of entrenchment is itself a signal, and I think it quietly supports the author's thesis.
I'm not a doomer — I'm an optimist about what prepared individuals and communities can do with this. But I shared this article with family and walked them through it in detail before I ever saw it on HN. That should tell you something about where I think we are.
one feels the llm wow moment when the thing they do in their own area has been surpassed by an llm. newer versions of llms are probably trained on feedback from developer code-agent sessions; this is probably why pro developers started to feel "wow" recently.
the real challenge will be in the frontier of the human knowledge and whether llms will be able to advance things forward or not.
ps1: i'm using 5.3/o4.6/k2.5/m2.5/glm5 and others daily for development, so my work has intensified about 1.5x. i tackle increasingly harder problems, but llms still fail big on brand-new challenges, like i do too. so i'm more alert than ever.
ps2: syntactical autocomplete used to write 80% of my code; now llms have replaced autocomplete, but at a semantic level. i think, and the llm implements most of my actions, like a cerebellum for muscle coordination; but sometimes it teaches me new info from the net.
The frontier-of-knowledge point is the right question. My own research is a case in point - I apply experimental physics methods to LLMs, measuring their equations of motion in search of a unified framework for how and why they work. Some of the answers I'm looking for may not exist in any training data.
That's where the 4.5->4.6 jump hit me hardest - not routine tasks but problems where I need the model to reason about stuff it hasn't seen. It still fails, but it went from confidently wrong to productively wrong, if that makes sense. I can actually steer it now.
The cerebellum analogy resonates. I'd go further - it's becoming something I think out loud with, which is changing how I approach problems, not just how fast I solve them.
That change in wrongness is the frontier labs trying to remove their benchmaxxing bias, so the models now have a concept of "I don't know" and will rethink directions and goals better. There was lots of research last year on this topic, and it takes 6 to 12 months before it is implemented for general consumption.
If you use Claude Code, it will take you half a day to learn to use Codex, and like 30 minutes to start being productive in it. The switching cost is almost zero. Just go test out GPT 5.3; there is no reason not to.
It's a bit more than zero, because I have substantial tooling around Claude Code – subagents, skills, containerization, &c – that I'd have to (have Opus...) reimplement.
Exactly. If Codex is really as good, it should have no problem porting any settings or config from the Claude setup. (and I do believe it wouldn't have much of a problem)
Experimental particle physicist here. It's just hard.
I measured the electron's vector coupling to the Z boson at SLAC in the late 1990s, and the answer from that measurement is: we don't know yet - and that's the point.
Thirty years later, the discrepancy between my experiment and LEP's hasn't been resolved.
It might be nothing. It might be the first whisper of dark matter or a new force. And the only way to find out is to build the next machine. That's not 'dead', that's science being hard.
My measurement is a thread that's been dangling for decades, waiting to be pulled.
What would the cost of the "next machine" be? Is it going to be tens of billions, or can we make progress with less money? If it is going to be tens of billions, then maybe we need to invest in engineering to reduce this cost, because it's not sustainable to spend thirty years and tens of billions for every incremental improvement.
This kind of slow, incremental improvement that costs tens of billions of dollars and takes decades gave us the microchips that ultimately enabled you to type this comment on your phone/computer. The return on that investment is obvious.
But it is not just about making money: The entire field of radiation therapy for cancer exists and continues to improve because people figured out ways to control particle beams with extreme precision and in a much more economical way to study particle physics. Heck, commercial MRIs exist and continue to improve because physicists want cheaper, stronger magnets so they can build more powerful colliders. What if in the future you could do advanced screening quickly and without hassle at your GP's office instead of having to wait for an appointment (and possibly pay lots of money) at an imaging specialist center? And if they find something they could immediately nuke it without cutting you open? We're talking about the ultimate possibility of Star Trek level medbays here.
Let the physicists build the damn thing however they want and future society will be better off for sure. God knows what else they will figure out along the way, but it will definitely be better for the world than sinking another trillion dollars on wars in the middle east.
Jack Kilby at Texas Instruments and Robert Noyce at Fairchild did not require tens of billions of dollars. Sherman Fairchild invested $1.3 million and the treacherous eight each put in $500. Fairchild did have the right to purchase the firm for $3 million, which of course he exercised. Similarly, Shockley's lab was funded by a $1 million grant in the '50s.
There is a lot of handwaving going on here, conflating the incredibly cheap, mostly privately funded investments that launched the computer generation with the massively expensive, extremely gradual gains we are making now with particle accelerators. Part of it is that people just can't imagine how little was invested in R&D to get these stunning results, given how much we have to invest today to get much less impressive results, so they assume that semiconductors could not have been invented without tens of billions of dollars of research.
There are diminishing returns, just as a 90nm process is really all you need to get 90% of the benefits of computerization -- you can drive industrial automation just fine, all the military applications are fine, etc. But going from a 90nm process to a 3nm process is an exponential increase in costs. In a lot of fields we are at that tail end where costs are incredibly high and gains are very low, and new fields will need to be discovered where there is low-hanging fruit, and those fields will not require "tens of billions" of dollars to get that low-hanging fruit.
Even with particle accelerators, SLAC cost $100 million to build and generated a massive bounty of discoveries, dwarfing the discoveries made at CERN.
To pretend that there is no such thing as a curve of diminishing returns, and to say that things have always been this way is to not paint an accurate picture of how science works. New fields are discovered, discoveries come quickly and cheaply, the field matures and discoveries become incremental and exponentially more expensive. That's how it works. For someone who is in a field on the tail end of that process, it's not good history to say "things have always been this way and have always cost this much".
Duh. The first cyclotron was built for, like, a thousand bucks. Many of the following colliders were also ridiculously cheap by comparison. But in the same way the semiconductor industry now spends billions on EUV research to keep making progress, particle physics spends billions on colliders. And when you account for real GDP growth, collider costs have actually been stagnating for decades.
> This kind of slow, incremental improvement that costs tens of billions of dollars and takes decades gave us the microchips that ultimately enabled you to type this comment on your phone/computer.
No. These two cases are absurdly different, and you're even completely misunderstanding (or misrepresenting) the meaning of the "tens of billions of dollars" figure.
Microchips were an incremental improvement where the individual increments yielded utility far greater than the investment.
For particle physics, the problem is that the costs have exploded with the size of facilities needed to reach higher energies (the "tens of billions of dollars" is for one of them), but the returns in scientific knowledge (let alone technological advances) have NOT. The early accelerators cost millions or tens of millions and revolutionized our understanding of the universe. The latest ones cost billions and have confirmed a few things we already thought to be true.
> Let the physicists build the damn thing and future society will be better off for sure.
>Microchips were an incremental improvement where the individual increments yielded utility far greater than the investment.
You should look up how modern EUV lithography was commercialised. This was essentially a big plasma physics puzzle. If ASML hadn't taken on a ridiculous gamble (financially on the same order of magnitude as a new collider, esp. for a single colpany) with the research, Moore's law would have died long ago and the entire tech industry would be affected. And there was zero proof that this was going to work beforehand.
You should look up how modern EUV lithography was commercialised. It was essentially a big plasma physics puzzle. If ASML hadn't taken on a ridiculous gamble with the research (financially on the same order of magnitude as a new collider, especially for a single company), Moore's law would have died long ago and the entire tech industry would be affected. And there was zero proof beforehand that it was going to work.
The rest I don't know enough to comment on, but as far as technology goes both LHC and EUV lithography are bespoke systems. Seriously doubt there is any path dependency. Huge part of LHC cost were earthworks and precision construction of complex machinery at enormous scale.
EUV uses mirrors rather than lenses, and the precision surfaces on those are something that more likely came out of space programs. But honestly, I have no problem with throwing a few billion at basic science that might go nowhere. It's a drop in the ocean compared to war and corporate welfare.
Engineers not being able to fathom that by building these huge-ass, complicated machines to answer questions about the fundamentals of nature, other problems get solved or new things get invented that improve and change our lives, will never not be funny to me.
This is a pretty common mistake - why not invest directly in trying to solve those problems instead of hoping to learn something by chance from different activities?
Just as funny as armchair science enthusiasts not being able to fathom that research budgets are limited and it makes sense to redirect them into other, more promising fields when a particular avenue of research is both extremely expensive and has shown diminishing returns for decades.
The more important question is, are you content with simply dismantling any progress in accelerator science at all for the next century? Because the LHC's successors won't be online till the 2050s at least. If you don't fund them now and start the work, then no one does the work, no one studies the previous work (because there's no more grant money in it), the next generation of accelerator engineers and physicists doesn't get trained, and the knowledge and skill base withers and literally dies.
Because the trade off of no new accelerators is the definite end of accelerator science for several generations.
Real scientists don’t call others armchair scientists, it’s just belittling. Do you resort to ad hominem because you feel like your argument is not strong enough, so you have to try to attack the person as well?
There is no way to answer that - we have limited money/people/time. Whatever we fund - we will get whatever the returns are - but there is no way to know what we don't have because we didn't fund some other thing. Even if in a few years we fund that other thing - what we get out of those funds is influenced by the other things we already know and so whatever we get out of it also shows the results of the other research that we already have.
The only exception is if some research reveals nothing. Though this isn't a useful claim: "it doesn't work" still revealed something.
Given that you can do a lot more research in different fields at the same time for the amount of money the next bigger particle accelerator would cost, the answer is very likely yes.
Ok, which field? How much money will be needed? What potential experiments are lined up in those fields that need money to go forward?
Particle physics has told us a lot about the base nature of our model and the affirmation of the standard model. The fruits of these labors still take decades to make their mark on our world.
And, we still are working on those other things at the same time too. It turns out with 8 billion people on the planet and modern technology we can get an absolute fuckload done at once.
The way SpaceX and Tesla patent new industrial-scale techniques and technologies for any competitor to use is the bull case for bringing the future forward faster. Look at Starlink.
You might say that the statement you were replying to also needs some backing, but they did give some, although you believe it was incorrect.
It just seems that "absolutely not" goes against the conventional wisdom that knowledge for knowledge's sake will, somewhere down the road, lead to a greater return than was expended on getting it, which really has been one of the main underlying ideas of Western civilization since before Newton.
"Absolutely not" means future society will not be better off! That seems an absurdly pompous and conceited statement to make unless you have a time machine, or at least a big mess of statistics showing that scientific advances in physics have, over a significant stretch of time, failed to return their cost, and even that would not rise to the certainty of "absolutely not".
> The latest ones cost billions and have confirmed a few things we already thought to be true.
Yes, but we had hopes that it would lead to more. Had it led to more (something only knowable in hindsight), who knows where that would have ended us up? What if it had upended the standard model instead of reinforcing it?
> Absolutely not.
What are we supposed to do then? As humans, I mean. No one knows why we're here, what the universe really is like. We have some pretty good models that we know are wrong and we don't know what wonders the theoretical implications of any successor models might bring. That said, do we really need to motivate fundamental research into the nature of reality with a promise of technology?
I'm not arguing for mindlessly building bigger accelerators, and I don't think anyone is - there has to exist a solid line of reasoning to warrant the effort. And we might find that there are smarter ways of getting there for less effort - great! But if there isn't, discrediting the venue of particle accelerators due to their high upfront cost as well as historical results would be a mistake. We can afford it, and we don't know the future.
>I'm not arguing for mindlessly building bigger accelerators, and I don't think anyone is
But you are, and they are. Just from the comments here it's clear that even suggesting not to spend untold billions on maybe pushing theoretical physics a little forward is met with scorn. The value proposition, either in knowledge or in technology, is just not well argued anymore beyond hand-waving.
No, I'm not and neither is anyone else. It's common sense that we should explore options that require less effort, just as one would in any project. I'm saying that we can't discredit huge particle accelerators due to, in the grandest scheme of things, a small economic cost and past results of a different experiment.
> Yes, but we had hopes that it would lead to more. Had it led to more (something only knowable in hindsight), who knows where that would have ended us up? What if it had upended the standard model instead of reinforcing it?
Sure, but it didn't. Which is knowledge that really should factor into the decision to build the next, bigger one.
> What are we supposed to do then? As humans, I mean.
Invest the money and effort elsewhere, for now. There are many other fields of scientific exploration that are very likely to yield greater return (in knowledge and utility) for less. You could fund a hundred smaller but still substantial initiatives instead of one big accelerator, and be virtually guaranteed an exciting breakthrough in a few of them.
And who knows, maybe a breakthrough in material science or high-voltage electrophysics will substantially reduce the costs for a bigger particle accelerator?
>> Yes, but we had hopes that it would lead to more. Had it led to more (something only knowable in hindsight), who knows where that would have ended us up? What if it had upended the standard model instead of reinforcing it?
>Sure, but it didn't. Which is knowledge that really should factor into the decision to build the next, bigger one.
Not this week, no. And if, next week (or next year or next decade) we resolve some of the most significant problems in modern physics, any expenditures in those fields were a waste?
You've repeatedly bashed particle physics based on your perception of a lack of progress vis-a-vis the costs, and claimed that other fields should be prioritized. Which fields? What would you hope to gain from those fields?
Is there no room for basic research that attempts to validate the bases (Standard Model, Quantum Field Theory, the marriage of the former with General Relativity, etc.) of modern physics? If not why not? Our models are definitely wrong, but they're measurably less wrong than previous models.
Should we not continue to hone/probe those models to find the cracks in the theories underpinning those models? If we don't, how will we solve these extant issues?
> Which is knowledge that really should factor into the decision to build the next, bigger one.
It was always factored in, and of course it would be in any next iteration.
> Invest the money and effort elsewhere, for now. There are many other fields of scientific exploration that are very likely to yield greater return (in knowledge and utility) for less. You could fund a hundred smaller but still substantial initiatives instead of one big accelerator. And be virtually guaranteed to have an exciting breakthrough in a few of them.
I agree with this to a large extent. I'm just not against particle accelerators as a venue for scientific advancement and in the best of worlds we could do both.
I'd not be so sure about that. Doing this research will probably allow us to answer "it works but we don't know exactly why" cases in things we use every day (e.g. li-ion batteries). Plus, while the machines are getting bigger, the understood tech is getting as small as the laws of physics allow.
If we are going to insist on "Absolutely not" path, we should start with proof-of-work crypto farms and AI datacenters which consume county or state equivalents of electricity and water resources for low quality slop.
That "probably" is really more of a "maybe" given the experience with the current big accelerators, and really needs to be weighed against the extreme costs - and other, more promising avenues of research.
> If we are going to insist on "Absolutely not" path, we should start with proof-of-work crypto farms and AI datacenters which consume county or state equivalents of electricity and water resources for low quality slop.
Who exactly is the "we" that is able to make this decision? The allocation of research budgets is completely unrelated to the funding of AI datacenters or crypto farms. There is no organization on this planet that controls both.
And if you're gonna propose that the whole of human efforts should somehow be organized differently so that these things can be prioritized against each other properly, then I'm afraid that is a much, MUCH harder problem than any fundamental physics.
>> Let the physicists build the damn thing and future society will be better off for sure.
> Absolutely not.
And what do YOU mean, "absolutely not"? You have no more say in what happens than anyone else unless you're high level politician, who would still be beholden to their constituents anyway.
And yet big science, like particle accelerators, STILL gets funding. There's plenty to go around. Sure, every once in a while a political imperative will "pull the plug" on something deemed wasteful or too expensive and maybe sometimes that's right. But we STILL have particle physics, we STILL send out pure science space missions, there are STILL mathematicians and theorists who are paid for their whole careers to study subject matter that has no remotely practical applications.
Not everything must have a straight-line monetary ROI.
I'm torn between "yes, these experiments are way too expensive and the knowledge is too niche to be really useful" and "we said this about A LOT of things and found utility in surprising ways, so it could be a gamble worth taking".
That's the problem with cutting-edge research... you don't even know if you will ever need it, or if a trillion-dollar industry is waiting for just a number to be born.
Yes, we don't really know. But at some point the gamble is just too big.
Because the costs aren't just numbers. They represent hundreds or thousands of person-years of effort. You're proposing that a large number of people should spend their entire lives supporting this (either directly as scientists, or indirectly through funding it) - and maybe end up with nothing to show for it.
And there's the opportunity costs. You could fund hundreds of smaller, yet still substantial scientific efforts in many different fields for the cost of just one particle accelerator of the size we think is sufficient to yield some new observations.
Why can't some of these trillion dollar companies invest back in the quantum tech that got them there, if it's so certain there will be benefits? Why not Apple and Nvidia fund the next particle collider, and give something back to society instead of letting tax payers fund it so billionaires can privatize the profits?
Fundamental physics research has an extremely profitable return ratio, but it takes decades to amortize. That does not work for capitalist corporations that only care about immediate profits. Even for governments this is a difficult sell, but at least they don't have to soothe shareholders every quarter. Generational projects take a different kind of economic thinking.
Is that just because there's shareholder anxiety with the unknown on if their investment will "be vested" by the time they need to pull it out for retirement?
If that's the case it seems like it might be shrewd for younger investors to buy into physics research on a 15-20 year timeline?
> Why not Apple and Nvidia fund the next particle collider, and give something back to society instead of letting tax payers fund it so billionaires can privatize the profits?
Where do you think that tax money comes from?
Apple and Nvidia are creating the economies that produce tax revenue at every step of the way.
I believe the point was these companies benefited greatly and specifically from basic research funded by the government: they should therefore "give back" in kind (vs simply contributing to the tax base and relying on a government to figure out what to fund). The reality is these companies care only about shareholder value, and the current US administration has been terminating grants and cutting funding in basic research. I think it's fair to question, in this environment, what these companies' ethical responsibilities really should be.
I think your starting premise is obviously false. Where are you getting the idea that billionaires are privatizing the profits from the particle collider? (Sounds like a talking point.) No one can guarantee that there are benefits; we can surmise that there are, but there are still massive risks associated with large-scale science experiments.
Government has always been the backbone of basic science research - no one else can reasonably bear the risk and the advances are public domain.
I'm so sick of this "good guy approach". It didn't give us progress; it gave us the likes of Watt and Intel, highly celebrated bullshitters who stopped being relevant as soon as their IP deadlock expired.
I suppose the only solution is underground science. Make enough progress in silence, don't disseminate the results, unless the superiority becomes so obvious that armed resistance becomes unthinkable.
Spending tens of billions every thirty years is pretty sustainable actually.
"Fundamental Research" may or may not pan out, but the things that happen along the way are often valuable... I don't think there's any practical applications related to generating Higgs Bosons, but it's interesting (at least for particle physicists) and there's a bunch of practical stuff you have to figure out to confirm them.
That practical work can often generate or motivate industrial progress that's generally useful. For example, LHC generates tons of data and advances the state of the art in data processing, transmission, and storage; that's useful even if you don't care about the particle work.
You could say the same thing about the world wars or porn. Any human pursuit taken to an extreme can produce knock-on effects, that isn't an argument in a vacuum to continue to fund any one area.
In the scope of international cooperation, tens of billions of dollars is not very much money. For context, the U.S. economy generates $10 billion every ~3 hours. One private company, Google, spends $10 billion in about 2 weeks.
So look at it this way. Let’s take a bunch of the smartest people alive, train them for decades, give them a month of Google money, and they’ll spend 30 years advancing engineering to probe the very fabric of reality. And everything they learn will be shared with the rest of humanity for free.
Takes like this are an optical illusion meant to create the idea that there is an insane amount of money freely floating around that is just being hoarded.
But just like that money is generated, it's also all spent.
So the actual hard part is deciding what not to spend money on so we can build some crazy physics machines with a blurry ROI instead.
> Let’s take a bunch of the smartest people alive, train them for decades, give them a month of Google money
Unpopular opinion: Google makes an insane amount of money, so they can afford this salary. CERN (or whatever your favourite research institute is), on the other hand, is no money-printing machine.
Every step towards understanding subatomic physics is a step towards cold fusion. The second we're able to understand and capture this energy, money literally doesn't exist. Infinite energy means infinite free energy, which would also abolish money from a fundamental market value perspective. I'll continually preach that we need to plan for this economically as a species because none of our current government or economic systems will survive the death of scarcity.
> Every step towards understanding subatomic physics is a step towards cold fusion.
Is it?
You are assuming cold fusion is possible. We don't know that. It might be one more step before we finally prove it is never possible.
You are also assuming that cold fusion is something this path of research will lead us to. However this might be a misstep that isn't helpful at all because it doesn't prove anything useful about the as yet unknown physical process that cold fusion needs.
We just don't know, and cannot know at this point.
Unless cold fusion allows everyone to literally pull infinite energy out of thin air with no maintenance or labor costs, I don't buy that premise. Many other utilities are effectively free already in some places, but you still need metering to deter bad actors, which is what money is. Otherwise I'm going to take all available cold fusion capacity in existence and use it to build my own artificial sun with my face on it.
My point is that you shouldn't believe in marketing claims that are obviously too good to be true, like
> The second we're able to understand and capture this [cold fusion] energy, money literally doesn't exist. Infinite energy means infinite free energy, which would also abolish money from a fundamental market value perspective.
I mean, obviously this statement is false, as we live in a finite section of the visible universe.
That said, beyond the marketing there is a reality: if cold fusion did show up, a singularity event would occur, such that predictions past that point would almost always fail, because the world would change very rapidly.
There are people in this thread saying tens of billions isn't that much in the long term (I'd agree) but there's a bigger point that comes into play whatever the price: The universe doesn't care if exploring it is expensive. You can't make a "that's not sustainable" argument to the universe and have it meet you half way. And that's who you're arguing against: not the scientists, the universe. The scientists don't decide how expensive future discoveries will be.
There is talk of a muon collider; there's also a spallation source being built in Sweden(?) and an electron "Higgs factory" (and while the LHC was built for the Higgs boson, it is not a great source for it; it was built as a generic tool that could produce and see the Higgs).
I think the engineering progress made while building those machines is maybe more relevant for practical technical development than the discoveries they make.
The problem isn't the cheaper MRI. The problem is the expert that needs to interpret the results. Detecting millions of cancers that don't actually exist doesn't help anybody.
This is a problem domain AI is good at. Have AIs do first-pass, then when they flag something an actual doctor reviews it. Then if they concur it goes to your doctor, who knows you, who can review it.
A combination to some degree. Scientists yearn to stumble upon something hitherto unexplainable that requires a new theory or validates or definitely rules out some of the more fringe theories.
While other natural sciences often suffer from an abundance of things that "merely" need to be documented, or where simulation capability is the limit, particle physics is mostly based on a theoretical framework from the middle of the 20th century that has mostly been explored.
Getting ahead in particle physics comprises measuring many arcane numbers to as high precision as possible until something doesn't line up with existing theories or other measurements anymore. More people could help with brainstorming and measuring things that don't require humongous particle accelerators.
> Scientists yearn to stumble upon something [that] definitely rules out some of the more fringe theories
The existing measurements at CERN ruled out a lot of the "more natural" variants of string theory. So far this insight has not led to a big scientific breakthrough.
It's a clickbait article title (from an otherwise good place); of course it's not dead... we are now building an understanding of all the things we don't know yet: discrepancies like yours, a unified theory, and so on.
Everybody knows we are not there yet, and nobody knows what the final knowledge set will look like, or whether it's even possible to cover it (i.e. are quarks the base layer, or can we go deeper, much deeper, all the way to Planck scales? dynamics of singularities, etc.)
But.. are you saying your vector coupling isn't explained by the existing standard model, that the measurement lacked sufficient resolution, or that existing calculations don't agree with your measurement?
Good question. It's mostly the third — but let me unpack that.
The Standard Model predicts a specific value for the weak mixing angle, which determines the electron's vector coupling. My measurement at SLAC, along with other SLD measurements, consistently preferred a slightly different value than what LEP (the European competitor experiment) found using a different technique.
The key word there is "different technique." SLD used a polarized beam of electrons — a completely novel approach at the time — which gave us direct access to the left-right asymmetry without needing to untangle final-state effects. LEP extracted the same parameter from b-quark forward-backward asymmetry. Two fundamentally different methods probing the same physics, with different systematic exposures, giving different answers.
Both experiments had good resolution. We spent enormous effort characterizing the systematics, and they're small compared to the statistical uncertainty. But the two most precise determinations of this parameter disagreed at roughly the 3-sigma level — and that disagreement has never been explained. The world average splits the difference, and the Standard Model prediction is consistent with that average, so you could say "the SM is fine" if you squint. But nobody knows why the two experiments don't agree with each other.
It could be an unidentified systematic error in one experiment. It could be that something beyond the Standard Model is subtly shifting one measurement and not the other. That ambiguity is exactly what makes it a "dangling thread" rather than a resolved question.
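For the curious, the roughly 3-sigma figure comes from dividing the difference between the two measurements by their uncertainties combined in quadrature. A minimal sketch, using approximate published central values and errors for the effective weak mixing angle (quoted from memory, so treat them as illustrative rather than authoritative):

```python
import math

# Approximate published determinations of sin^2(theta_eff)
# (illustrative values, quoted from memory):
sld, sld_err = 0.23098, 0.00026  # SLD, left-right asymmetry with polarized beams
lep, lep_err = 0.23221, 0.00029  # LEP, b-quark forward-backward asymmetry

# Tension in standard deviations, assuming independent Gaussian errors:
# |difference| divided by the quadrature sum of the two uncertainties.
tension = abs(sld - lep) / math.hypot(sld_err, lep_err)
print(f"{tension:.1f} sigma")  # -> 3.2 sigma
```

The same quadrature-combination logic underlies the "world average splits the difference" statement: the average is pulled between the two values with weights inversely proportional to the squared errors.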
Fair - that sounds hyperbolic. But my point is specific: if the weak mixing angle is shifted from the Standard Model value, one of the standard explanations is a heavier cousin of the Z boson mixing in.
Many of those models naturally include a dark matter candidate. I didn't mean to imply 'we found dark matter' — it's that the theories which could explain the discrepancy often come with one attached.
fwiw I live in [macOS emacs](https://emacsformacosx.com/) all day long for systems engineering (C/C++) and have 201 open buffers, an uptime of 57 days and ~540 MB memory usage.
I used to use that version of emacs, but performance issues on my Mac Studio made using it just untenable. I switched to Homebrew's "emacs-plus" which does not suffer, for whatever reason, the same performance issue. Based on TFA, I'm somewhat baffled as to why, but I can't argue with results.
I also live in macOS Emacs built from source, currently running 31.0.50. Linux kernel dev over TRAMP w/ clangd language server, ~300 open buffers and <500MB memory.
According to the published paper (linked in the article) Kletetschka's theory correctly predicts several experimentally measured quantities of the Standard Model. The two that jumped out at me were:
1) the weak mixing angle
2) the three particle generations and the ratio of their masses