> The 20 most-changed files in the last year. The file at the top is almost always the one people warn me about. “Oh yeah, that file. Everyone’s afraid to touch it.”
I've got my Emacs set up to display, next to every versioned file, the number of commits that file has been modified in (for the curious: a modified all-the-icons-ivy-rich + custom elisp + custom Bash scripts I wrote, and it's trickier than it seems to do in a way that doesn't slow everything down). For example, in the menu to open a file or a recently visited file: basically in every file list, in addition to its size, owner, permissions, etc., I also add the number of commits if it's a versioned file.
I like the fix/bug/broken search in TFA to see where the bugs gather.
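The two counts above (commits per file, and "where the bugs gather") boil down to ordinary git plumbing. A minimal sketch, assuming a git checkout (the Emacs/elisp glue is not shown, and the function names are mine):

```shell
#!/bin/sh
# Number of commits that touched a given file:
commit_count() {
  git rev-list --count HEAD -- "$1"
}

# Number of commits touching the file whose message mentions
# fix/bug/broken (case-insensitive), i.e. the "bug gathering" metric:
bugfix_count() {
  git rev-list --count HEAD -i --grep='fix\|bug\|broken' -- "$1"
}
```

`git rev-list --count` is cheap enough to call per file, but caching the results is what keeps a file-list UI from crawling.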
> I really enjoyed the demoscene back in the 90s. Was never a part of it but I was always fascinated by the effects and music and ascii art that these guys created.
It was quite something... I take it there are quite a few hotshots on HN who used to be in the top groups. I was in a group writing small intros for BBSes with a couple of friends, and then we'd get infinite leech/upload ratio on those BBSes. Best memory was driving through Belgium / the Netherlands / Denmark / putting the car on the boat / Sweden (Uppsala) with our computers (Amiga, Atari ST and PCs) to participate in a demo compo. Forgot its name, but in the PC category we tied for first place with Future Crew (we would have been first had I not screwed up the sound playback routine, which crashed halfway through the demo), before they had their big breakthrough on the PC demoscene. I think that was in 1991.
Cops/customs stopped us as the boat arrived in Sweden and thought we were dealing drugs: they tore the car apart and had no idea what we were talking about when we explained to them in broken English that we were going to participate in a demo compo :-/
I still have a few effects as executables but I don't have the code anymore for these.
Thankfully I still have the entire source code of a game I made in assembly (for PC / 386+) in 1991 (never published, but it's how my career started, long story), and lately I've been having a lot of fun trying to compile it again with Claude Code CLI / Sonnet. I'm using UASM, which is compatible with MASM, which I used back then. I managed to get all the utilities I wrote at the time (picture converters / sprite extractors / etc.) compiling and running (in DOSBox) but haven't managed to compile the main game yet. A few more hours with Claude Code CLI and I should get it running.
FWIW it's hilarious to go back to code from 1991, see comments in my code talking about this and that bug, ask the LLM "Find where that bug could be", and watch it find it. It's also insane to see the lack of version control: version control was copying entire directories. Copy/pasta code everywhere. And then 10,000 lines of code per source file.
What an era. Diving into that old code of mine brings me back: how the decades have flown.
P.S: funnily enough, by bad luck, a macro name I used back then happened to become a reserved keyword/macro in later assemblers. I had named a macro "incbin", and that was preventing my code from compiling in UASM: Claude Code / Sonnet 4.6 found that issue instantly.
P.P.S: 0x777 gives 1911 in decimal. RZR, legendary: probably the most legendary of them all. I probably still have a few 5"1/4 floppies (both C64 and Amiga, for I had an Amiga with a little software mod to read 5"1/4 floppies as if they were 3"1/2, for the 5"1/4 were way cheaper) with Razor 1911 "cracktros" (even if they weren't called that yet) still working (back in 2020 quite a few of my floppies were still readable: maybe half to 2/3 of them). I know it won't last, nothing will.
It's an old MS-DOS .EXE. Actually it compiles with the ".286" directive too. So I don't use protected mode.
It requires a VGA card, and those were more common on 386s IIRC; anyway, performance-wise, to run at 60 Hz it needs a 386. I never tried to run it on a 286 with a VGA card: don't know if that was even a thing.
It's funny looking at that old assembly code and seeing ax, bx, cx, dx registers and not the eax, etc. ones.
The utilities I've compiled to .EXE so far are self-contained in one file, so I just use UASM to create the .EXE directly:
uasm -mz myutil.asm
UASM v2.57 does the job in my case (note that I compile from Linux: UASM exists for several platforms/OSes).
Seeing that many are already moving to QC-resistant cryptography, and that more are shifting by the day, I've got a question: what are the implications of quantum computers going to be once the entirety of cryptography has moved to quantum-resistant schemes?
In other words: I only ever read about quantum computing in the context of breaking cryptography. But what if all cryptography moves to quantum-resistant schemes, all of it... Then what are the uses of quantum computing? Protein folding? Logistics?
Basically, so far, the main visible effect of quantum computing research has been to push many companies and projects to adopt quantum-resistant cryptographic schemes.
If, say, we've got a $10 million quantum computer that can break one 256-bit elliptic curve key in an hour... Great, EC is broken. But what if browsers, SSH, auth, etc., just about everything, moves to PQ schemes...
Then what are those quantum computers useful for?
I understand that breaking even a single 256-bit EC key in a few hours on a $$$ machine is a very big deal.
But what else are they going to be useful for? For breaking ECC doesn't help humanity. It doesn't bring anything. It only destroys.
EDIT: for example, I read things like: "Estimates are about three years to break a single 256-bit EC key on a 10,000-qubit quantum computer". What's a 10,000-qubit quantum computer going to be used for once everybody has moved to quantum-resistant algos?
To start, I am NOT an expert on the underlying technologies. But I have some exposure to the topic at let’s say more like an ecosystem level.
There are tons of hypothesized applications for quantum computing based on the expectation it will provide better simulation of quantum effects for e.g. chemistry, and offer major speedups of highly parallel simulation problems like nuclear plasma or some things in finance. Easy to Google to learn more about these.
But keeping the focus squarely on the military and intelligence services, one answer to your question is that everyone is not going to switch to post-quantum cryptography instantaneously. It’s going to take a while, especially for a long tail of “infrastructure” type things like networking gear, “internet of things,” industrial sensors, etc. Things that national intelligence services might like to break into to enable breaking into other things.
Quantum breaks may also still succeed against stored encrypted data from before the switch to PQ. And for at least a couple decades, national intelligence services have been scaling up their storage resources. So they might have a “backlog” they can work through.
Finally, things don’t have to last forever. Everything the military / government builds has an expected lifespan, and it only has to be valuable during that life span. And risks can be rare but huge in national security. So if quantum code-breaking computers only help the NSA learn a few very important things for a limited time, that still might be “worth it” to them. Or if a quantum computer doesn’t break any important cryptography, but helps advance the engineering and enables better quantum computers in the future for other anpplications—again, still might be worth it.
We can assume that organizations like the NSA have collected a huge amount of traffic that is protected by RSA or EC. So they may well have plenty of use for those quantum computers.
> Given that for a number of these benchmarks, it seems to be barely competitive with the previous gen
We're not reading the same numbers, I think. Compared to Opus 4.6, it's a big jump in nearly every single bench GP posted. They're "only" catching up to Google's Gemini on GPQA and MMMLU, but they're still beating their own Opus 4.6 results on those two.
This sounds like a much better model than Opus 4.6.
That's why I listed the ones where it is barely competitive from @babelfish's table, which is itself extracted from pages 186 and 187 of the System Card, which has the comparison with Opus 4.6, GPT 5.4 and Gemini 3.1 Pro.
Sure, it may be better than Opus 4.6 on some of those, but barely achieves a small increase over GPT-5.4 on the ones I called out.
It's higher than all other models except Gemini 3.1 Pro on MMMLU.
MMMLU is generally thought to be maxed out—as in, it might not be possible to score higher than those scores.
> Overall, they estimated that 6.5% of questions in MMLU contained an error, suggesting the maximum attainable score was significantly below 100%[1]
Other models get close on GPQA Diamond, but it wouldn't be surprising to anyone if the max possible on that was around the 95% the top models are scoring.
Barely competitive? The Mythos column is the first column.
You are the only person with this take on Hacker News; everyone else is saying "this is a massive jump". FWIW, the data you list shows the biggest jump I remember for Mythos.
> Given the limited transaction throughput, migrating all vulnerable coins would take years ...
How? I just googled: about 55 million addresses with bitcoin in them, about 144 blocks per day, about 3000 to 5000 tx per block.
In something like 100 days all the coins would be moved to other addresses.
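The back-of-the-envelope arithmetic behind that "100 days" figure, using the rough numbers from the search above (~55M funded addresses, ~144 blocks/day, ~4000 tx/block on average):

```shell
# Throughput: blocks per day times average transactions per block.
tx_per_day=$((144 * 4000))           # 576000 tx per day
# Days to move every funded address, if every tx were a migration:
echo $((55000000 / tx_per_day))      # -> 95
```

So roughly 95 days of fully saturated blocks—assuming one migration per transaction and no organic traffic competing for block space, which is of course optimistic.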
I gotta say it'd be hilarious if to speed up that migration-to-quantum-resistant-addresses process, the Bitcoin community were to finally allow bigger blocks.
EDIT: I take it that if the network had full blocks for 100 days, then "shit would happen". Maybe they should force an orderly move: e.g. only addresses ending with "3a" are eligible to be moved in a block whose hash ends with "3a", etc., to prevent congestion?
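A toy version of that suffix-matching idea—purely illustrative: real Bitcoin addresses are base58 and block hashes are hex, so an actual rule would need a common encoding (e.g. hashing the address first). The `eligible` helper and two-character suffix length are my assumptions:

```shell
#!/bin/bash
# Last two characters of a string (bash substring expansion):
suffix() { printf '%s' "${1: -2}"; }

# An address may only migrate in a block whose hash shares its
# two-character suffix, spreading migrations over ~256 "slots":
eligible() {
  [ "$(suffix "$1")" = "$(suffix "$2")" ]
}
```

With two hex characters this partitions addresses into 256 buckets, so on average only 1/256th of holders would be competing for any given block.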
The signatures would be larger than they are today. The article touches on it but doesn't give any estimates. What I read online were claims from 10 to 100 times larger than currently.
One paper claims a 60-70% throughput loss, with 59 times(!) larger storage space requirements.
> On linux, if I can't build and run software with just my user account, that software has some explaining to do.
Yup, same. The only software I install as root is what ships stock with the distro. Everything else I compile and run from user accounts (like Emacs, which I always compile from source).
> It’s truly strange that people keep citing the quality of Claude code’s leaked source as if it’s proof vibe coding doesn’t work.
It's not proof that vibe-coding doesn't work. It's proof that it's shitty, rube-goldberg, crappy code. It doesn't mean there aren't other shitty products out there (the heavy turds Microsoft has produced throughout the years do come to mind, for example).
But there was a project upvoted here recently complaining that people run into issues when quickly cut/pasting from Claude Code CLI to, say, Bash to test something, because of Unicode characters in Claude Code CLI's output. And when you realize that's because what Claude Code CLI shows you in the TUI is not the raw output of the model—there's an entire headless browser rendering the model's output to a webpage, which is then converted back to text (swapping ASCII chars for Unicode ones at that point)—you realize that some serious level of fucktardery is ongoing.
It's normal that at times people aren't going full agentic and want to cut/paste what they see. I'm not the one complaining: I saw a project complaining about it, and people are affected by that terribly dumb ASCII-to-Unicode character conversion.
When you can produce turds by the kilometer, a near infinity of turds is going to be produced.
We're not saying it's not working: I pay an Anthropic subscription and use it daily... We're saying it's a piece-of-shit of a codebase.
Shit that works is still shit.
If anyone from Anthropic is reading: STOP FUCKING CHANGING THE CHARACTERS THAT YOUR MODEL OUTPUTS.
(now of course it's just one issue out of thousands, but it'd be a nice one to fix)
Come on... I've got tools I "inherited" from my grandpa that are still fine (my brothers and I basically inherited the house, the tools were in the shed, and whenever I go there on vacation I use those tools to fix the house). I've got a screwdriver which I definitely remember using as a teenager in the late 80s to assemble the trucks on my skateboards (and which I've used for a variety of DIY jobs ever since). That screwdriver is a prized possession of mine: it's got a story. Hammers, saws, stainless steel scissors, hoses (to water the plants), multi-tools (don't know if they're stainless steel but they still look good), etc. Plenty of stuff still totally usable decades later.
You cannot compare tools that can outlast humans (like my grandpa and now myself) with an Apple watch that's going to be junk in a few years at most.
Even for things that need an oil change or a bit of lubrication once in a blue moon (like, say, a mechanical watch): it's quite different to drop a tiny bit of lubricant into a mechanical watch that's already 30 years old, compared to having to update the firmware of whatever Internet-of-insecure-and-shitty-Things gizmo that's going to be a thing of the past in a few years.
And if you really let a nice mechanical watch idle for decades, at least someone can do this:
"Restoring a Vintage Rolex Submariner with the Original Box, Paperwork... Even the Receipt!"
While I'm really not sure there are going to be people out there keeping a connected wristwatch from 2026 going in the year 2066 (not sure about the value of that either).
When The Force Awakens came out, I spent $99 on a toy version of BB-8 that you could control from your iPhone. It was a cool toy. Then after a while the app was no longer supported... Sad times.
I also owned every iPhone from the first through the iPhone 7 and kept each as I replaced the old one. After a while, none were usable due to changes in cellphone tech. And I realized keeping Li-ion batteries around was a huge fire hazard...
If it’s the same BB8 I had, there was a repo on GitHub that allowed you to control it from your computer via Bluetooth. Might be worth looking around if you want to bring it back to life
$15B in gains... Just to put things in perspective: France has a GDP of about 3.5 trillion USD and a public debt of 117% of that amount. $15B is not even a drop in the bucket.
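Working the numbers from the figures above (GDP and debt ratio as stated; the awk one-liner is just a quick sanity check):

```shell
# France: GDP ~3500 billion USD, public debt at 117% of GDP.
awk 'BEGIN {
  gdp  = 3500              # billions USD
  debt = gdp * 1.17        # ~4095 billion USD of public debt
  printf "gain as share of debt: %.2f%%\n", 15 / debt * 100
}'
# -> gain as share of debt: 0.37%
```

So a $15B gain covers well under half a percent of the outstanding debt.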
To add to France's problems: in 2024, GDP growth was 1.2%, which doesn't even keep up with inflation. And it's been like that since 2008: inflation-adjusted in USD, no growth (while both the US's and China's inflation-adjusted GDPs skyrocketed).
The EU, and the eurozone in particular, is totally losing the plot: 1 company in the top 50 companies by market cap, ASML (and it's not French).
> ... and the mix of ingredients that doesn't trigger any reflux
Ah, reflux! I've been drinking way too much coffee since forever, and recently asked my doc about it: he told me that if I had no reflux, I simply shouldn't worry about it. Some people get reflux with coffee, others don't. I drink more coffee than 99% of the population and get zero reflux. Have for decades.
It's a cool article, but in a way regular coffee became the new instant coffee: as my coffee machine is often already warm (my wife is also a heavy coffee drinker, btw), it's actually quicker to have my fully automatic machine grind the beans and make a coffee than it'd take to boil water for an instant one. Same for people doing the (very costly compared to beans) capsule coffee thing: it's ultra quick (and one of the reasons capsule coffee like Nespresso conquered so many).
I could only find an article claiming 4% drink 6+ cups per day, so a top-1st-percentile coffee drinker must be well beyond that. I'm guessing at least 2 litres per day.